On the Convergence of the WKB Series for the Angular Momentum Operator

In this paper we prove a recent conjecture [Robnik M and Salasnich L 1997 J. Phys. A: Math. Gen. 30 1719] about the convergence of the WKB series for the angular momentum operator. We demonstrate that the WKB algorithm for the angular momentum gives the exact quantization formula if all orders are summed.

Introduction

The semiclassical methods used to solve the Schrödinger problem are of extreme importance for understanding the global behaviour of eigenfunctions and energy spectra, since they allow one to obtain analytic expressions. The leading semiclassical approximation (torus quantization) is just the first term of a certain ℏ-expansion, which is called the WKB expansion (Maslov and Fedoriuk 1981). Recently it was observed (Prosen and Robnik 1993, Graffi, Manfredi and Salasnich 1994, Robnik and Salasnich 1997a; in the following the latter work will be referred to as I) that torus quantization generally fails to predict the individual energy levels (and the eigenstates) within a vanishing fraction of the mean energy-level spacing. This conclusion is believed to be correct for general systems, including chaotic ones. Therefore, a systematic study of the accuracy of the semiclassical approximation is very important, especially in the context of quantum chaos (Casati and Chirikov 1995, Gutzwiller 1990). Since this is a difficult task, it has been attempted for simple systems, where in a few cases even exact solutions may be worked out (Dunham 1932, Bender, Olaussen and Wang 1977, Voros 1993, Robnik and Salasnich 1997a). Robnik and Salasnich (1997b) (this work will be referred to as II) dealt with the WKB expansion for the Kepler problem: it was proved that an exact result is obtained once all terms are summed. In particular, the torus quantization (the leading WKB term) of the full problem is exact, even though the individual torus quantizations of the angular momentum and of the radial Kepler problem separately are not, because the quantum corrections (i.e. terms beyond torus quantization) compensate each other term by term. In paper II Robnik and Salasnich had to make a conjecture about the higher terms of the WKB expansion. This conjecture is perfectly reasonable but was not rigorously proved. In this work our goal is to prove that the same result as in II can be reached rigorously by means of a slightly modified procedure. In the framework of the supersymmetric semiclassical quantization (SWKB), Comtet, Bandrauk and Campbell (1985) obtained, at the leading order, the exact quantization of the radial part of the Kepler problem by using the correct value L² = ℏ²l(l + 1). In the last section we complete the result of Comtet, Bandrauk and Campbell (1985): we show that the exact quantization of the angular momentum is also obtained at the first order of the SWKB expansion.

Eigenvalue problem for the angular momentum

The eigenvalue equation of the angular momentum operator is the standard one (Landau and Lifshitz 1977). After the substitution we obtain Eq. (4). We shall consider the azimuthal quantum number m as fixed. As is well known, Eq. (4) is exactly solvable. Its eigenvalues and eigenfunctions can be found in any textbook of quantum mechanics (see, e.g., Landau and Lifshitz 1977): the former are λ² = l(l + 1), with l ≥ m; the latter are the associated Legendre polynomials. The WKB expansion for Eq. (4) was studied in II; it was shown that the higher-order terms quickly increase in complexity.
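The displayed equations of this section were lost in extraction. As a hedged reconstruction (the standard Landau and Lifshitz form, consistent with the eigenvalues λ² = l(l + 1) and the fixed azimuthal quantum number m used below), the polar-angle equation referred to here as Eq. (4) presumably reads, after substituting Y(θ, φ) = T(θ) e^{imφ} into the eigenvalue problem for the angular momentum operator,

\[ \frac{1}{\sin\theta}\,\frac{d}{d\theta}\!\left(\sin\theta\,\frac{dT}{d\theta}\right) + \left(\lambda^{2} - \frac{m^{2}}{\sin^{2}\theta}\right) T(\theta) = 0 , \qquad 0 < \theta < \pi ,\]

with eigenvalues λ² = l(l + 1), l = m, m + 1, m + 2, …, and eigenfunctions the associated Legendre polynomials P_l^m(cos θ).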
The method of solution is to find an analytical recursive expression for all the higher-order terms, to sum the entire infinite series, and to show that it converges to the exact result. Instead of the original function T we use the associated function F; from this substitution we obtain Eq. (6). This equation has the standard form of the one-dimensional Schrödinger equation with ℏ = 2M ≡ 1, and its eigenvalues are (λ² + 1/4). We make the change of variable x = θ + π/2 and set U = m² − 1/4, E = λ² + 1/4. Then Eq. (6) becomes Eq. (7). This is the main result of our paper, because the problem of the WKB quantization of Eq. (7) has already been dealt with in I. As we shall show, from I one proves that: i) Eq. (7) can be solved exactly; ii) a semiclassical expansion of (7) may be carried out to all orders (i.e. all terms may be computed analytically and summed exactly); iii) the exact and the semiclassical eigenvalues are the same.

WKB series for the angular momentum

We observe that ℏ does not appear in Eq. (7), so an expansion in powers of this parameter is not possible. To overcome this difficulty a small parameter ε is introduced in Eq. (8). This parameter ε, which will be set to 1 at the end of the calculation, formally plays the same role as ℏ as an ordering parameter. It has already been used in II to deal with the WKB expansion of (4). The formal WKB expansion for F yields a recursion relation for the phases, Eqs. (10)–(11). The quantization condition, Eq. (12), is obtained by requiring that the wavefunction be single-valued, where n_θ is an integer. All odd terms beyond the first vanish when integrated along the closed contour, since they are exact differentials (Bender, Olaussen and Wang 1977): ∮ dσ_{2k+1} = 0 for k > 0. It may be proved by induction (see I) that the solution of (10)–(11) takes a closed form with f(n) = 0 for n even and f(n) = 1 for n odd, g(n) = (3n − 2)/2 for n even and g(n) = (3n − 3)/2 for n odd, C_{0,0} = 1, C_{1,0} = U/2, C_{2k,0} = (−1)^k (U/2)^{2k} \binom{1/2}{k}, and C_{2k+1,0} = 0 for k > 0. It is not necessary to know the values of the other coefficients, since one finds that all the terms proportional to C_{n,l} with l > 0 disappear after integration. The integral (12) then becomes (see I for more details) a closed expression; because E = λ² + 1/4 and U = m² − 1/4, and with the position l = n_θ + m, we obtain λ² = l(l + 1), which is the expected result. Note that the WKB series is convergent for |x| > 1, and thus for m > 0. We observe that the ε-expansion is equivalent to the 1/U-expansion (this is clear from the structure of Eq. (8)). In the limit U → ∞ it is easy to obtain the WKB expansion to first order, which gives λ² = (l + 1/2)², i.e. the torus quantization of the angular momentum (Langer 1937).
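Since the displayed equations of this section were likewise lost, the final resummation step can be sketched as follows. This is a hedged reconstruction, not a quotation of paper I, but it is consistent with the quantities defined above (E = λ² + 1/4, U = m² − 1/4, l = n_θ + m) and with the exact spectrum. Writing the surviving even-order contributions as a binomial series, \sum_{k\ge 0} \binom{1/2}{k} t^{k} = \sqrt{1 + t} with t = ε²/(4U), the all-order quantization integral for −F'' + (U/cos²x)F = EF resums to a closed form which, at ε = 1, gives

\[ \pi\left(\sqrt{E} - \sqrt{U + \tfrac{1}{4}}\right) = \left(n_\theta + \tfrac{1}{2}\right)\pi , \qquad n_\theta = 0, 1, 2, \dots \]

With E = λ² + 1/4 and U = m² − 1/4 this becomes √(λ² + 1/4) = n_θ + m + 1/2, i.e. λ² = l(l + 1) with l = n_θ + m, the exact result. The binomial series converges for |t| < 1, i.e. 4U > 1, i.e. m ≥ 1, which matches the convergence condition (m > 0) stated above.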
SWKB quantization of the angular momentum

To perform the supersymmetric semiclassical quantization (SWKB) of Eq. (4) or (6), it is necessary to know the ground-state wavefunction T₀(θ) = sin^m(θ) and its eigenvalue λ₀ = m(m + 1). We can then define the supersymmetric (SUSY) potential Φ. From Φ the two SUSY partner potentials and Hamiltonians may be defined (Eqs. (22)–(24)). It is possible to prove (see Junker 1996 and references therein for details) that: i) the ground-state energy of H₋ vanishes; ii) all other eigenvalues E₋ of H₋ coincide with those of H₊; iii) the spectrum of H₋ and that of Eq. (7) differ by a constant (Eq. (26)), where λ₀ = m(m + 1) is the eigenvalue of the ground state of Eq. (4). We now apply the SWKB formalism to H₋ of Eq. (24). At the leading order one obtains the quantization condition of Comtet, Bandrauk and Campbell (1985), also referred to as the CBC formula. We observe that on the left-hand side of this formula Φ² appears instead of the full potential V₋. From Eqs. (22), (27) and (28) one easily evaluates the quantization integral, with turning points b = −a = arctan[√E₋/(m + 1/2)]. By inverting this formula we obtain E₋ as a function of n_θ and, by using Eq. (26) with λ₀ = m(m + 1), we get λ² = l(l + 1), which yields the exact quantization, with the position l = n_θ + m.

Conclusions

The three-dimensional central potentials are fundamental in physics, and their semiclassical treatment has implications in many fields: factorization properties of one-dimensional potentials (Infeld and Hull 1957), and general properties of the semiclassical quantization of systems with more than one degree of freedom, both integrable and non-integrable. Nevertheless, until the paper of Robnik and Salasnich (1997b), no detailed study had been made of half of the problem, the WKB quantization of the angular part. Our present paper completes that work because it gives a rigorous proof of the convergence of the WKB series to the exact result. Moreover, in the last section we have demonstrated that, by using SUSY quantum mechanics, the eigenvalue problem of the angular momentum operator can be solved exactly at the lowest order of the semiclassical approximation. LS thanks Marko Robnik for many enlightening discussions. FS has been supported during this work by a grant of the Italian MURST.
Prevalence of Trachoma in Northern Benin: Results from 11 Population-Based Prevalence Surveys Covering 26 Districts

Aims: We sought to evaluate trachoma prevalence in all suspected-endemic areas of Benin.

Methods: We conducted population-based surveys covering 26 districts grouped into 11 evaluation units (EUs), using a two-stage, systematic and random, cluster sampling design powered at EU level. In each EU, 23 villages were systematically selected with probability proportional to size; 30 households were selected from each village using compact segment sampling. In selected households, we examined all consenting residents aged one year or above for trichiasis, trachomatous inflammation – follicular (TF), and trachomatous inflammation – intense. We calculated the EU-level backlog of trichiasis and delineated the ophthalmic workforce in each EU using local interviews and telephone surveys.

Results: At EU level, the TF prevalence in 1–9-year-olds ranged from 1.9 to 24.0%, with four EUs (incorporating eight districts) demonstrating prevalences ≥5%. The prevalence of trichiasis in adults aged 15+ years ranged from 0.1 to 1.9%. In nine EUs (incorporating 19 districts), the trichiasis prevalence in adults was ≥0.2%. An estimated 11,457 people have trichiasis in an area served by eight ophthalmic clinical officers.

Conclusion: In northern Benin, over 8000 people need surgery or other interventions for trichiasis to reach the trichiasis elimination threshold prevalence in each EU, and just over one million people need a combination of antibiotics, facial cleanliness and environmental improvement for the purposes of trachoma's elimination as a public health problem. The current distribution of ophthalmic clinical officers does not match surgical needs.

Introduction

The year 2020 is the target date for the global elimination of trachoma as a public health problem. 1 As the first step to achieving this goal, mapping is needed to assess the endemicity of trachoma and determine the need for interventions. 2 In 2012, the World Health Organization (WHO) estimated that 21 million people had active (inflammatory) trachoma (trachomatous inflammation – follicular, TF, and/or trachomatous inflammation – intense, TI), 3 and more than 7 million people had in-turned eyelashes (trachomatous trichiasis) that could lead to corneal opacity and blindness. 4 Sub-Saharan Africa bears the largest burden of disease, with more than 80% of the cases of trachoma. 5 An assessment of trichiasis surgeries conducted in northern Benin in 2013 suggested that some districts were suspected to be trachoma-endemic at a level that constitutes a public health problem, but in the absence of survey data, the need for public-health-level interventions was not known. We sought to map suspected endemic areas in Benin in order to decide if and where to implement trachoma control efforts.

Methods

This series of surveys was undertaken as part of the Global Trachoma Mapping Project (GTMP), 6 an international effort to complete trachoma mapping in all potentially endemic populations. Benin, a country of 11 million inhabitants sharing borders with Nigeria (to the east), Togo (to the west), and Burkina Faso and Niger (to the north), is divided into 12 departments consisting of 77 districts.
A national audit of data on the conduct of surgery for trichiasis, conducted in 2013, identified the districts to be mapped for trachoma: only 26 districts, all in the northern region, were identified, and there was no evidence or suspicion of trachoma being endemic in the remainder of the country. The total population of the northern region is 3,382,083. 7 Urban areas (Parakou and Natitingou) were not considered trachomasuspect and were not included in the survey. Some districts were combined with adjacent, geographically and socioculturally similar districts to create 11 separate evaluation units (EUs) for survey purposes; the number of districts in each EU ranged from one to four. In general, WHO recommends that trachoma surveys be implemented in populations of between 100,000 and 250,000 people; 8 the estimated 2014 populations of our EUs ranged from 114,659 (N'dali, Borgou Département) to 542,605 (Bassila Copargo Djougou Ouake, Donga Département) ( Table 1). (The name of each EU (Tables 1 and 2) is a concatenation of the names of its constituent districts.) Each of the 11 separate cross-sectional populationbased surveys conducted was designed to obtain EUlevel prevalence estimates for TF in children aged 1-9 years; and trichiasis in persons aged 15 years and above. The GTMP survey methods that were used in this study are described in detail elsewhere. 9 Briefly, the target sample size was calculated to estimate, at EU level, an expected TF prevalence of 10% in 1-9-year-olds with an absolute precision of 3%, using a design effect of 2.65 and inflation by a factor of 1.2 to allow for non-response. The sample size of 1222 children for each EU was chosen as follows: in each EU, we systematically selected 23 clusters (villages) by probability proportional to size sampling. We then (because of the absence of household registers) selected 30 households in each selected village by compact segment sampling; this entailed drawing a map of the village and creating approximately equally-sized segments of about 30 households, and selecting one segment by random draw. Going house to house, field teams invited all residents of households in selected segments aged 1 year and above to be examined. For survey purposes, a household resident was defined as a person who, for the previous month or longer, shared one or more premises connected to the place where the head of the household usually sleeps (regardless of their relationship to the head of the household) and who had meals more than 3 nights per week at that location. We examined all consenting residents for evidence of trichiasis, TF and TI, using 2.5× magnifying loupes. Additionally, we collected Global Positioning System coordinates outside the most prominent building of each household, and household-level data on access to water, sanitation, and hygiene. 9 For individuals found to have trichiasis, to obtain information on previous interaction with the health care system, we asked questions to determine whether health workers had previously recommended trichiasis surgery or epilation. To prepare for the surveys, in November 2013, two ophthalmologists (AAB and JEB) underwent training and certification as GTMP grader trainers in Ethiopia. They then conducted the training of survey graders and recorders in Benin, using version 2 of the GTMP training protocols. 9,10 All survey graders were ophthalmic clinical officers (OCOs), dedicated eye care professionals with at least two years of training in eye care. 
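Returning briefly to the sample-size calculation described earlier in the Methods, the target of 1222 children per EU follows directly from the stated inputs (expected TF prevalence 10%, absolute precision 3%, design effect 2.65, 20% inflation for non-response). A minimal Python sketch of this arithmetic (the function name and rounding convention are ours):

import math

def gtmp_sample_size(p=0.10, precision=0.03, deff=2.65, inflation=1.2, z=1.96):
    # Simple-random-sample size for a proportion, then scaled by the design
    # effect and a non-response inflation factor.
    n_srs = (z ** 2) * p * (1 - p) / precision ** 2   # about 384 children under SRS
    return math.ceil(n_srs * deff * inflation)

print(gtmp_sample_size())  # -> 1222, the per-EU target quoted in the text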
A minimum kappa of 0.7 for the diagnosis of TF in an inter-grader agreement test with 50 eyes of 50 children was required to pass the grading examination. Of the 14 OCOs enrolled in the training programme, 11 were certified as GTMP graders and became members of survey teams. Survey recorders completed training on data capture using the GTMP-LINKS app on Android smartphones. Prior to fieldwork, we undertook a pilot test in a village not selected for any of the surveys. The 11 teams (each containing one grader and one recorder) were then deployed to the field from March to April 2014. In November 2014, we re-assessed the EU of Tchaourou because an inadequate number of 1-9-year-old children had been examined there. In April 2015, we surveyed Natitingou rural because adjacent districts were endemic for trachoma, based on the 2014 data. Therefore 13 further clusters were added to the previously sampled clusters for the Natitingou EU using the same GTMP methodology. We uploaded all data to the GTMP secure server, and undertook standardized analyses to estimate the prevalence of TF in children aged 1-9 years and trichiasis in adults aged 15 years and above, in each EU. We standardized prevalence estimates for age and sex based on rural Benin population data, as previously described. 9 We calculated the backlog of trichiasis cases in each EU (calculated by multiplying the prevalence of trichiasis by the census population of adults). For planning purposes, we also calculated the current number of people requiring surgery or other interventions for trichiasis in order to reach the WHO elimination threshold of less than 0.2% of adults having trichiasis (defined as the backlog minus the elimination threshold). 11 Both the trichiasis backlog and the number needing trichiasis management to reach the elimination threshold include people with trichiasis, whether they had any previous interaction with the health system or not. Finally, we undertook an assessment of the eye care human resources potentially available to manage trichiasis in northern Benin. This included a standard interview administered during a visit to 9 of the 14 OCOs in northern Benin, asking for the names, locations and contact details of ophthalmic colleagues, identifying ophthalmic personnel working in the region using snowball sampling. The five OCOs who could not be visited were contacted by mobile phone. Ethical clearance was obtained from the Comite National d'Ethique pour la Recherche en Sante (070/ MS/DC/SGM/DFR/CNERS/SA), and from the ethics committee of the London School of Hygiene & Tropical Medicine (6319). Prior to examination, verbal informed consent was obtained from adults for enrollment of themselves and for children (aged under 15 years) in their care. Any participant found to have TF and/or TI was treated with 1% tetracycline eye ointment, and individuals with trichiasis or other ocular conditions were referred to the nearest eye unit. Results Across the 11 EUs, in total, 7719 households were surveyed in 266 clusters. A total of 46,471 people of all ages were examined (range per EU: 2449-5630), including 18,085 adults (6422 males; 11,663 females) aged 15+ years, and 23,006 children (11,678 males; 11,328 females) aged 1-9 years ( Table 1). The overall participation rate was 91.5%, ranging across EUs from 87.8% to 99.2% (Table 1). The major reason for nonparticipation was being away from the village at the time of the survey. The EU-level age-adjusted prevalence of TF in 1-9year-olds ranged from 1.9% to 24.0%. 
Four EUs, comprising eight districts, had TF prevalences above 5%, warranting intervention (Table 1 and Figure 1). The age-adjusted prevalence of TI in 1-9-year-olds was <2% in all EUs except for the EU comprising the districts of Boukoumbe, Toukountouna, and Natitingou in Atacora Department, in which the prevalence was 5.4%. The age-and sex-adjusted prevalence of trichiasis in adults ranged from 0.12% to 1.92% (Table 2 and Figure 2). Nine EUs (19 districts) had trichiasis prevalences greater than 0.2% in adults, thus requiring public health-level surgery interventions. The EU surveyed in the department of Donga (comprising the districts Bassila, Copargo, Djougou, and Ouake) and one of the EUs in the department of Atacora (comprising the districts Tanguiéta, Cobly, and Materi) had trichiasis prevalences in adults of <0.2%. The estimated total backlog of trichiasis in northern Benin is 11,457 people, with an estimated 8155 individuals needing to be offered appropriate management to meet the trichiasis component of the definition of "elimination of trachoma as a public health problem." Among 282 people identified with trichiasis in the survey, 71 (25%) had had previous trichiasis surgery, 57 (80%) of whom came from the EU comprising the districts of Boukoumbe, Toukountouna, and Natitingou in Atacora Department. In the 26 districts in northern Benin there are, at present, 13 OCOs and four ophthalmologists. OCOs received training at a number of training centers outside of Benin; experience in trichiasis surgery ranged from none to only a few cases. Supervision of OCOs for trichiasis surgery is limited to the OCOs in the Parakou area. Five of the 13 OCOs and the four ophthalmologists are in the urban area of Parakou. There is no OCO in Banikoara or Tchaourou, the two EUs with the highest numbers of people with trichiasis needing intervention. Discussion These are the first population-based prevalence surveys of trachoma in Benin, and their findings are now being used to facilitate planning and implementation of the SAFE strategy (surgery, antibiotics, facial cleanliness, environmental improvement) 12 in endemic districts. Our work indicates that these endemic districts will require different combinations of interventions, as prevalences of TF and trichiasis do not invariably exceed the respective elimination thresholds in the same places, as also observed in other trachoma endemic countries, including Cameroon. 13 Three of the countries bordering Benin (Nigeria, Niger, and Burkina Faso) are endemic for trachoma, while available evidence suggests that active trachoma is not a public health problem in Togo. 14 The Nigerian states bordering northern Benin (Kwara, Niger, 15 and Kebbi) have a few trachoma endemic districts, but prevalences there are not high. Large areas of the Niger and Burkina Faso 16 border zones are national parks and forests in which population density is very low. In fact, 2015 data available at www.trachomaatlas. org suggest that, of districts bordering Benin, only Dandi Local Government Area of Kebbi State, Nigeria and Pama Department of Burkina Faso had most recent TF prevalence estimates above the elimination threshold. There is one transit route to Burkina Faso through the districts of Natitingou and Toukountouna. Boukoumbé is far from this transit route. In these three districts, there is limited access to water, and sanitation is generally poor; this may be part of the reason for the high prevalence of TF in this area. 
The general absence of trachoma in the southern-most part of the northern region surveyed here probably indicates a low likelihood of disease being a public health problem further south. In addition, in 2012, the surgical records of hospitals throughout Benin were reviewed; no trichiasis cases had been operated on during the previous 2 years. As expected, the burden of trichiasis is mostly found in areas with TF; this offers opportunities for case finding during mass drug administration and other community-based interventions. Nevertheless, there are 4246 people with trichiasis (37% of the backlog calculated here) needing management who live in areas in which the mass distribution of antibiotics is not indicated. The two EUs of Banikoara and Tchaourou account for the highest number of people needing trichiasis management (3896 or 48% of the total requirement) yet there is no OCO in either. Training (or re-training) the OCOs in Natitingou and Malanville to undertake management of trichiasis is a priority. While the long-term approach will be to place a trained OCO in Banikoara and Tchaourou and support them to provide trichiasis surgical services, a short-term measure may be to organize outreach from neighboring areas in which OCOs are based, after they have been appropriately trained and certified. 17 It is important to note that the 3302 people with trichiasis constituting the difference between the backlog and the number that must be treated to reach elimination prevalence thresholds all still need interventions against trichiasis in order to prevent trachomatous visual impairment. An ongoing, funded strategy to deliver services to individuals with incident trichiasis is a requirement for validation of trachoma elimination. There were two EUs (four districts) in which TF prevalences were above 10%, thus indicating the need for at least 3 years of annual mass drug administration of antibiotics and F and E interventions. In an additional two EUs (four districts) in which prevalences of TF were between 5 and 9.9%, it is recommended that implementation of facial cleanliness promotion and environmental improvement are supplemented by one round of mass azithromycin administration. In total, over one million people live in areas that will require intervention to address active trachoma. If these measures are successful at reducing the prevalence of TF in children at impact survey, and if TF prevalence subsequently remains below 5% during surveillance, Benin will be on the pathway to elimination of trachoma as a public health problem. The major strengths of our series of surveys are the use of gold-standard survey methodologies for estimating prevalence, good geographical sample coverage in each EU, a high participation rate, rigorous training and supervision, and standardized approaches to data cleaning and analysis. The relatively large populations of some EUs, 8 and the fact that we did not include examination for trachomatous conjunctival scarring 3 in eyes that had trichiasis, 18 are both potential limitations. Although it is possible that some small pockets of trachoma were missed in surveyed districts, we believe we have adequately delineated the areas of Benin in which trachoma affects sufficiently large populations to be considered a public health problem, and can now confidently chart a course towards trachoma elimination.
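To make the backlog arithmetic used in this paper concrete, the definitions can be written out as a small illustrative sketch (the function names are ours; the figures below are the published national totals rather than EU-level inputs): the backlog is the trichiasis prevalence multiplied by the adult census population, and the number needing management to reach the elimination threshold is the backlog minus 0.2% of the adult population.

def trichiasis_backlog(tt_prevalence, adult_population):
    # People estimated to have trichiasis in an EU (prevalence as a fraction, e.g. 0.005).
    return tt_prevalence * adult_population

def need_to_reach_threshold(backlog, adult_population, threshold=0.002):
    # People who must be managed so that trichiasis prevalence falls below 0.2% of adults.
    return max(0.0, backlog - threshold * adult_population)

# Consistency check on the published totals for northern Benin:
total_backlog = 11457      # estimated people with trichiasis
need_management = 8155     # people to manage to reach the elimination threshold
print(total_backlog - need_management)  # -> 3302, the residual figure quoted in the Discussion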
Prevalence and Subtype Distribution of Blastocystis sp. in Diarrheic Pigs in Southern China

Blastocystis sp. is a common pathogen that infects the intestines of humans and animals, posing a threat to public health. However, little information on the prevalence and subtypes of Blastocystis sp. in diarrheic pigs in China is available. Herein, 1254 fecal samples were collected from diarrheic pigs on 37 intensive pig farms in Hunan, Jiangxi, and Fujian provinces in southern China, and the prevalence and subtypes of Blastocystis sp. were investigated. Blastocystis sp. was detected by a PCR assay that amplified the small subunit rRNA (SSU rRNA) gene. The overall prevalence of Blastocystis sp. was 31.4% (394/1254), including 21.5% (66/307), 33.1% (99/299), 58.9% (56/95), and 31.3% (173/553) in suckling piglets, weaned piglets, fattening pigs, and sows, respectively. Moreover, age and region were significantly related to the prevalence of Blastocystis sp. (p < 0.05). Four Blastocystis sp. subtypes were identified: ST1, ST3, ST5, and ST14. The predominant subtype was ST5 (76.9%, 303/394). To our knowledge, ST14 was found in pigs in China for the first time. The human-pathogenic subtypes observed in this study (ST1, ST3, ST5, and ST14) indicate a potential threat to public health. These findings provide new insight into the genetic structure of Blastocystis sp.

Introduction

Blastocystis sp. is a zoonotic intestinal protozoan with a worldwide distribution. The host range of Blastocystis sp. is extensive, including humans, non-human primates, mammals, birds, fish, annelids, arthropods, reptiles, and amphibians [1]. Since the term "Blastocystis" was introduced by A. Alexieff in 1911, there has been a consensus that Blastocystis sp. is transmitted through the oral-fecal route, although its pathogenicity has been controversial [2,3]. Blastocystis sp. infection is in some cases thought to be associated with clinical symptoms, including abdominal pain, diarrhea, nausea, irritable bowel syndrome (IBS), and inflammatory bowel disease (IBD), which cause significant physical discomfort to humans and animals [4-7]. Furthermore, in an infected host, Blastocystis sp. infection may concur with other zoonotic parasites such as Giardia duodenalis and Cryptosporidium spp. [8-10]. Hence, investigation of the prevalence and subtypes of Blastocystis sp. plays an important role in tracking and preventing the transmission of this protist. Previous studies have reported a high prevalence of Blastocystis sp. in humans, domestic animals, or wild animals in several provinces of China [22]. The prevalence of Blastocystis sp. in pigs has been reported in Shaanxi, Guangdong, Zhejiang, Heilongjiang, Jiangxi, and Yunnan provinces and the Xinjiang Hui Autonomous Region in China [6,7,9,17,23,24]. However, there is no report of Blastocystis sp. infection in pigs in Hunan and Fujian provinces in China. Although there was a previous report of pig infection with Blastocystis sp. in Jiangxi Province [23], the sample size was small and might not have reflected the true situation of pigs infected with Blastocystis sp. Therefore, this study examined the prevalence of Blastocystis sp. and its subtypes in diarrheic pigs of different age groups and regions in three southern provinces of China.

Prevalence of Blastocystis sp. in Diarrheic Pigs

In the present study, 394 of the 1254 fecal samples (31.4%) were positive for Blastocystis sp. (Tables 1 and 2).
Significant differences (p < 0.05) in the prevalence of Blastocystis sp. in pigs were observed among the three investigated provinces and among different cities of Jiangxi Province (Tables 1 and 2). Prevalence also differed significantly among age groups (p < 0.001) (Table 1). Furthermore, fattening pigs had 5.24 times (95% CI 3.21-8.57) the risk of infection with Blastocystis sp. of suckling piglets.

Phylogenetic Analysis of Blastocystis sp.

Phylogenetic analyses showed that the sequences of the four subtypes (ST1, ST3, ST5, and ST14) obtained from pigs in this study each clustered into one branch with other ST1, ST3, ST5, and ST14 sequences obtained from other animals or humans, with high bootstrap values (Figure 1). Notably, the ST1 sequences (MW767060-MW767062) obtained from pigs in this study were closely related to an ST1 sequence (MK719635) obtained from humans (Figure 1). Furthermore, the ST14 sequences obtained from pigs in this study showed a closer genetic relationship with other ST14 sequences from ruminants (Figure 2). (In Figures 1 and 2, sequences marked with black triangles are those obtained in this study, with GenBank accession numbers shown to the right of the triangles.)

Discussion

Although Blastocystis sp. has been researched for more than a century, its pathogenicity remains controversial [6,7]. There is not enough evidence for the clinical importance of Blastocystis sp., but its potential pathogenicity has long been studied [9]. Therefore, extensive investigation of Blastocystis sp. may improve the understanding of its pathogenicity and lead to effective prevention and control. In previous reports, the prevalence of Blastocystis sp. in sows was generally significantly higher than that in fattening pigs, but in this study the prevalence in fattening pigs was higher than that in the other growing stages, which is consistent with the results of some other studies [7,9,32]. This difference may have been caused by rearing conditions. Furthermore, the infection rate of Blastocystis sp. was lowest in suckling piglets compared with the other growing stages, which might be related to the protective role of maternal antibodies. Compared with previous reports [5,9,15,21,32], the infection rate of ST3 in this study was higher (Table 1). According to previous studies, ST5 was widely distributed in all age groups of pigs, and only ST5 was detected in sows [6,9]. However, we found ST1, ST3, ST5, and ST14 in all age groups. Diarrhea often destroys the structure of the intestinal flora and alters the intestinal environment, which might lead to a change in the dominant subtype [32]. This might explain the higher positive rate of ST3 compared to ST1 in diarrheic pigs in this study. Although the available data for ST14 in pigs are very limited [20], ST14 has been reported in humans [11], suggesting that ST14 has zoonotic potential and that pigs may be a link in the transmission of ST14. Mixed infections were not detected in this investigation. While primers for subtype-specific detection are now available, only a limited number of subtypes can currently be detected in this way [17].
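As a brief aside on the statistics quoted above, the age-group comparison can be reproduced from the published counts with a standard chi-square test of independence; the following minimal sketch (our own re-analysis with scipy, not the authors' SPSS output) uses the positives and totals reported for each age group.

from scipy.stats import chi2_contingency

# Blastocystis sp. positives / totals by age group, as reported in the Results.
groups = {
    "suckling piglets": (66, 307),
    "weaned piglets": (99, 299),
    "fattening pigs": (56, 95),
    "sows": (173, 553),
}

# 2 x 4 contingency table: rows = (positive, negative), columns = age groups.
table = [
    [pos for pos, n in groups.values()],
    [n - pos for pos, n in groups.values()],
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p is far below 0.001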
Phylogenetic analysis revealed that the ST1 sequences obtained from pigs in this study were closely related to a human-derived ST1 sequence (MK719635) (Figure 1), which further supports pigs as a possible reservoir for human infection with Blastocystis sp. The ST14 sequences obtained from pigs and the ST14 sequences isolated from ruminants clustered together (Figure 2). Since ST14 is more prevalent in ruminants, we speculate that the source of ST14 infection in pigs might be ruminants. These findings should be confirmed in future molecular epidemiological studies.

Sampling

From 2015 to 2019, a total of 1254 fresh fecal samples from pigs with diarrhea were collected from Jiangxi Province (n = 1036), Hunan Province (n = 83), and Fujian Province (n = 135) (Table 1 and Figure 3). Only pigs with dilute feces or watery diarrhea were sampled, and all samples were collected by anal swab. Among these fecal samples, 307 were from suckling piglets (<21 days), 299 from weaned piglets (21-70 days), 95 from fattening pigs (71-180 days), and 553 from sows (>180 days) (Table 1). All fecal samples were placed in a cryopreservation box with an adequate number of ice bags immediately after sampling and were then stored at −80 °C until DNA extraction.

Genomic DNA Extraction and PCR Amplification

Approximately 300 mg of each fecal sample was washed 3 times with distilled water by centrifuging at 13,000× g for 5 min to remove impurities. The remaining sediment was used to extract genomic DNA using the E.Z.N.A.® Fecal DNA Kit (D4015-02, Omega Bio-Tek Inc., Norcross, GA, USA). The genomic DNA was stored at −20 °C for further analysis. The genomic DNA samples were screened for Blastocystis sp. by PCR amplification of the SSU rRNA gene, with a target fragment size of roughly 600 bp, using the primers BhRDr (5′-GAGCTTTTTAACTGCAACAACG-3′) and RD5 (5′-ATCTGGTTGATCCTGCCAGT-3′) [9]. The 25 μL reaction system contained 2 μL of genomic DNA, 0.2 mM of dNTP mixture, 1.5 mM of MgCl₂, 2.5 μL of 10× Ex Taq buffer, 1.25 U of TaKaRa Ex Taq® (Takara Bio Inc., Dalian, China), and 0.25 μL of primers (10 mol/μL). The PCR reaction conditions were as follows: initial denaturation at 94 °C for 5 min; 35 cycles at 94 °C for 45 s, 59 °C for 45 s, and 72 °C for 1 min; and a final extension at 72 °C for 3 min. Each reaction included a positive control (DNA from Blastocystis sp.) and a negative control (reagent water). The final PCR products were identified by 2% (w/v) agarose gel electrophoresis and stained with ethidium bromide.

Sequence Analysis

The approximately 600 bp PCR product of each sample was recovered and purified by Tsingke Biotechnology Co., Ltd. for sequencing using the Sanger method. The subtypes of Blastocystis sp. were identified by aligning the obtained sequences with corresponding sequences available in the GenBank database (http://www.ncbi.nlm.nih.gov/GenBank/, accessed on 21 March 2021) using Clustal X 2.1 (http://www.clustal.org/clustal2/, accessed on 28 March 2021) [26]. The maximum likelihood (ML) method with a Kimura 2-parameter model in MEGA 7.0
(http://www.megasoftware.net, accessed on 1 September 2021) [27] was used to construct a phylogenetic tree with 1000 bootstrap replicates, with U21338 set as the out-group (Figures 1 and 2).

Statistical Analysis

Data obtained in this study on the prevalence of Blastocystis sp. in different regions and age groups were analyzed with the chi-square test (χ²) using SPSS version 25.0 (IBM SPSS Inc., Chicago, IL, USA); differences were considered significant only when the obtained p value was less than 0.05.

Conclusions

In the present study, a total of 1254 fecal samples from diarrheic pigs in three provinces in southern China were examined for the prevalence and subtypes of Blastocystis sp. This is the first report of Blastocystis sp. infection in pigs in Hunan and Fujian provinces. Three zoonotic subtypes (ST1, ST3, and ST5) and one potentially zoonotic subtype (ST14) were identified, and ST14 was detected for the first time in pigs in China. Compared with previous reports of healthy pigs infected with Blastocystis sp., the main differences found in the present study were the increased frequency of ST1 and ST3, the significantly higher Blastocystis sp. infection in fattening pigs rather than in sows, and the detection of ST14. These findings may help in understanding the genetic structure of Blastocystis sp. in pigs, providing useful data for the effective prevention and control of Blastocystis sp. in southern China in the future.

Informed Consent Statement: Not applicable.

Data Availability Statement: For reasonable requests, the data obtained in this study can be obtained by contacting the corresponding author. The sequences of Blastocystis sp. obtained in this study were deposited in the NCBI GenBank database under the accession numbers MW767060-MW767075.

Conflicts of Interest: The authors declare no conflict of interest.
Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups Automated gender classification has important applications in many domains, such as demographic research, law enforcement, online advertising, as well as human-computer interaction. Recent research has questioned the fairness of this technology across gender and race. Specifically, the majority of the studies raised the concern of higher error rates of the face-based gender classification system for darker-skinned people like African-American and for women. However, to date, the majority of existing studies were limited to African-American and Caucasian only. The aim of this paper is to investigate the differential performance of the gender classification algorithms across gender-race groups. To this aim, we investigate the impact of (a) architectural differences in the deep learning algorithms and (b) training set imbalance, as a potential source of bias causing differential performance across gender and race. Experimental investigations are conducted on two latest large-scale publicly available facial attribute datasets, namely, UTKFace and FairFace. The experimental results suggested that the algorithms with architectural differences varied in performance with consistency towards specific gender-race groups. For instance, for all the algorithms used, Black females (Black race in general) always obtained the least accuracy rates. Middle Eastern males and Latino females obtained higher accuracy rates most of the time. Training set imbalance further widens the gap in the unequal accuracy rates across all gender-race groups. Further investigations using facial landmarks suggested that facial morphological differences due to the bone structure influenced by genetic and environmental factors could be the cause of the least performance of Black females and Black race, in general. I. INTRODUCTION Automated facial analysis (FA) includes a wide range of applications, including face detection [1], visual attribute classification such as gender and age prediction [2], and actual face recognition [3]. Among other visual attributes, gender is an important demographic attribute [2], [4]. Gender classification refers to the process of assigning male and female labels to biometric samples. Automated gender classification has drawn significant interest in numerous applications such as surveillance, humancomputer interaction, anonymous customized advertisement system, and image retrieval system. In the context of biometrics, gender can be viewed as a soft biometric trait [5] that can be used to index databases or to enhance the recognition accuracy of primary biometric traits such as face and ocular region. Companies such as IBM, Amazon, Microsoft, and many others have released commercial software containing automated gender classification system. According to ISO/IEC 22116 [6], the term gender is defined as the state of being male or female as it relates to social, cultural or behavioural factors, the term sex is understood as the state of being male or female as it relates to biological factors such as DNA, anatomy, and physiology. Therefore, the term sex would be more appropriate instead of gender in the context of this study. However, in consistency with the existing studies [2], [4], [7], [8], the term gender is used in this paper instead of sex. Over the last few years, the fairness of the gender classification system has been questioned [8]- [10]. 
Fairness is the absence of any prejudice or favoritism toward an individual or a group based on their inherent or acquired characteristics [11]. Thus, an unfair (biased) algorithm is one whose decisions are skewed towards a particular group of people. The problem of unequal accuracy rates has been highlighted in gender classification from face images for dark-skinned people and women [8], [9]. Specifically, a research study by the MIT Media Lab [8] uncovered substantial accuracy differences in face-based gen-der classification tools from companies like Microsoft, IBM, Face++, and Amazon [12], [13], with the lowest accuracy for dark-skinned females. The underlying cause of the unequal misclassification rates in gender classification is not investigated in this study. Muthukumar [9] analyzed the influence of the skin type on gender classification accuracy and concluded that the skin type has a minimal effect on classification decisions. However, the dataset used [9] consisted only of African-American and Caucasian. Some of the limitations of the published research [8], [9] in relation to the fairness of the face-based gender classification are as follows: • Limited investigation: There is a lack of understanding of the cause(s) of demographic variation in the accuracy of the gender classification system. • Limited dataset evaluation: Mostly limited size datasets consisting of a limited number of races, mostly African-American and Caucasian, are used for evaluation. • Black-box evaluation: Commercial SDKs from IBM, Face++, and Amazon are used for the fairness evaluation of face-based gender classification system. Therefore, sources of bias may not be ascertained. It is still not clear how the error propagates across multiple gender-race groups for different gender classification algorithms. It is also unknown if the errors are due to skewed training dataset or algorithmic bias (caused by the inherent structure of the algorithm). Figure 1 highlights the problems in current gender classification algorithms. With the widespread use of gender classification system, it is essential to consider fairness issues while designing and engineering this system. The fairness is a compelling social justice as well as an engineering issue. In order to address the bias issue in the gender classification system, it is important to investigate its source. A. Our Contribution In order to further improve understanding of the fairness of the face-based gender classification system across races. Our contributions are the following: • Investigating the sources of bias: The impact of training set imbalance and architectural differences in algorithms are analyzed. Further, the facial morphological differences obtained using 68 facial landmark coordinates [1] are analyzed in understanding the cause of differential accuracy for specific gender-race groups (i.e., Black females). • Thorough evaluation on large-scale datasets: All the analyses are conducted on the latest UTKFace [14] and FairFace [10] facial attributes datasets consisting of four and seven race groups, respectively. Apart from accuracy values, false positives and false negatives are also analyzed. • White-box evaluation: Open-source deep learning based gender classification algorithms are evaluated for full access to algorithms and training data. This paper is organized as follows: Section II discusses the prior work in deep learning-based algorithms for gender classification and the study on its fairness analysis. 
Section III discuss the CNNs used in this study for gender classification. Experimental evaluations and the obtained results are discussed in section IV. Conclusion and future work are discussed in section V. II. PRIOR WORK This section discusses the recent literature on deep learningbased gender classification from facial images and the related study on its fairness analysis. A. CNNs for Gender Classification from Facial Images A Convolution Neural Network (CNN) is a type of feedforward artificial neural network in which the connectivity pattern between its neurons, that have learnable weights and biases, is inspired by the organization of the visual cortex. The efficacy of CNNs has been very successfully demonstrated for large scale image recognition [15], pose estimation, face recognition, and face-based gender classification [2], to name a few. In [2], an end-to-end CNN model was evaluated on the Adience benchmark. The average gender classification accuracy of 88.1% was reported. Further, studies used finetuned [16] VGG, InceptionNet, and ResNet (pretrained on ImageNet dataset [15]) for gender classification from facial images. Specifically, pretrained ImageNet models are finetuned on the datasets annotated with gender labels. Gender classification accuracy in the range [87.4%, 92.6%] was obtained on Adience dataset. The authors concluded that different CNN architectures obtained different results. Finetuned CNNs obtained better results over those trained from scratch. In [7], transfer learning was explored using both VGG-19 and VGGFace for gender classification on the MORPH-II dataset. Accuracy of 96.6% and 98.56% was obtained for VGG19 and VGGFace, respectively. The higher performance of VGGFace was attributed to pretrained weights obtained from facial images. In [17], authors proposed a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. The maximum of 94% accuracy was obtained on CelebA dataset. The above-mentioned studies evaluated the overall accuracy. The fairness of the gender classification model across males and females was not evaluated. In fact, the datasets such as Adience [2], [16] and CelebA [17] often used in the existing studies revealed over-representation of lighter and underrepresentation of darker individuals in general. For instance, 86.2% of the subjects in the Adience benchmark [2] consists of lighter-skinned individuals. B. Fairness of the Gender Classification System Buolamwini and Gebru [8] evaluated fairness of the gender classification system using three commercial SDKs from Microsoft, Face++, and IBM on Pilot Parliaments Benchmark (PPB) developed by the authors. The dataset consists of 1270 individuals from Africans and European races, and the female and male contribution was 44.6% and 55.4%, respectively. The accuracy differences of 23.8%, 36.0%, and 33.1% was obtained for dark-skinned females using Microsoft, Face++, and IBM, respectively. Muthukumar [9] analyzed the influence of the skin type for understanding the reasons for unequal gender classification accuracy on face images. The skin type of the face images in the PPB dataset was varied via color-theoretic methods, namely luminance mode-shift and optimal transport, keeping all other features fixed. 
The open-source convolutional neural network gender classifier was used for this study. The author concluded that the effect of skin type on classification outcome is minimal. Thus, the unequal accuracy rates observed in [8] is likely not because of the skin type. However, only African American and Caucasian are used in this study. Worth-mentioning that both the above studies [8], [9] used the PPB dataset consisting of 1270 subjects from Africans and Europeans. Studies in [18]- [20] also proposed data augmentation, two-fold transfer learning and measuring bias in deep representation to mitigate its impact in biometric attribute classifier (such as gender and age). In an attempt to advance the state-of-the-art in the fairness of facial analysis methods, face attribute dataset for the balanced race, gender, and age classification was assembled in 2019 [10]. The authors showed the performance of the ResNet model trained on this dataset for gender, age, and race classification. The average accuracy of 94.4% was obtained on the gender classification model when tested on an external testbed. III. CONVOLUTIONAL NEURAL NETWORK (CNN) MODELS USED This section discuss the deep-learning based CNN models fine-tuned for gender classification. These CNN models are pre-trained on large scale ImageNet [15] dataset comprising of 1.2 million training images and have become the standard benchmark for large-scale image classification. Figure 2 shows architecture of these CNN models. 1) VGG: The VGG architecture was introduced by Visual Graphics Group (VGG) research team at Oxford University [21]. The architecture consists of sequentially stacked 3 × 3 convolutional layers with intermediate max-pooling layers followed by a couple of fully connected layers for feature extraction. Usually, VGG models have 13 to 19 layers. We used VGG-16 and VGG-19 in this study which has 138M and 140M number of parameters. We also evaluated VGGFace model which is basically VGG-16 trained on VGGFace2 dataset [22]. 2) ResNet: ResNet is a short form of residual network based on the idea of identity shortcut connection where input features may skip certain layers [23]. In this study, we used ResNet-50 which has 23.5M parameters. 3) InceptionNet: The hallmark of this network [24] is its carefully crafted design: the depth and width of the network is increased while keeping the computational requirements constant. The architecture has a total of 9 Inception modules, which allow for pooling and convolution operation with different filter sizes to be performed in parallel. In this study, we used InceptionNet-v4. Network Implementation and Fine-tuning: We used pytorch (https://pytorch.org/) implementation of these pretrained networks (VGG-16, VGG-19, VGGFace, ResNet-50 and InceptionNet-v4) along with their weight files for finetuning them. These networks were fine-tuned for gender classification using training set of facial images annotated with gender labels (male and female). Fine-tuning was done by extracting all the layers but the last fully connected layers from aforementioned pre-trained networks and adding new fully connected layer(s) along with softmax. Based on empirical evidence on validation set, fine tuning of the VGG architectures and ResNet was performed by an additional two 512-way fully connected layers and one 2way output layer (equal to the number of classes) along with softmax layer. 
For InceptionNet-v4, all the layers were extracted until the fully connected layer followed by additional 4096-way, 512-way and one 2-way output layer along with softmax. The fine-tuning was performed using Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of 0.0001 for 1000 epochs using early stopping mechanism. IV. EXPERIMENTAL EVALUATION In this section, datasets used, experiments conducted and the results obtained are discussed. We used cropped face images obtained using Dlib face detection utility [1]. A. Datasets UTKFace [14]: UTKFace [14] is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of total of 20, 000 face images scrapped from the web and annotated with age, gender, and race labels. The images cover large variation in pose, facial expression, illumination, occlusion, and resolution. The four race groups included are as follows: White, Black, Indian, and Asian. The training portion of the UTKFace dataset consist of 41% females and 59% males, therefore is skewed towards males. Table I shows the complete sample distribution of training subset of UTKFace dataset used in our experiments. Sample images from UTKFace dataset are shown in Figure 3. FairFace [10]: The facial image dataset consisting of 108, 501 images, with an emphasis on balanced race composition in the dataset [10]. The seven race groups defined in the dataset are as follows: White, Black, Indian, East Asian, Southeast Asian, Middle East, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups. The dataset was released via https://github.com/joojs/fairface. The training portion of the FairFace dataset consist of 47% females and 53% males. Table II shows the complete sample distribution of training\test subset of FairFace dataset used in our experiments. Sample images from FairFace dataset are shown in Figure 4. B. Results Following the recommendation in [7], [16], we used finetuned models for gender classification. The deep learning models were fine-tuned on training subset of UTKFace (Table I) and FairFace (Table II) datasets. 70% of the training data was used for fine-tuning the models, and the rest 30% was used as a validation set. Subjects did not overlap between training and validation sets. We also trained AdienceNet model [2] but due to very low accuracy of 0.65 on the validation set, this model is not used used for further investigation. InceptionNet-V4 obtained training and validation accuracy of 0.90. We even tried training these models from scratch; however, accuracy rates were much lower in comparison to those obtained using fine-tuning, confirming the observation in [16]. The fine-tuned models are evaluated on the test subset of the FairFace dataset (Table II) for fairness evaluation across gender-race groups. Next, we discuss the experiments conducted and the results obtained. Exp #1: Training on Gender and Race Balanced Dataset: The goal of this experiment is to evaluate and compare the fairness of the different CNN architectures used for gender classification. The hypothesis is different CNN architectures may obtain different accuracy rates due to feature representation differences emerging owing to their unique architecture. For this experiment, all the CNN models are fine-tuned on gender and race balanced training subset of FairFace (Table II). 
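As a rough illustration of the fine-tuning recipe described in Section III (pretrained backbone, a new fully connected head with two 512-unit layers and a 2-way output trained with softmax/cross-entropy, SGD with a learning rate of 0.0001), here is a minimal PyTorch sketch for the VGG-16 case; the data pipeline and early stopping are omitted, and the exact head layout is only indicative.

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG-16 and replace its classifier with the gender
# head described in the text: two 512-way fully connected layers + a 2-way output.
backbone = models.vgg16(pretrained=True)
backbone.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 512),
    nn.ReLU(inplace=True),
    nn.Linear(512, 512),
    nn.ReLU(inplace=True),
    nn.Linear(512, 2),          # male / female logits; softmax is applied inside the loss
)

criterion = nn.CrossEntropyLoss()                       # log-softmax + negative log-likelihood
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-4)

def train_step(images, labels):
    # One fine-tuning step on a batch of cropped face images of shape (N, 3, 224, 224).
    optimizer.zero_grad()
    logits = backbone(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()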
The accuracy, false positives, and false negatives of all the models were evaluated and recorded on the test subset of the FairFace dataset. Table III shows the male, female, and overall accuracy of the CNN models in gender classification. It can be seen that for most of the models, an overall accuracy of about 91% was obtained on the test set. However, ResNet-50 had a higher male accuracy rate than the other models. VGG-16 and VGG-19 obtained higher accuracy rates for females than for males. InceptionNet performed worse than all the other networks. Although InceptionNet obtained training and validation accuracy values of 90%, the reason for its poor performance on the test set could be over-fitting. Among the VGG variants, VGGFace has higher male accuracy than VGG-16 and VGG-19; this could be attributed to the fact that VGGFace is pretrained on the VGGFace2 dataset [22], which is skewed towards the male population (59.7% males and 40.3% females). Therefore, bias could have been propagated from the pretrained weights. The remaining models used pretrained ImageNet weights obtained from general object classification. There is minimal chance of gender-related bias propagation from the ImageNet dataset. Table IV shows the accuracy values of all the deep learning models on the gender-race groups of the FairFace test distribution. It can be seen that, despite average accuracy values being equivalent, all the algorithms varied across gender-race groups. For instance, ResNet-50 obtained higher accuracy rates for males for all the races. VGG-16 and VGG-19 consistently obtained higher accuracy rates for females for all the races except Black females, with an average difference of 0.037 over Black males. VGGFace (which is VGG-16 pretrained on the VGGFace2 dataset) obtained higher rates for males, except for the Latino group. InceptionNet-v4 obtained the largest difference in accuracy values between males and females. The lowest standard deviation of 0.031 in the accuracy values was obtained by VGG-16 (Table V). In Table V, the difference in the average refers to the difference between the mean male and female accuracy values. Overall, Middle Eastern males obtained the highest accuracy values, followed by Indian and Latino males. These results are in accordance with those reported in [10]. This also suggests that the general notion that White males perform better than others may be incorrect. Latino females obtained the highest accuracy, followed by Middle Eastern females. White and East Asian females obtained equivalent accuracy values overall. All the models obtained the lowest accuracy rates for Black females (average accuracy being 0.749). Further, Table VI shows the false positives and false negatives of the gender classification system for all the CNN models. False positives are females classified as males, and false negatives are males classified as females. In accordance with Table IV, VGG-16 and VGG-19 obtained lower false positives in general, except for Black females. Inception-V4 obtained higher false positives and false negatives. The Black race has higher false positives for most of the models, which means that Black females are misclassified as males more often than other females. ResNet-50 maintained a better balance between false positives and false negatives than the other models. The highest false negatives are obtained for Black males, followed by Southeast Asian males, meaning that they are more likely to be classified as females. Overall, CNN models with architectural differences varied in performance yet showed consistent trends towards specific gender-race groups.
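The per-group bookkeeping behind Tables III-VI can be reproduced with a short pandas sketch. The DataFrame layout and the column names ("race", "gender", "pred") are hypothetical and not taken from the paper; only the definitions of false positives and false negatives follow the text.

```python
# Sketch of per-group fairness metrics, assuming a pandas DataFrame of per-image
# results with hypothetical columns: "race", "gender" (ground truth, "male"/"female")
# and "pred" (model output, "male"/"female").
import pandas as pd

def group_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Accuracy, false-positive and false-negative rates per gender-race group.

    Paper convention: a false positive is a female classified as male,
    a false negative is a male classified as female.
    """
    rows = []
    for (race, gender), grp in df.groupby(["race", "gender"]):
        acc = (grp["pred"] == grp["gender"]).mean()
        fp = (grp["pred"] == "male").mean() if gender == "female" else None
        fn = (grp["pred"] == "female").mean() if gender == "male" else None
        rows.append({"race": race, "gender": gender,
                     "accuracy": acc, "fp_rate": fp, "fn_rate": fn})
    return pd.DataFrame(rows)
```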
For instance, all the algorithms obtained the lowest accuracy for Black females and higher accuracy rates for Middle Eastern males. Therefore, the bias of the gender classification system is not due to a particular algorithm. The study in [25] also suggests that a gender-balanced training set did not improve face recognition accuracy for females. Exp #2: Training on Un-balanced (Skewed) Dataset: The goal of this experiment is to evaluate the impact on the fairness of the gender classification algorithms across gender-race groups when the training dataset is skewed towards certain sub-groups. To this aim, the training subset of the UTKFace dataset, which is skewed towards the male population and under-represents several race groups, is used for fine-tuning the gender classification algorithms. The models are evaluated on the testing part of the FairFace dataset containing the seven gender-balanced gender-race groups. As both the UTKFace and FairFace datasets are scraped from the web, the cross-dataset impact may not be applicable. Table VII tabulates the overall performance of the models when fine-tuned on the UTKFace dataset and tested on the FairFace test set. It can be seen that the overall performance of all the models dropped. The reason is the under-representation of races and over-representation of the male population in the training set. All the models performed equivalently, with an overall accuracy of 0.789, 0.762, 0.780, and 0.850 for ResNet-50, InceptionNet, VGG-16, and VGGFace, respectively. The overall gap between male and female accuracy rates has increased to 0.181 from 0.07 (obtained when a balanced training set was used). VGG-19 obtained almost equal accuracy rates in comparison to VGG-16. Table VIII shows the gender classification accuracy across gender-race groups for all the models when trained on the UTKFace dataset. In this case, all the models obtained higher accuracy rates for males than for females. This is contrary to the results obtained in Table IV, where VGG-16 obtained higher accuracy rates for females from all the race groups, except Black females, over males. For each model, the standard deviation in the accuracy rates across gender and races has increased by at least 0.43 (Table IX). Middle Eastern males still obtain higher accuracy rates, followed by Indian, Latino, and White males. On average, Latino females outperformed all other females, followed by East Asian and Middle Eastern females. The average accuracy for Black females further reduced by 0.143 and remains the lowest (0.606). Table X shows the false positives and false negatives of the gender classification system when trained on the UTKFace dataset. The highest false positives were obtained for the Black race, which suggests that Black females are most often misclassified as males. This is followed by Southeast Asian females. The highest false negatives were obtained for the Middle Eastern group. These results suggest that a skewed training dataset can further escalate the difference in the accuracy values across gender-race groups. However, architectural differences and skewed training datasets are not the only reasons for bias in the gender classification system. In fact, in both experiments, Black females consistently obtained the lowest accuracy rates. We therefore studied, in particular, the difference in facial morphology between the Black race and the other races. To this aim, we randomly selected 500 male and 500 female face images for each of the seven race groups from the FairFace dataset using a Python script and extracted 68 facial landmarks using the Dlib [1] library.
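A sketch of this landmark extraction and clustering step is shown below. The image loading, the dlib 68-point model file name, and the use of scikit-learn's KMeans are assumptions about tooling rather than the authors' exact pipeline; note that 68 two-dimensional landmarks flatten to a 136-dimensional vector.

```python
# Sketch: extract 68 facial landmarks with dlib and cluster the flattened vectors
# with two-cluster K-means, mirroring the analysis described above.
from typing import List, Optional

import dlib
import numpy as np
from sklearn.cluster import KMeans

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model

def landmark_vector(image: np.ndarray) -> Optional[np.ndarray]:
    """Return the 68 (x, y) landmarks of the first detected face, flattened to 136-D."""
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    coords = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)], dtype=float)
    return coords.flatten()

def cluster_landmarks(images: List[np.ndarray]) -> np.ndarray:
    """Cluster the landmark vectors of all images into two groups."""
    vectors = [v for v in (landmark_vector(img) for img in images) if v is not None]
    return KMeans(n_clusters=2, random_state=0).fit_predict(np.stack(vectors))
```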
Figure 5 shows the indexes of the 68 landmark coordinates visualized on the image. The 68 landmark locations for each face image were appended together into a one-dimensional feature vector (68 x- and y-coordinates, i.e., 136 dimensions). The feature vectors are clustered using K-means clustering for understanding differences in facial morphology. Figure 6 shows the plot obtained on clustering the facial landmarks from all the races into two clusters. Among all, 92% of the Black males and females were clustered together. 62.8% of the facial landmarks belonging to other races were clustered together into the second group. Figure 7 shows the clustering of facial landmarks from females of all the races into two clusters. 96.8% of the Black females were grouped together in a single cluster. 63.16% of the facial landmarks belonging to females from other races were clustered together into the second group. Figure 8 shows the plot obtained on clustering the facial landmarks of Black males and females into two clusters. Among all the samples, only 11.7% and 35.2% of the landmarks belonging to Black females and males, respectively, were grouped together into a single cluster, which suggests a high facial morphological similarity between Black males and females. The above plots suggest that significant facial morphological differences are a likely cause of the consistently low accuracy rates for the Black race. These results also suggest that the high morphological similarity between Black males and females is a potential cause of the lowest accuracy rates being obtained for Black females. V. CONCLUSION AND FUTURE WORK In this paper, we investigated the source of bias of the gender classification algorithms across gender-race groups. Experimental investigations suggested that algorithms with architectural differences may vary in performance even when trained on a race- and gender-balanced set. Therefore, the bias of the gender classification system is not due to a particular algorithm. For all the experiments conducted, the Black race, and Black females specifically, obtained the lowest accuracy rates. Middle Eastern males and Latino females obtained the highest accuracy rates, as also observed in [10]. The reason could be the skin-tone reflectance property under varying illumination combined with facial morphology. A skewed training set can further increase the inequality in the accuracy rates. Further, the analysis suggested that facial morphological differences between Black females and females of other races, together with their high similarity to Black males, could be a potential cause of their high error rates. Only 11.7% and 35.2% of the landmarks belonging to Black females and males, respectively, were grouped together into another cluster, which suggests high morphological similarity between Black males and females. As a part of future work, statistical validation of the results will be conducted on other datasets. The impact of other covariates such as pose, illumination, and make-up on unequal accuracy rates will be studied. Experiments on facial morphological differences will be extended using deep learning-based landmark localization methods [26] on all gender-race groups. The reason for a specific gender-race group outperforming others will be investigated based on the skin-tone reflectance property, facial morphology, and the impact of other covariates.
Virtual Returns: Colonial postcards online and digital ‘nostalgérie’ among the former European settlers of Algeria This article analyses how certain former European settlers of Algeria (pieds-noirs) have created a digital space of remembrance online using scans of colonial-era postcards. Tracing the role of colonial-era postcards in pied-noir memory narratives, from the phototexts of the 1980s to websites from the mid-2000s onwards, I suggest these digital sites of memory attempt to maintain a connection to an imagined Algerian homeland during the so-called ‘memory wars’. By collecting, scanning, and reproducing postcards and photographs of colonial landscapes, pieds-noirs websites aim to reconstruct a lost topography of houses, shops, streets, and towns that have been renamed and rebuilt since independence. These ‘virtual returns’ to Algerian urban topographies rely predominantly on affective responses to ‘nostalgérie’ or nostalgia for Algeria. However, in relying on colonial-era postcards they ultimately recreate the ‘visual economy’ (Welch and McGonagle) of French Algeria in the early 20th century. I argue that, despite the radical ‘connectivity’ presented by the internet, these websites remain primarily focused on creating a homogenous collective memory for an imagined audience of pieds-noirs online. Nonetheless, I conclude by suggesting that this online model of colonial nostalgia has permeated, in limited but influential ways, how other groups interpret visual ‘nostalgérie’. Introduction As the articles in this special issue suggest, scholarship on digital technologies and the remembrance of postcolonial migrations has been primarily focused on Anglophone linguistic and cultural frameworks. However, the online 'francosphère' 1 also engages with digital practices of remembrance and contestation of France's colonial past. Particularly contentious are the legacies and 'ruinations' (Stoler, 2013) of France's settler colonial project in Algeria (1830-1962). On 5 July 1962, Algeria celebrated its independence after 132 years of French colonial occupation. Algerian independence also marked the official end of the bloody Algerian Revolution or Algerian War of Independence (1954-1962). In a matter of a few months over the summer of 1962, around 800,000 people left the newly independent Algeria, mostly for metropolitan France. These were the European settlers and naturalised French Jewish Algerians, by then collectively called the pieds-noirs, for whom 1962 continues to represent the moment of profound rupture and the start of their exile from what they perceive to be an Algerian homeland. This article contributes to growing scholarship on pied-noir memory and commemoration (Barclay, 2018; Eldridge, 2016; Hubbell, 2011, 2015; Phaneuf, 2012; Scioldo-Zürcher, 2012; Sims, 2016; Slyomovics, 2020) by tracing some of the ways in which self-identified pied-noir individuals disseminate images of French Algeria online. The 8 years of the Algerian War for Independence remain the 'fulcrum of Algeria's 20th-century history' (McDougall, 2017, p. 4), to the detriment of other longue durée approaches. The shadow of the war looms heavily over any public discussion of life in colonial Algeria, crystallising the different camps and factions that had existed in French-Algerian settler colonial society, with the 'French' settlers, or pieds-noirs, on one side and the 'Arabs' and 'Algerians' on the other. Frantz Fanon famously described the colonial world of Algeria as a Manichean one (Fanon, 1963, p.
41), of divided camps supported by the racial hierarchies of empire. In the simplest terms, the position of pieds-noirs in colonial Algerian society was underpinned by their privilege in a rigid colonial system. The words of Elaine Mokhtefi, American translator for the Algerian National Liberation Front, bluntly summarise perceptions of the pieds-noirs by the late colonial period: in Algeria, the pieds-noirs were 'greedy' and complacent, while in France, they lined 'up for indemnities from the French government' (Mokhtefi, 2018, pp. 55-56). Unsurprisingly, the narratives presented by pieds-noirs activists and associations after 1962 are diametrically opposed to suggestions that pieds-noirs either contributed to the violence of colonial Algeria or benefited materially from the French state in their exile. Rather, their narratives centre the pieds-noirs as doubly victimised and abandoned by both the métropole and the post-colonial Algerian state (Eldridge, 2016). Since the profound rupture of Algerian Independence and the subsequent exodus of pieds-noirs, Algerian Jews, and harkis (Algerian soldiers who fought for France), different communities on both sides of the Mediterranean have politicised memory in an ongoing struggle 'to be "heard" at different times and in distinct ways' (Aissaoui and Claire, 2017, p. 2). Memory narratives have been associated with particular groups, forming different camps in the so-called 'memory wars' (Stora, 2007, my translation) or 'memory struggles' (Harchi, 2017, p. 83, my translation) which raged in the absence of any commemorative consensus by the state or majority actors. It is in this context that pieds-noirs groups and activists in France have successfully produced and circulated a 'meta-memory' of Algeria 'premised on a canon of historical and cultural narratives' which continues to resonate in postcolonial identity politics today (Eldridge, 2016, p. 48). By the 1990s, memories of colonial atrocities on metropolitan soil were gaining more and more public recognition with the high-profile trial of former Chief of Parisian Police and Vichy civil servant, Maurice Papon (1997-1998), who orchestrated the deportation of Jews from Bordeaux, as well as the violent massacre of peacefully protesting Algerians in Paris on 17 October 1961 (House and MacMaster, 2006). Thus, in the late 20th century, colonial memory in France is partially constituted by various histories of victimisation that are mediated via repeated traditions and cultural articulations by particular identity groups (Assmann, 2011, p. 8). In this crowded and competitive memory landscape, pied-noir memory narratives contributed to a competitive logic of 'victim one-upmanship' (Stora, 2007, p. 46, my translation) in order to lobby for their particular voices and perspectives in the present. The visibility of these different memory narratives also coincides with growing access to the internet and personal computers in the late 20th century. Predating the labelling of these 'memory wars' in the early 2000s, individuals and collectives alike were already taking to the internet to advocate for their commemorative perspectives. Throughout the 1990s, 'websites of memory' could function alongside more familiar 'sites of memory' (Nora, 1984), such as physical monuments, news media, and cultural production (Smith, 2013).
As Laura Jeanne Sims argues 'since French Algerians come from an absent place, a geopolitical entity that no longer exists, their connection to the land has been irrevocably severed', they experienced a particular 'need for new, local sites' (2016, p. 132). 2 Personal websites offered a relatively cheap and practical solution for this need, with little oversight or perceived censorship. Online, individuals can record their personal memories on behalf of a collective pied-noir identity, reinforcing this 'meta-memory' of severance from an Algerian homeland and their experiences of victimisation during the final stages of the war. This article focuses on the prevalent role played by colonialera postcards in websites published by self-identified pied-noir internet users in this period of the 'memory wars' (mid-2000s onwards). Specifically, it examines how the visual culture represented in these personal and amateur websites of remembrance is linked to particular forms of colonial nostalgia for Algeria ('nostalgérie'). By digitising the images of colonial postcards depicting Algerian peoples and landscapes, these websites support and reinforce efforts by pied-noir activists and associations to perform a collective pied-noir identity and mythology. In examining how pied-noir memory and identity is represented and performed via French-language websites and social media platforms, we can better understand how these websites attempt to represent pieds-noirs as a distinct community, intimately connected to and yet severed from both metropolitan France and Algeria. While this article discusses a small sample of active pied-noir websites, it argues that their shared visual representations of colonial space via postcards tells us how these websites aim to be understood as digital 'sites of memory' (Matos et al., 2013) but also function as vehicles for an imagined return (Hubbell, 2011). The visual language of these websites reinforces the idea of Algeria as a geographical site of impossible return, while also reiterating pied-noir identity in relation to a particular kind of colonial 'Frenchness', oscillating between an imagined Algerian homeland and France. In reproducing the colonial imagery of early 20th century urban Algeria, these pieds-noirs websites create a picture of historically fixed and homogenous pied-noir identity as part of what Eric Savarese calls an 'identity strategy seeking to transform a million dissimilar people into an active and politically influential structured group of individuals' (2006, p. 459). Pied-noir identity and metropolitan Frenchness While lobbying by pied-noir activists and associations often departs from the principle of pied-noir victimisation at the end of the Algerian War for Independence (Phaneuf, 2012), historians such as Claire Eldridge (2016) have shown that the settlers were well accommodated by the French state which undertook their housing and jobs in a relatively short amount of time. Although exiled from their homelands, the Frenchness of the settlers and Jewish Algerians (who had been naturalised as French in 1870) was never officially in question. Nonetheless, from the 1970s, with much of their material needs addressed, a number of piednoir associations were established to preserve and foster the notion of a pied-noir culture, history, and identity. Despite the now ubiquitous appellation of pieds-noirs, the European settlers of Algeria were a relatively diverse and heterogenous group in colonial society. 
Like many settler societies, the European settler society of Algeria was constituted through immigration. These migrants originated from across the Mediterranean basin, including Spain, Italy, and Malta. In other words, not all piedsnoirs families originated from France but were largely naturalised as French. The Jewish populations of Algeria were naturalised as French citizens by the 1870 Crémieux decree, creating a legal and cultural distinction between Jewish and Muslim Algerians that was interrupted by the temporary abrogation of the decree during the Second World War. As Benjamin Stora (2006) and Judith have both noted, the Crémieux decree both exiled Jewish Algerians from Algerianness in the adoption of a French identity, and also vivisected the possibilities of Jewish and Arab-Berber fraternity found in the shared term of 'Semite'. Despite their unique identity in French colonial society, many Jewish-Algerian families in colonial France also departed with the pied-noir exodus from Algeria. By the late colonial period and the Algerian Revolution or War of Independence, nonetheless, the linguistic and cultural diversity of the settlers and Jewish Algerians tended to be marginalised in favour of a distinct, unified pied-noir identity in Algeria. It would be an oversimplification to suggest that the European settlers and naturalised Jewish Algerians had a straightforward relationship to Frenchness. The community of former communities of settlers would be intimately shaped by the métropole's policy and narrative of what had taken place in Algeria during the 8 years of the War of Algerian Independence. For Fiona Barclay, once pieds-noirs had been 'returned' to metropolitan France, they were subjected to a socially constructed melancholia when faced by a society that had little interest in lingering on the events of the Algerian War of Independence (2018, pp. 248-249). Without widespread public discussion or consensus on the events of the late colonial period and the Algerian War, the community itself was able to fill in the gaps with their own narratives about who they are in France and who they were in Algeria. In this way, 'settler colonial culture [outlasts] the temporal bounds of the settler colony', one in which pieds-noirs were melancholically transformed into a subject 'caught within the intersecting matrices of scapegoat, victim and executioner' (Barclay, 2018, p. 259). One way in which the European settlers of Algeria defined their identity in relation to both an idealised Algerian homeland and the relatively foreign metropole is through language and cultural expression. Indeed, the particularity of pied-noir Frenchness has predominantly been explored from literary and anthropological perspectives (Hubbell, 2015;Lorcin, 2012;Smith, 2003;Slyomovics, 2020), emphasising the plurality of the piedsnoirs as a liminal group that did not neatly fit in with French, metropolitan society. Oral histories with pieds-noirs tend not to consider the extent to which whiteness and French citizenship afforded privileges for this community exile. Instead, emphasis is laid on their differences from the metropolitan French society by identifying their common yet diverse European ancestry (especially Spanish), distinct accents, unique pied-noir slang and sense of humour (Pied-noir Stories, 2019). Nonetheless, French remains the lingua franca of this community. 
Pied-noir community and associational practices reclaim their place as French citizens at the same time as they differentiate themselves as having a distinct and pluralised provincial history and identity (Phaneuf, 2012). But what happens to the diverse origins of settler society in Algeria via the digital representation of pied-noir memory? Through the following discussion of nostalgérie and visual culture, it becomes clear that online representations of pied-noir memory narratives do little to reflect the plurality of North African, or the self-stylised 'Latin', identity (Barclay, 2018, p. 245) of those who left in 1962. Nostalgérie and visual culture Pied-noir nostalgia is so prevalent as a cultural phenomenon that it boasts its own neologism: 'nostalgérie', or 'nostalgeria'. The genealogy of the 'mot-valise' 'nostalgérie' predates 1962 3 and the mass exodus of European settlers from Algeria. Indeed, according to Seth Graebner (2007) nostalgia characterised the relationship between France and Algeria throughout colonisation. Eldridge suggests that 'nostalgérie' is more than an affective mode at the level of each individual settler but can 'be read as a consciously formulated counter-history that, irrespective of its accuracy, poses questions about the dominant official narrative' (2016, p. 128), a counter-history which pieds-noirs memory advocates (broadly defined as promoters of particular memory narratives) and associations quickly solidified in the decades following repatriation. However, Eldridge also suggests that this 'counter-history' is also a conscious commemorative strategy that is based on a 'lack of self-awareness among certains pieds-noirs which renders them unwilling or unable to acknowledge the privileges they enjoyed and their complicity in the colonial system' (2016, p. 128). Part of this commemorative strategy involved reproducing some of the nostalgic imagery and orientalist visual representations of Algeria that have buttressed what Edward Welch and Joseph McGonagle (2013) have called the 'visual economy' of French Algeria. Indeed, the online expressions of 'nostalgérie' explored in this article focus on the experiences of loss of an idealised Algerian space by privileging images of Algeria from the turn of the 20th century. These websites use colonial postcards for the digital recreation of an urban topography that no longer exists, reproduced so that visitors, namely pieds-noirs and their descendants, may virtually 'return' to a space and time from which they have been irrevocably removed. These colonial-era postcards played an important role in the visual economy of French Algeria long before 1962. At the turn of the 20th century, France witnessed a boom in the production, circulation, and collection of postcards. According to David Prochaska, postcard production grew from 8 million in 1899 to 60 million in 1902 (1990, p. 375). As modern France began to represent itself photographically to the world, the role of colonised territories in the visual reproduction of the nation came into particular focus. The impetus for this mass production of postcards, with a particular focus on the colonies, was nonetheless a metropolitan economic endeavour. 
Paris-based photographic studies, such as the Neurdein Studios (or ND Studios) were funded by the French government to produce images for travel guides and historic records, but also to stimulate economic investment in French Algeria: tourism was expanding the colonial infrastructure 'and postcard publicity stimulated private investment' (DeRoo, 1998, p. 145). Profiting from a boom in the production of postcards at the turn of the century, these images of Algeria circulated in France and beyond during the colonial period in order to advertise French Algeria to investors and wealthy tourists alike. Today, these images continue to play an important role in 'selling' particular perspectives, attitudes, and understandings of colonial society in Algeria but to new audiences and with very different purposes. As in other colonial contexts, the proliferation of photography in the 19th and early 20th centuries is integral to the production of an 'Algérie imaginaire' (Prochaska, 1990) for those living both in and outside French Algeria. The postcards popularised forms of imperial knowledge concerning racial hierarchies and orientalist fantasy all the while being relatively cheap and highly collectable. Welch and McGonagle suggest that the ubiquity of colonial photography means that the circulation of these postcards was 'not just symptomatic of colonial activity, but constitutive of it' (2013, p. 14). Two distinct genres of colonial images emerge through this visual production. On the one hand, cards portrayed the distinct landscapes and architectures of colonised territory, focused on scenes of colonial modernity incorporating images of 'European' architecture and infrastructure in the northern cities of Algiers, Oran, and Constantine. On the other hand, through the popular 'scènes et types' series, the cards also recreated scenes of orientalist fantasy depicting the bodies, clothes, and habits of colonised peoples, fetishizing women in particular in overtly racialized and sexualised forms. Postcards of colonial Algeria have lived various afterlives after 1962. Reproduced throughout the 20th century, some have been the subject of some reinventions by Algerian writers and artists. For example, Malek Alloula's Le harem colonial (1981) and Leïla Sebbar's, 2002 essay 'Les femmes du peuple de mon père', both sought to reappropriate the exoticized and eroticised images of Algerian women and girls, although to very different ends. Around the 1980s, a publishing tradition in France emerged where pied-noir writers would produce illustrated books about their hometowns, in which stories about their childhoods (roughly from the 1940s onwards) would often be illustrated with turn of the century images of an Algeria long before they were born. One reason for this is practical: with the rushed and disrupted departure from Algeria, many families did not bring photographs with them, let alone a range of photographs of their homes and streets. Colonial-era postcards, with their focus on marketing French Algeria on behalf of the Empire, stand in for absent family memories. The postcard is used both as historical source and as a personal tool for the preservation of a lost homeland among pieds-noirs themselves. Phototexts that collect and reframe these postcards also have the aim of facilitating transmission of this imagined space to new readers and generations of keepers of the pied-noir postmemory (Hirsch, 1997). 
However, in attempting to recreate a memory of this lost homeland, by drawing on early turn of the century post-cards produced by the colonial metropole, these text also uncritically reproduce the visual economy of colonial Algeria. In her discussion of Paul Azoulay's La Nostalgérie française (1980) Mary Vogl notes: Azoulay is praised in the preface for reproducing these old postcards, which can supposedly help the reader understand an era that witnesses the existence of two separate societies, one privileged and the other exploited. The separation between the two societies is indeed evident in the book, but what is missing is an explanation of how and why this came to be. Instead, Azoulay offers only "the Memory I keep of my ALGERIA": sunshine and happy times for the French colonials. (2003, p. 175) While Welch and McGonagle also critically assess pied-noir phototexts as being firmly within a narrative that confirms, rather than challenges, colonial perspectives, they do not dismiss 'nostalgérie' or overlook its effect on collective and historical engagement with the European settlement in Algeria and their sense of dispossession since 1962: Nostalgia should be taken seriously as a mode of remembrance and historical understanding […] we need to get to grips with its forms of expressions, its politics and ethics. Such issues are all the more timely given […] both the persistent presence of images of French Algeria in the broader public sphere in France and its increasing visibility in French culture. (Welch and McGonagle, 2013, p. 17) Here, they suggest that the recirculation of these postcards, among other visual artefacts, are more than 'vehicles of nostalgia' (2013, p. 38) and can do more than simply reproduce colonial-era racism, but also reinforce the historical agency of the pied-noir communities. Indeed, as Katharine Niemeyer points out rather than something to be dismissed, 'nostalgia connects people and this is equally where its danger lies. We should not only ask what nostalgia is good for or what it is not good for as it can be used in terms of rhetoric political manipulations' (2016). For the piedsnoirs in the late 20th century, 'nostalgérie' is a way to construct a collective identity that is both outside the French national identity and post-colonial Algeria identity formation. It enforces their liminality as both French and exiles from Algeria. Pied-noir websites and virtual returns With the advent of the internet, pieds-noirs memory advocates used similar visual strategies to represent lost Algerian homelands online. However, without a centralised or majority acting association to represent all pieds-noirs, many of these websites remain personal, non-professional, and relatively isolated. e-Diasporas is a web archive project which includes a study by Yann Scioldo-Zürcher, who has collated and mapped websites run by and for anyone falling into the broad bracket of 'Français rapatriés' (repatriated French). This includes European settlers from Algeria, Morocco, Tunisia, as well as French Jews who were expelled from Egypt in 1957, Tunisia and Morocco in 1967, and French settlers in Vietnam leaving in 1975. He notes that there are, surprisingly, no websites dedicated to 'repatriatés' from sub-Saharan Africa. According to the e-Diaspora project, pieds-noirs from Algeria are by far the dominant group, reflecting Algeria's status as a settler colony but also the weight of the pied-noir memory. 
The project established that, as of November 2011, there were 259 sites by repatriates from Algeria. 76% served commemorative purposes, only 4% mention both colonial and postcolonial Algeria, and only one site makes reference to present-day Algeria. In other words, the websites are overwhelmingly concerned with the pre-1962 past (Scioldo-Zürcher, 2012). The study also found that the repatriate sites about Algeria very rarely create connections with other French settler groups. The websites are divided, firstly, along lines of geography, and secondly, along lines of religion, with sites dedicated to Jewish repatriates that are not well linked to other repatriate sites. In other words, pieds-noirs websites are predominantly focused on colonial Algeria and on themselves as pieds-noirs from Algeria. The objective and function of these websites, therefore, is to communicate a precisely Algerian and pied-noir perspective and identity, rather than investigate other experiences, common or differentiated, of colonial societies related to the French empire. This focus on the specificity of Algeria, rather than forging connections with other colonial contexts, is demonstrated in pieds-noirs websites' representation of the topography of Algerian cities under colonialism. Indeed, like the phototexts from the 1980s, the websites documented by the e-Diaspora project in 2011 share a common characteristic: both privilege images depicting the urban landscapes over the 'scènes et types' portraits of Algerian Arabs, Berbers, and Jews (Scioldo-Zürcher, 2012). Reproducing images of colonial modernity amplifies the presence and activities of the European settlers, while marginalising representations of the colonised. If Algerian Arabs, Berbers, and Jews are represented, they are decontextualised, frozen in a nonspecific historical time, evidence of a generalised, exotic, and pacified Other. With these visual references, the websites create narratives that exist between history and memory, in which individual experience of loss, nostalgia, and exile is held up to contest other perceived 'official' narratives surrounding the end of empire in Algeria. In representing the predominantly urban spaces of European modernity in the northern Algerian towns and cities, the postcards are reproduced as authoritative historical documents that can support the pied-noir counter-narrative. Furthermore, these images accompany the narratives of lived experience of the places depicted in the postcards, albeit from a different era. Thus, the authors of the websites attempt to create individualised yet authoritative sites of memory. Taking the website 'Ville d'Oran' as a case study, we can see how individual websites function as sites of memory for specific places and lost topographies. First published in 2007, 'Ville d'Oran' gathers images and postcards of colonial Oran as a way to commemorate the city's distinct identity in Algeria. The page 'En flânant … dans nos souvenirs' ['Strolling … through our memories'] implies that these websites can offer the user the opportunity to return, virtually, to the past. Visiting the website becomes an act of memory, the digital equivalent of walking through the streets of Oran. The author of this site therefore presents the computer as a mnemonic technology, declaring "J'ai acquis une machine à remonter le temps: un ordinateur!" ("I have acquired a machine that can turn back time: a computer!").
He describes visiting other websites dedicated to Oran, and 'les années se sont effacées d'elles-mêmes, 2007, 2006, 2005, … 2000, … 1990, … 1980 … et 1963' (and the years melted away, 2007, 2006, 2005, … 2000, … 1990, … 1980 … and 1963). The stated aim of the website is therefore to erase the intervening decades between the present and 1963, transporting the visitor who is presumably uninterested in the post-colonial developments of their old neighbourhoods in Algeria. The website also invites the visitor to impose their own nostalgic interpretation on the colonial-era postcards used to illustrate this 'virtual return'. Overwhelmingly, the images of postcards are only scanned on one side, that of the image. Any inscriptions on the reverse, and the idea that these are images of material, textual artefacts with their own histories and trajectories, are not recorded. In other words, the postcard is reproduced primarily for its value as a visual document, as a photographic representation of a real place rather than as a text or message. Instead of recording the textual record of the postcard as a historically situated form of communication, the authors of these websites inscribe their own textual associations through captions or longer textual inscriptions such as that found on the 'En flânant …' page described above. Like the original users of the postcards, the website creators inscribe their own textual messages alongside the image, but with very different objectives: not 'Wish you were here', but rather 'Wish I was there' (McGonagle and Welch, 2013, p. 13). Websites dedicated to specific places at a particular time (pre-Independence) tend not to acknowledge the geographic transformations that these cities and streets have undergone since 1962. With Algerian independence, the streets and boulevards of the northern Algerian towns and cities were renamed as part of the reparative act of nation building in the wake of 132 years of French occupation (Boumedini and Dadoua Hadria, 2012). Pieds-noirs websites ignore the Algerian street names (frequently baptised after the martyrs of the Revolution), providing instead French street names in order to guide the online visitor through a virtual tour of colonial Algeria. The website 'Algeroisement Votre' (first published 2010) goes as far as to recreate street plans and identify the businesses and spaces (and the families who potentially lived) at specific addresses. Obviously, there is a practical reason for this choice that also speaks to the website's intended audiences: pieds-noirs in France seeking out information about their old neighbourhoods are more likely to remember the colonial street names than recognise the Algerian ones. However, this very practical choice also produces the particular effect of reproducing a virtual representation of a lost urban topography that ceased to exist in the years following independence. By publishing scans of early 20th century postcards alongside these directions and maps from the decades prior to 1962, visitors to 'Algeroisement Votre' take part in a virtual tour of the city, ostensibly through the eyes of the exiled pieds-noirs themselves, but are anachronistically guided by images of the city some 40-50 years earlier. Other online advocates of pied-noir memory have taken the concept of a 'virtual return' one step further by dramatising images of colonial Algeria as YouTube videos.
For example, the YouTube video entitled 'T'en souviens tu avant 1962' (34baimo 2011) ('Do you remember it before 1962') is voiced and edited by a former resident of Algiers. In this work of self-curated 'nostalgérie', the creator of the video narrates a guided tour of Algiers 'before 1962', illustrated by grainy reproductions of digitised postcards and photographs varying from the late 19th century to the modern day. While the individual websites dedicated to recreating lost neighbourhoods and cities can exist in relative isolation, YouTube's comment function prevents this particular work of 'nostalgérie' from existing in a vacuum. The video sharing platform's comment function means that pieds-noirs posters are confronted by the concept of 'context collapse', described in social media research as the incongruities between singular or plural 'imagined audiences' and the multiplicities of real-life internet users (Marwick and boyd, 2011). Indeed, the 161 comments below 'T'en souviens tu avant 1962' (as of 31 October 2021) demonstrate a broad range of reactions that both confirm and contest its content. While not representative of all possible reactions and receptions of the 10-min video, the comments here offer an insight into how pied-noir memory narratives can travel and be contested outside the demarcated 'sites of memory' represented by villedoran.com and algeroisementvotre.free.fr and their 'imagined audiences'. For example, a number of comments aim to correct the date of construction of the Saint-Philippe cathedral, pointing out that it was built on the site of the Ketchaoua mosque. Other commenters contest the nostalgic content of the video by linking out to other YouTube videos with contesting narratives and perspectives on life under French colonialism. Another user posts three links to YouTube videos with alternative histories of the city of Algiers and an amateur documentary on the 8 May 1945 Sétif massacre. While offering additional historical context to the crimes of French colonialism, these comments also insist on a longue durée interpretation of Algerian architecture and urban topography that contests the pied-noir perspective, which focuses on the late 19th and early 20th century. Nonetheless, many comments respond to the emotive and affective intention of the video, such as the following comment: 'Ya hasra enfance de mon pere' ('The good old days, my father's childhood'). These comments could be interpreted as a microcosmic visualisation of the so-called 'memory wars', in which the comment section under one YouTube video is transformed into the discursive battleground for competing factions struggling for commemorative recognition. Alternatively, we could read this in terms of what Michael Rothberg, Debarati Sanyal, and Max Silverman have called 'noeuds de mémoire' (knots of memory) in 2010, suggesting that "knotted" in all places and acts of memory are rhizomatic networks of temporality and cultural reference that exceed attempts at territorialisation (whether at the local or national level) and identitarian reduction (Rothberg et al., 2010, p. 7). In other words, while these digital sites of pied-noir memory attempt to establish nostalgic iterations of French Algeria, the platforms themselves (especially social media and video hosting platforms such as YouTube) allow these memories to exceed their target audiences and encounter counter-narratives and perspectives.
Through 'context collapse', pied-noir users of social media cannot post their virtual returns without running the risk of their vision of 'nostalgérie' being contested or contextualised. These contested encounters with pied-noir commemorative content online also disturb the predominance of the French language as the lingua franca of colonial nostalgia. Other comments respond in Arabic, or a mix of both French and Arabic. For example, 'ya hasra' (translated as 'the good old days' in English) is a common refrain for some Arabic-speaking visitors to websites representing images of late 19th and early 20th century North Africa. These multilingual responses point to the multi-layered pasts that are being recalled through the video, even if its primary objective is to preserve and transmit pied-noir memories above any others. In this respect, these videos do seem to exceed their target audiences, i.e. other pieds-noirs. In their preliminary study of Moroccan Jewish, Christian and Muslim online communities, Ouaknine and Aharony (2020, p. 106) suggest that '[n]ostalgia proneness has an effect on the intention to share cultural heritage'. Idealising a lost but shared past is a way for these communities in Morocco to agree on a desire to repair community relations (2020, p. 113). The experiences of Moroccan and Algerian communities (diasporic or otherwise) under colonialism and war are not the same and cannot be conflated. Nonetheless, expressions of nostalgia across French and Arabic in reaction to the circulation of colonial-era postcards online demonstrate that the visual economy of colonial Algeria does not belong exclusively to pied-noir communities. Indeed, as was the case with the phototexts of the 1980s and 1990s, Algerian creators and collectors engage with colonial postcards (physically and digitally) as a way to reclaim history on their own terms. An expert in Algerian photography, Awel Haouati (2016) has examined the use and reappropriation of colonial images in contemporary Algerian visual culture. She points out that despite their initial orientalist and colonial production, these images are not necessarily received as such by some Algerian collectors. Since 2011, notes Haouati, several Facebook pages have been created, dedicated to these images of Algeria in the early 20th century. Public Facebook groups expose these images to a wider audience and the postcards are once again recirculated for different forms of consumption. However, Haouati (2016) also notes that the authors of these posts tend to censor the kinds of photographs that they repost, refusing, understandably, to reproduce the images of nudity and the distressing poverty inflicted on Algerians during French colonialism. In other words, like the pied-noir websites, images of the systemic violence conveyed by the coloniser's gaze are withheld from this reproduction, although for very different reasons. Haouati (2016) calls this the oscillation between the idealisation of the past and the obliteration of its violence. By examining how both pied-noir and other groups draw on the same corpus of images, we can trace the migrations and reiterations of an idealised vision of colonial space. In this regard, the cross-pollination and knotting of memory, or 'noeuds de mémoire' (Rothberg et al., 2010), seems to be a more helpful way of thinking about the kinds of unexpected memory connections produced by reproductions of colonial postcards online.
While the pied-noir websites themselves may not be spaces to participate in these rhizomatic networks of cultural references, the images of the postcards can exceed and escape the initial 'nostalgérie' frame of reference that is supposed to target the pied-noir community only. Conclusion This article has argued that the visual culture of pied-noir memory sites demonstrates some of the continuities in the 'visual economy' of French Algeria across the colonial and post-colonial period. While colonial postcards from the turn of the 20th century served to market an image of French Algeria which would be attractive to investors and tourists, these images find new audiences online in the late 20th century as vectors of colonial nostalgia. In reproducing these postcards online as authoritative representations of a lost Algerian homeland, pied-noir websites also contribute to the homogenisation of pied-noir memory and identity through the lens of what were essentially metropolitan French perspectives of an exotic yet familiar Algerian urban topography. As the lived memory of colonial Algeria fades, the possibility for memory transmission to new generations of pieds-noirs has been scrutinised for some time now (Albert-Llorca, 2004). Eldridge identifies a pied-noir memory strategy in anchoring 'their historical interpretation in physical sites external to the community so as to better facilitate the preservation of this past beyond the lifespan of living witnesses' (2017, p. 228). The 'websites of memory' identified in this article perhaps also serve a similar function of preserving memory connected to lost physical spaces outside of the community. However, as Scioldo-Zürcher (2012) has shown, the websites themselves have limited reach and therefore call into question the transmissibility of these memories to their target audiences online. In the post-scarcity digital age of memory, pied-noir sites of memory are competing with other, more creative, post-memory strategies articulated by groups connected to the Algerian War of Independence. Therefore, this article has also argued that the online visual content produced by pied-noir activists and individuals by no means constitutes stable sites of memory, despite efforts to reinforce a collective and homogenous pied-noir identity online. The images of colonial-era postcards are fluid visual signifiers that acquire new meanings and interpretations as they are shared across different and diametrically opposed memory narratives elsewhere in the digital 'francosphère', namely in French-language Algerian social media (Haouati, 2016). Often made by motivated but non-professional individuals and advocates, the pied-noir websites themselves are increasingly populated by degrading and broken links and images. Pied-noir content that endures on more stable social media platforms, such as YouTube, is then open to contestation from commenters who question its nostalgic and orientalist representations, or perhaps reclaim this nostalgia for their own purposes. In other words, the pied-noir websites might be made with the intention to form a collective memory in the service of a pied-noir cultural heritage and shared identity, but once online, these images escape that initial intention. Colonial-era postcards find themselves in new networks and spaces online and are therefore open to new interpretations of 'nostalgérie'. Data availability This study did not analyse or generate any datasets.
Received: 31 October 2020; Accepted: 21 February 2022; Notes 1 Here, I employ the term 'francosphère' to refer to the realms and spheres of influence where the 'ghosts of French culture still haunt the landscape for many […] complex political or historical reasons' including but not limited to French colonialism (Hussey, 2012, p. i). 2 On the affective reverberations of sudden exile at the end of empire, see also Piera Rossetto's work on 'emotional sites' [lieux d'émotion] (2016) and the forging of an Italian identity among exiled Libyan Jews (2021) which maps the connections between affect, loss, and mourning among the North African Jewish diaspora. 3 It is sometimes stated that 'nostalgérie' was coined by Gaston Guigon in his autobiographical book Nostalg…erie (1971, Salon de Provence). Amy L. Hubbel notes that Dr. Guigon derived the term form his medical experience observing depression among pieds-noirs in France. However, Hubbell also cites the 1938 Marcello Fabri poem (2015, p. 27). Philip Dine (1994) traces 'nostalgérie' in artistic production back to 1899. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/ licenses/by/4.0/.
Validation of the coping self-efficacy scale: Vietnamese version for adolescents Background This study aimed to examine the construct validity of the Coping Self-Efficacy Scale-Vietnamese Version (CSES-V) among Vietnamese adolescents. Methods This study selected Grade 10 students from eight schools in Hanoi using a multiple-stage sampling method. Multiple aspects of the construct validity were examined, including: factorial structure (evaluated using exploratory factor analysis); internal consistency (tested using Cronbach's alpha coefficient); measurement invariance between male and female participants and longitudinal measurement invariance (tested by employing multiple-group confirmatory factor analysis); and the external aspect (tested using Pearson's correlation coefficients between the CSES-V and the Depression Anxiety and Stress Scales subscales of Depression (DASS21-D), Anxiety (DASS21-A), and Stress (DASS21-S) and a measure of mental well-being, the Mental Health Continuum Short Form (MHC-SF)). Results A total of 1082 adolescents (aged 14–16 years) was included in this study. Data supported a three-factor structure (comprising 24 items) that explained 97.6% of the total variance of the CSES-V. Cronbach's alpha coefficients of all three factors were acceptable. All levels of measurement invariance between male and female participants and longitudinal measurement invariance were well supported. The three factors of the CSES-V were positively correlated with the MHC-SF and were negatively correlated with the DASS21 subscales at a low or moderate level, supporting the external aspect of the construct validity. Conclusions The CSES-V is recommended for assessing coping self-efficacy among Vietnamese adolescents who are attending school. Supplementary Information The online version contains supplementary material available at 10.1186/s40359-022-00770-3. Background Stress arises from a person-environment relationship in which a person perceives the threats or demands being made on them as exceeding their ability to cope [1,2]. There are many life events or situations that can become stressors; for example, interpersonal conflict, death of loved ones, illness, heavy workload or excessive responsibility. Stress is common in every stage of life [3]. Stress, especially if severe and prolonged, can be a triggering factor for many diseases and pathological conditions including cognitive and mental health problems [4][5][6]. Lazarus & Folkman's Stress and Coping Theory [1] defines coping as the thoughts or actions employed to manage stressful situations. Developing a coping strategy for each stressful event is the result of an individual cognitive evaluative process (appraisal) of the circumstance. Two stages are involved: (1) the prediction of adverse outcomes (primary appraisal), and (2) consideration of different ways to respond (secondary appraisal). Coping strategies can be categorised into problem-focused and emotion-focused coping. Problem-focused coping concentrates on changing the stressor itself and its physical impact. Emotion-focused coping centres around managing emotional responses to stressful events. However, some coping responses do not fit completely into either category.
For instance, seeking social support can be a problem-focused effort (change the situation) or an emotion-focused action (soothe the distressing emotion). When the coping strategies fit the stressful events/situations well, people can diminish the influence of stressful experiences, and in turn reduce immediate and future psychological and physical health impacts [7]. Coping with stress self-efficacy is a construct that was recently formulated by integrating the Lazarus & Folkman's Stress and Coping Theory [1] and Bandura's Selfefficacy Theory [8]. In general, perceived self-efficacy is the belief an individual has about their ability to adequately perform a specific behaviour. Specific self-efficacy for coping with stress is an individual's subjective judgement about their own ability to handle stressful situations effectively [9]. Self-efficacy for coping with stress affects both appraisal stages in the Stress and Coping Theory. First, an individual's degree of belief that they can solve the problem or regulating their emotions determines the adverse outcome they predict. Second, perception of self-efficacy plays a crucial role in the choice and implementation of coping strategies. Individuals will choose, organise and carry out actions that they believe are useful and effective for dealing with the situation. Therefore, high self-efficacy for coping with stress can prevent or reduce the stress as well as its health impacts. Research interest in stress and coping became widespread in 1970s and 1980s [10], leading to the development of a number of instruments to assess coping with stress (for instance, the Miller Behavioral Style Scale [11]; the Ways of Coping Questionnaire [12]; the COPE Inventory [13]; the Coping Strategy Indicator [14]; the Mainz Coping Inventory [15]; and the Coping Inventory for Stressful Situations [16]). However, all of these instruments assess coping strategies per se, rather than self-efficacy for coping. Chesney and colleagues, in collaboration with Dr. Albert Bandura from Stanford University, who postulated a self-efficacy theory [8], developed the Coping Self-Efficacy Scale (CSES), one of very few scales to measure perceived self-efficacy for coping with challenges and threats [7]. A total of 26 behaviours is asked about in the CSES and grouped into three categories of coping strategies: problem-focused (12 items), emotion-focused (9 items), and get support from friends and family (5 items). Chesney et al. [7] was the first to empirically examine the construct validity of the CSES. The 3-factor structure (i.e., problem-focused, emotion-focused, and getting social support) yields strong internal consistency and test-retest reliability supported by the data. Concurrent validity (correlations between the CSES and measures of psychological distress and well-being, ways of coping, and social support) and predictive validity (change scores in using problem-and emotion-focused coping skills were predictive of reduced psychological distress and increased psychological well-being over time) were established. A shortened version with 13 items was proposed. However, Chesney et al. suggested using the full version of 26 items to recheck the construct validity, because the first validation study included a sample of participants who were homosexual men infected with HIV and diagnosed with depressed mood, and were thus not representative of the general population. There were some other attempts to validate the CSES [17][18][19][20]. 
Among those, the most outstanding study was conducted by Colodro et al. [18] with a communitybased sample of 182 adults from 18 to 66 years of age in the UK. Overall, the findings in the Chesney et al. 's study [7] including the factorial structure, concurrent validity and predictive validity of the full version of CSES were confirmed with data from the community-based sample in Colodro et al. 's study. However, three of the 26 items with lowest loadings were suggested as not 'sufficiently suitable for inclusion in the scale' . Another validation study, conducted in South Africa by van Wyk [20], included a convenience sample of 2214 people aged from 16 to 46 years. Chesney's original factorial structure of the CSES fitted van Wyk's study data well. Van Wyk's data also support high internal consistency (Cronbach alpha of 0.87) and good criterion-related validity of the CSES. Cunningham et al. [17] validated the CSES among a clinical sample of military service members receiving mental health or substance abuse treatment in the USA. The original three-factor model was supported in Cunningham et al. 's study. Finally, Tol et al. [19] validated an Iranian version of the CSES for use among people with type 2 diabetes mellitus. The original factorial structure of the CSES was not supported; instead two items were omitted and a four-factor model was found to fit the data well. Common mental health problems, especially depression and anxiety, are prevalent worldwide at almost every stage of life including adolescence [21]. For many people with common mental health problems, the first onset occurs during adolescence [22,23]. Mental health problems during this period are associated with higher risks of subsequent mental health problems in adulthood [24]. Therefore, public health interventions for adolescents' mental health are urgently needed not only for current adolescents' well-being but also for future adults' quality of life and productivity. Mental health problems among adolescents have been recognised in public health research in Vietnam for more than a decade. In 2011, Amstadter and colleagues [25] reported data from a large-scale community-based study that 9.1% of adolescents aged 11-18 years were considered to have a mental health problem. Recent studies found that up to 22.9% of adolescents experienced clinically significant symptoms of depression [26], 22.8% had clinically significant symptoms of anxiety [27], and 14.1% had suicidal thoughts [28]. Although common mental health problems including depression and anxiety are being recognised increasingly by policy makers, there are a lack of public health interventions to support adolescent mental health in Vietnam. High coping self-efficacy is a protective factor for depression and anxiety disorders [29,30]. Coping selfefficacy is increasingly recognised as being changeable through psycho-educational programs [31]. There have been a number of recent attempts to develop programs aiming to promote positive coping self-efficacy for addressing mental health problems [7]. An instrument to assess coping self-efficacy is necessary for these interventions and research. To our knowledge, there is no coping self-efficacy scale that has been validated for use among adolescents. This study aimed to examine the construct validity of the CSES for use among high school students in Vietnam. The CSES was selected for several reasons: (1) it covers all major domains of coping strategies (i.e. 
problem-focused, emotion-focused, and help seeking), (2) it has 26 items, meaning it is not too brief and not too long, (3) the 11-point scale for each item means that the scale can provide detailed data, and (4) it evaluates the person's confidence regarding implementing coping strategies, and changes in scale scores reflect changes in the individual's confidence regarding their ability to cope. The CSES holds great promise for use in public health and research to inform effective interventions to help adolescents better handle both acute and chronic stress [7]. The objectives of this study were to evaluate the (1) factorial structure, (2) measurement invariance, (3) internal consistency, and (4) concurrent validity of a Vietnamese version of the CSES. We used data collected from an intervention study (hereafter called the main study) of a school-based psycho-educational program for adolescents' mental health conducted in Hanoi, Vietnam [32]. We hypothesised that the data would support all aspects of construct validity of the CSES. Settings Vietnam is Southeast Asian country with a population of 96 million. The average national per capita income in 2019 was USD2,590, and Vietnam is classified as a lowermiddle income country [33]. Children and adolescents account for a third of the population. Nationally, about 8.3% of school-age children (6-18 years old) are out of school [34]. Hanoi, the capital city, is one of the two largest cities in Vietnam. Of the 8 million people living in Hanoi, the population is split equally between those living in urban and rural areas. Participants A multiple-stage sampling method was used in the main study to select the participants. In the first stage, two districts were randomly selected from a total of 12 urban districts and another two districts were randomly selected from a total of 18 rural districts in Hanoi. In the second stage, in each of the selected districts, two high schools were randomly selected and four grade 10 classes from each of the selected schools were randomly chosen. Finally, all students in the selected classes were eligible and invited to participate. An independent statistician conducted the selection process. A total of 1084 (552 controls and 532 interventions) adolescents aged 15-16 years participated in the main study. All participants of the main study were eligible and included in this validation study. Procedures In the main study, data were collected at baseline (at recruitment) and endline (about two months after recruitment) using a self-completed questionnaire at school during a usual 45-min class. In each session, two research assistants from the Hanoi University of Public Health (HUPH) gave instructions on how to complete the questionnaire and supervised the students to ensure the privacy and confidentiality. Students returned the questionnaire in a sealed envelope which was provided at the beginning of the session. Students who did not want to participate or did not have parental consent to participate were invited to go to do their homework at the school library (44 students, 3.9%). Coping self-efficacy scale-Vietnamese version The Vietnamese Version of the original 26-item version of the Coping Self-Efficacy Scale (CSES-V) developed by Chesney and colleagues was used in this study [7]. 
For each item, students are asked to rate on an 11-point scale the extent to which they believe they could perform a behaviour when things aren't going well, or when they are having problems (0 'cannot do at all' to 10 'certain can do'). The translation into Vietnamese was performed using a standardised procedure (translate, culturally verify and back-translate) established and used in previous studies [35][36][37]. Depression anxiety and stress scales (DASS 21) The symptoms of depression, anxiety and stress were assessed using the DASS 21 [38] which includes 21 items in three sub-scales (each has seven items): Depression (DASS21-D), Anxiety (DASS21-A), and Stress (DASS21-S). Each item has four short response options reflecting the severity of the symptom and scoring from 0 = "Did not apply to me at all" to 3 = "Applied to me very much, or most of the time". Higher subscale scores indicate more symptoms of the mental health problem measured by the subscale. Evidence for the factorial structure and internal consistency of DASS 21 for use among Vietnamese adolescents has been established [39] (Cronbach alphas of 0.835 for the Depression, 0.737 for the Anxiety and 0.761 for the Stress subscale). Mental health continuum short form (MHC-SF) General mental well-being was assessed using the MHC-SF [40,41]. The MHC-SF comprises 14 items and each item is scored from 0 = "Never" to 5 = "Every day". All item scores are summed to yield a global well-being score from 0 to 70. Higher global well-being scores reflect better mental well-being. Ha et al. confirmed the construct validity of the MHC-SF for use in adolescents in Vietnam [42]. Statistical analyses In this study, we examined two aspects of construct validity of the CSES-V, namely structural and external validity [39]. Structural aspect The factorial structure of the CSES-V was examined using exploratory factor analysis with principal factor extraction (free of distribution assumptions). The number of factors selected was decided based the scree plot (plot of the eigenvalues of factors) and meaningful factors. After the number of factors was determined, we use an oblique rotation (promax) to reach a simple structure if more than one factor was found. We omitted from the final version the items with factor loadings < 0.3, as they were interpreted as being not salient. For every item cross-loading into two or more factors, it was assigned to the factor with the highest factor loading value. Measurement invariance (measuring the same construct(s) in the same way across the subgroups of participants) of the CSES-V was examined between male and female participants using multiple group confirmatory factor analysis (MGCFA) in three levels: configural; metric; and scalar invariance [43,44]. The lowest level of measurement invariance, configural invariance, requires the number of factors and loading pattern to be the same across groups. The configural invariance holds if the overall MGCFA model fits the data well (the root mean square error of approximation (RMSEA) value of < 0.05, comparative fit index (CFI) > 0.95, and Tucker-Lewis index (TLI) > 0.95) [44,45]. The second level of measurement invariance is the metric invariance level in which the factor loadings of the items of the instrument must be equivalent across groups. The fit of the metric model was compared with the fit of the configural model to assess metric invariance. 
The highest level of measurement invariance is scalar invariance, which requires the item intercepts to be equivalent across groups, in addition to the metric invariance. If the metric invariance is achieved, the fit of the scalar model is compared with the fit of the metric model to assess scalar invariance. We used the criteria: the decreases of CFI values of less than or equal to 0.01 and increases in RMSEA values of less than or equal to 0.015 from the compared model indicating that there is no difference between the models and invariance at that step is supported [46][47][48]. We did not use Chi-square tests to test model fit differences between models, because Chi-square tests are heavily influenced by the sample size [44]. Longitudinal measurement invariance (measuring the same construct in the same metric across time points) of the CSES-V was examined using the same statistical approach (MGCFA) in the three levels as in the examination of the measurement invariance between participants' sexes. The internal consistency of the scale was assessed using the Cronbach's alpha coefficient. The coefficient > 0.7 indicates acceptable internal reliability [49]. External aspect Concurrent validity (whether the CSES-V correlates with the measures of related constructs) was examined using Pearson's correlation coefficients between CSES and DASS21-D, DASS21-A, DASS21-S, and MHC-SF scores. Stronger coping self-efficacy is negatively associated with depressive, anxiety and stress symptoms and positively associated with mental well-being [50][51][52]. It was expected that the CSES-V would be correlated with all measures at low or moderate levels (correlation coefficients around 0.3 to 0.5). We used data collected from all participants at baseline in all analyses, except for the assessment of longitudinal measurement invariance. For the longitudinal measurement invariance, we used data collected at baseline and endline from participants of the control group only. Several methods for treating missing data were used in this study. First, the cases with more than 20% of CSES-V data items missing were excluded. Second, missing data in the scales (CSES, DASS21-D, DASS21-A, DASS21-S, or MHC-SF) were imputed if a case had missing data for less than or equal to 20% of the number of items of that scale. Regression imputation was used; all other items of these scales and sociodemographic characteristics (school, sex, and age) were used as predictors to impute the missing data. Thirdly, the remaining missing data were treated using full information maximum likelihood estimation under missing at random assumption in the MGCFA. Finally, we used the pairwise deletion approach in other analyses. MGCFA were conducted in Mplus Version 7.4 [53]. All other analyses were carried out using Stata Version 16 [54]. The data, analytic methods (code) used in the analysis, and materials used to conduct the research will be made available to any researcher for purposes of reproducing the results or replicating the procedure on reasonable request to the corresponding author. Samples Among the 1084 students who participated in the main study, 13 (1.2%) had missing data in any CSES item at baseline. We excluded two cases (0.2%, one in control and one in intervention group) who had more than 20% CSES-V data items missing. There were 76 participants (7.0%) missing any data in items in the DASS scales, and 59 (5.4%) missing any MHC-SF data. 
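The missing-data rules described above (exclude cases missing more than 20% of CSES-V items; regression-impute a scale when no more than 20% of its items are missing, using the remaining items plus sociodemographic characteristics as predictors) can be expressed as a short pre-processing step. The sketch below is a minimal illustration under those assumptions; the column names and the use of scikit-learn's IterativeImputer as the regression imputer are placeholders rather than the study's actual code, and predictors are assumed to be numerically coded.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

# Sketch of the missing-data rules described above. Column names (cses_1..cses_26,
# school, sex, age, ...) are illustrative placeholders, not the study's variables.

CSES_ITEMS = [f"cses_{i}" for i in range(1, 27)]

def apply_missing_data_rules(df: pd.DataFrame, scale_items: list[str],
                             predictors: list[str]) -> pd.DataFrame:
    """Drop cases missing >20% of CSES-V items, then regression-impute a scale
    for cases missing <=20% of its items, using the other items plus predictors."""
    # Rule 1: exclude cases with more than 20% of CSES-V items missing
    too_sparse = df[CSES_ITEMS].isna().mean(axis=1) > 0.20
    df = df.loc[~too_sparse].copy()

    # Rule 2: impute the scale only for cases missing <= 20% of its items;
    # remaining missing values are left for FIML / pairwise deletion downstream
    eligible = df[scale_items].isna().mean(axis=1) <= 0.20
    cols = scale_items + predictors                      # predictors assumed numeric
    imputer = IterativeImputer(random_state=0)           # round-robin regression imputation
    imputed = pd.DataFrame(imputer.fit_transform(df.loc[eligible, cols]),
                           columns=cols, index=df.loc[eligible].index)
    df.loc[eligible, scale_items] = imputed[scale_items]
    return df

# Usage (hypothetical): df = apply_missing_data_rules(df, CSES_ITEMS, ["school", "sex", "age"])
```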
Among the 551 students in the control group included in this validation study, 541 (98.2%) were followed up and provided complete data at endline. A total of 657/1082 participants (60.7%) were girls. The mean (standard deviation) age of the participants was 15.3 years (0.3). Exploratory factor analysis The scree plot of the exploratory factor analysis of the CSES-V (Additional file 1: Fig. S1) shows that eigenvalues seem to level off between three and four factors, suggesting that the optimal number of factors is three. The three factors had eigenvalues of approximately 1 or higher and together explained 97.6% of the total variance (Additional file 1: Table S1). There were two items (items 21 and 23) that did not load into any of the three factors after the rotation (Table 1).
Table 1. Factor loadings from the exploratory factor analysis of the Coping Self-Efficacy Scale-Vietnamese Version (CSES-V); blanks represent factor loadings < 0.3.
Items 18 and 22 cross-loaded into two factors and were assigned to Factor 1. Finally, nine items loading into Factor 1 were emotion-focused coping strategies; the 10 items loading into Factor 2 were problem-focused; and the five items loading into Factor 3 were social support/interaction coping strategies. Measurement invariance The MGCFA of the three-factor models (Table 2) supported all three levels of measurement invariance between sexes and longitudinal measurement invariance between baseline and endline. The overall MGCFA models (the configural models) fitted the data well and the fitting indices of the metric and scalar models were almost identical to those of the configural models. Correlations and internal consistency The three factors of the CSES-V were correlated with each other at moderate levels (Table 3), which supports that the three factors are different facets of the same construct, namely coping self-efficacy. All three factors were positively associated with the MHC-SF and negatively associated with the DASS21 sub-scales at a low or moderate level, as hypothesised. Discussion This study established the evidence of the construct validity of the CSES-V for use among adolescents in Vietnam. The findings strongly confirm the factorial structure of the CSES-V with three factors. All levels of measurement invariance between males and females and longitudinal measurement invariance were strongly supported. All three factors were found to have acceptable internal consistency and were correlated with several mental health measures, as expected. Like the original validation study of the CSES [7], this study found the same three-factor structure: emotion-focused, problem-focused, and social support/interaction coping strategies. We suggested the exclusion of items 21 'Visualize a pleasant activity or place' and 23 'Pray or meditate' as they had factor loadings lower than the cut-off. These items also had the lowest factor loadings among the items loaded into the problem-focused factor in the original validation study [7]. Positive imagery is a technique commonly used in psychotherapy for stress reduction [55]. It might not be commonly used among the general population, including adolescents, because some guidance and practice may be needed in order to integrate it as an individual coping strategy. 'Pray or meditate' was also one of three items suggested for exclusion by Colodro et al.'s study in the general population in the UK [18], but this item had a good correlation with the total scale score in a study in Iran [19].
'Pray or meditate' is a coping strategy that may be more commonly used by people who are spiritual or who practice meditation. In Vietnam, Buddhism has historically been the dominant religion. However, nowadays many people, especially adolescents, are not showing as much commitment to religious beliefs. This may explain why 'pray or meditate' was not a strategy widely endorsed among Vietnamese adolescents. There are a few inconsistencies between the findings of this study and the original validation study of the CSES [7]. Item 2 'Talk positively to yourself ' loaded into the problem-focused factor and item 18 'Do something positive for yourself when you are feeling discouraged' loaded into social support factor in the original validation study, but both loaded into the emotion-focused factor in our study. Problem-focused coping strategies concentrates on changing the stressor itself while emotion-focused coping centres around managing emotional responses to stressful events [1]. 'Talk positively to yourself ' cannot directly modify the stressor itself but it is a regulative effort to diminish the emotional consequences of stressful events. Therefore, item 2 is more relevant to emotion regulation than problemfocused strategies. 'Do something positive for yourself when you are feeling discouraged' can be related to social support strategies if the individual gets support from friends and/or families that is also positive for themselves. However, in the data of this study, this item is more relevant to emotion-focused strategies and it makes sense as 'do something positive for yourself ' can directly improve their emotional status. Item 9 'Develop new hobbies or recreations' and item 24 'Get emotional support from community organisations or resources' had low factor loadings in the original validation study which only included adults, but acceptable factor loadings in our study. These results suggest that these two items may be more relevant to adolescents than to adults. It is known that the ability to learn new things peaks in early childhood and adolescence and reduces gradually in adulthood [56]. Therefore, adolescents might be more likely than adults to develop new hobbies or skills to respond to stressful events or situations. Adolescents are often still dependent on their parents/carers, and thus may be accustomed to seeking help from their immediate family, or being supported by their parents/carers to seek external support. In contrast, adults are more often living independently, and may therefore have less support, or find it more challenging to seek help from services or community organisations. This is, to our knowledge, the first attempt to validate a coping self-efficacy measure for use among adolescents. We provide evidence on multiple aspects of the construct validity using a large sample size. However, we acknowledge several methodological limitations of this study. First, we included adolescents attending school in Hanoi and in a narrow age range. This specific sample may affect generalisation of the findings to the all Vietnamese adolescents. Criterion validity (how well the scale scores agree with a 'gold standard' measure), which is an important aspect of construct validity, was not evaluated in this study. We were not able to find a gold standard measure of coping self-efficacy. Implications and conclusions The evidence of the construct validity of the CSES-V in Vietnamese adolescents is established. 
Table 3. Correlations between the Coping Self-Efficacy Scale-Vietnamese Version (CSES-V) and mental health scales.
This scale may be useful for school counsellors or clinical psychologists who work with adolescents, school mental health programs, primary health care, and research on adolescents' stress and coping. We recommend that the continuous scores of this scale be used rather than any categories, because no cut-off points have been validated to date.
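For readers who wish to apply the validated scale, the sketch below illustrates, under stated assumptions, how the three retained factors could be scored and checked in practice. The item-to-factor assignment and column names shown are placeholders; only the factor sizes (nine emotion-focused, ten problem-focused and five social-support items, with items 21 and 23 excluded) follow the paper, and the actual item lists are given in Table 1.

```python
import pandas as pd

# Sketch of scoring the 24 retained CSES-V items into the three factors and
# checking internal consistency / concurrent correlations.
# The item->factor mapping and column names below are illustrative placeholders;
# consult Table 1 of the paper for the real item assignment.

FACTORS = {
    "emotion_focused": [f"cses_{i}" for i in (2, 4, 6, 10, 14, 17, 18, 20, 22)],    # 9 items (assumed)
    "problem_focused": [f"cses_{i}" for i in (1, 3, 5, 8, 9, 11, 12, 15, 16, 25)],  # 10 items (assumed)
    "social_support":  [f"cses_{i}" for i in (7, 13, 19, 24, 26)],                  # 5 items (assumed)
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (0-10 ratings)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def score_cses(df: pd.DataFrame) -> pd.DataFrame:
    """Add one summed score per factor; continuous scores are recommended."""
    scored = df.copy()
    for factor, cols in FACTORS.items():
        scored[factor] = scored[cols].sum(axis=1)
    return scored

# Usage (hypothetical file and column names):
# df = pd.read_csv("cses_v_survey.csv")        # one row per adolescent, items scored 0-10
# df = score_cses(df)
# for factor, cols in FACTORS.items():
#     print(factor, "alpha =", round(cronbach_alpha(df[cols]), 2))
# print(df[["emotion_focused", "dass21_d", "mhc_sf"]].corr(method="pearson"))
```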
Rubiarbonol B induces RIPK1-dependent necroptosis via NOX1-derived ROS production The activation of receptor-interacting protein kinase 1 (RIPK1) by death-inducing signaling complex (DISC) formation is essential for triggering the necroptotic mode of cell death under apoptosis-deficient conditions. Thus, targeting the induction of necroptosis by modulating RIPK1 activity could be an effective strategy to bypass apoptosis resistance in certain types of cancer. In this study, we screened a series of arborinane triterpenoids purified from Rubia philippinesis and identified rubiarbonol B (Ru–B) as a potent caspase-8 activator that induces DISC-mediated apoptosis in multiple types of cancer cells. However, in RIPK3-expressing human colorectal cancer (CRC) cells, the pharmacological or genetic inhibition of caspase-8 shifted the mode of cell death by Ru–B from apoptosis to necroptosis though upregulation of RIPK1 phosphorylation. Conversely, Ru–B-induced cell death was almost completely abrogated by RIPK1 deficiency. The enhanced RIPK1 phosphorylation and necroptosis triggered by Ru–B treatment occurred independently of tumor necrosis factor receptor signaling and was mediated by the production of reactive oxygen species via NADPH oxidase 1 in CRC cells. Thus, we propose Ru–B as a novel anticancer agent that activates RIPK1-dependent cell death via ROS production, and suggest its potential as a novel necroptosis-targeting compound in apoptosis-resistant CRC. Introduction The death domain kinase, receptor-interacting protein kinase 1 (RIPK1), is an essential mediator of the activation of programmed cell death (PCD) via apoptosis and necroptosis and exerts its effects by integrating signaling complexes following the ligation of cell surface receptors, such as tumor necrosis factor receptor 1 (TNFR1) and toll-like receptor 3 (Humphries et al. 2015;Meylan et al. 2004;Silke 2011;Witt and Vucic 2017). Upon ligation with TNFR1, RIPK1 can transduce the signal to either cell survival or PCD, depending on the engagement of activated adapter proteins and/or the cellular context. Recently, it has been proposed that post-translational modifications of RIPK1 in spatially distinct TNFR1 complexes (complex-I and -II) play an important role in determining cell fate (Kang et al. 2019;Ting and Bertrand 2016). The conjugation of non-degradable poly-ubiquitin chains to RIPK1 bound within complex-I maintains the survival function of RIPK1 by acting as a scaffold for the recruitment of pro-survival kinases, such as IκB kinase, which are essential for the activation of nuclear factor-κB (NF-κB) (Dynek et al. 2010;Fritsch et al. 2014;Wertz 2014). Conversely, de-ubiquitination of RIPK1 by loss of ubiquitin ligase such as cellular inhibitor of apoptosis 1 and 2 (cIAP1/2) and linear ubiquitin chain assembly complex (LUBAC) dissociates from complex-I and recruits caspase-8 and its adapter protein FADD to form a cytosolic death-inducing signaling complex (DISC, also termed as complex-II) that elicits characteristic RIPK1-dependent apoptosis (RDA). (Brenner et al. 2015;Micheau and Tschopp 2003;Dickens et al. 2012;Annibaldi and Meier 2018). More recent studies have reported that the ubiquitination-dependent phosphorylation of RIPK1 by IKKα/β, transforming growth factor β activated protein kinase (TAK1) and TANKbinding kinase 1 (TBK1) in complex-I protects cells from RDA by counteracting the assembly of complex-II (Dondelinger et al. 2015;Lafont et al. 2018;Geng et al. 2017;Xu et al. 2018). 
During RDA, RIPK1 is rapidly cleaved by the activated caspase-8, which in turn suppress the further activation of RIPK1 (Newton et al. 2019). Consequently, genetic or pharmacological inhibition of caspase-8 activity greatly increases the cytotoxic potential of RIPK1 via Ser166-autophosphorylation, which promotes the switch to RIPK1-dependent necroptosis by inducing the recruitment of RIPK3 and mixed lineage kinase-domain-like (MLKL) (Kaiser et al. 2011;Li et al. 2012;Newton 2015). In various human cancers, genetic alterations occur that play an important role in the evasion of apoptosis, a hallmark of cancer that represents a major mechanism of cellular resistance to current cancer treatments including radiation and chemotherapeutic drugs (Croce and Reed 2016;Hanahan and Weinberg 2000). Caspase-8 is often inactivated by somatic mutations or epigenetic methylation in multiple types of human cancer, including colorectal cancer (CRC) (Hopkins-Donaldson et al. 2003;Kim et al. 2003;Teitz et al. 2000). In caspase-8-deficient CRC, the use of Smac mimetics can reduce cellular inhibition of apoptosis protein(cIAP)-mediated RIPK1 ubiquitination, which overcomes apoptosis resistance by inducing RIPK1-dependent necroptosis (He et al. 2017). Furthermore, DNA damaging compounds such as etoposide and doxorubicin induce apoptosis or necroptosis (depending on the cellular context) without the involvement of TNFR1 ligation via the Ripoptosome, a cytosolic complex containing RIPK1, FADD and caspase-8 (Bertrand and Vandenabeele 2011;Koo et al. 2015;Tenev et al. 2011). Thus, the discovery of a substance capable of inducing the necroptotic mode of cell death via the Ripoptosome could lead to an effective chemotherapeutic strategy for eradicating apoptosisresistant cancer cells. Triterpenoids comprise the largest group of plant natural products and possess a diverse range of pharmacological activities (Gill et al. 2016). Pentacyclic triterpenoids in particular exhibit promising antitumor activity, regulating multiple cellular pathways related to apoptosis, the cell cycle and angiogenesis (Markov et al. 2017;Patlolla and Rao 2012). Arborinane-type triterpenoids constitute a rare group of pentacyclic triterpenoids. Recently, arborinanetype triterpenoids such as rubiarbonol G and myrotheols A have attracted attention from chemists and pharmacologists due to their potential to induce apoptosis and cell cycle arrest in various cancer cell types (Basnet et al. 2019;Zeng et al. 2018). However, the activity of arborinane-type triterpenoids toward necroptotic inducers and Ripoptosome formation is largely unknown. The genus Rubia is a rich source of arborinane-type triterpenoids. In a previous phytochemical study, we isolated a series of arborinanetype triterpenoids from Rubia philippinesis (R. philippinesis) (Quan et al. 2016). In the present study, we show that a novel arborinane triterpenoid isolated from R. philippinesis, rubiarbonol B (Ru-B), elicited apoptotic and necroptotic cell death via Ripoptosome formation in RIPK3-expressing CRC cells. When apoptosis was blocked, Ru-B triggered a shift from apoptotic to RIPK1-dependent necroptotic cell death. The RIPK1-dependent cell death was mediated by NADPH oxidase 1 (NOX1)-derived reactive oxygen species (ROS) generation, which led to TNFR1independent RIPK1 phosphorylation. 
Our findings provide insight into the interplay between necroptotic cell death and ROS-mediated RIPK1 phosphorylation that underlies the cytotoxic potential of Ru-B and offer a potential therapeutic strategy for the treatment of refractory CRC, which is resistant to proapoptotic stimuli. Extraction of rubiarbonol B Arborinane-type triterpenoids, rubiarbonol B (Ru-B) was isolated from our previous chemical investigation on R. phillippinensis (Quan et al. 2016 CRISPR/Cas-9 mediated KO cells generation For the depletion of RIPK1, RIPK3, caspase-8, and FADD in HT-29 cells, oligos were synthesized and inserted into the px330-puro vector through a standard protocol to generate gRNA with hCas9 protein. gRNA sequences were designed using the open-access software provided at http:// chopc hop. cbu. uib. no/. gRNA sequences were as follows: RIPK1-CTC GGG CGC CAT GTA GTA GA; RIPK3-CGG GCG CAA CAT AGG AAG TG; caspase-8-CAC CGA ACG AGA TAT ATC CCG GAT G; FADD-ACA CGC TCT GTC AGG TTG CG. The targeting plasmid was transfected into HT-29 cells using Lipofectamine 2000 reagent according to the manufacturer's instructions (Invitrogen Life Technologies, Franklin, MA, USA). After 24 h, cells were exposed with 3 μg/ml puromycin for two days, and clones propagated from single cells were picked out. The depletion of target genes was confirmed by both immunoblotting and genomic DNA sequencing. Caspase-8 activation assay HCT116 cells were plated in 96-well plates and treated with a series of constituents (10 μM) derived from R. phillippinensis for 24 h. Caspase-8 activity was measured using a Caspase-Glo 8 assay kit (Promega, USA) that utilizes luminogenic caspase-8 substrates, following the manufacturer's instructions. The luminescence intensity of each sample was measured in a plate-reading luminometer (Infinite 200pro, Tecan, Switzerland). Determination of cell death After treatment as described in the figure legends, a cell viability assay was conducted utilizing Cell Titer-glo Luminescent Cell Viability Assay kit (Promega, USA), which measures cell viability based on ATP levels present in live cells. Luminescent measurements were taken on a microplate leader (Infinite 200pro, Tecan, Switzerland). Representative images were also taken by an inverted microscope (EVOS M5000, Thermo Fisher Scientific, USA). For the measurement of early/late apoptotic or necrotic cell death, cells were stained with 10 μM fluorescein isothiocyanate (FITC)labeled annexin V and propidium iodide (PI), in a Ca 2+ -enriched binding buffer (10 mM HEPES, pH 7.4, 140 mM NaCl, and 2.5 mM CaCl 2 ), and analyzed by two-color flow cytometry. The fluorescence of cells was analyzed by NovoCyte Flow Cytometer (ACEA Biosciences, USA). Immunoblot analysis and immunoprecipitation After treatment as described in the figure legends, cells were collected and lysed in M2 buffer (20 mM Tris, pH 7.6, 0.5% NP-40, 250 mM NaCl, 3 mM EDTA, 3 mM EGTA, 2 mM dithiothreitol, 0.5 mM PMSF, 20 mM β-glycerol phosphate, 1 mM sodium vanadate, and 1 µg/mL leupeptin). Cell lysates were fractionated by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and visualized by enhanced chemiluminescence (Thermo Fisher, USA). For immunoprecipitation assay, the lysates were precipitated with the relevant antibodies and protein A-or G-sepharose beads overnight at 4 °C. The beads were washed three times with M2 buffer, and the bound proteins were resolved in 10% SDS-PAGE for immunoblot analysis. 
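Both the caspase-8 activity assay and the cell viability assay described above yield raw relative luminescence units (RLU), so the downstream calculation reduces to a normalisation against vehicle-treated control wells. The sketch below illustrates only that normalisation step; the well values, condition labels and triplicate layout are hypothetical placeholders and are not data from this study.

```python
import statistics

# Minimal sketch: fold-change in caspase-8 activity (Caspase-Glo 8) and viability
# as % of control (CellTiter-Glo) from raw plate-reader RLU readings.
# The readings below are hypothetical placeholders, not data from the study.

raw_rlu = {
    # condition: triplicate RLU values per readout
    "vehicle":    {"caspase8": [1050, 980, 1010], "viability": [52000, 50500, 51800]},
    "Ru-B 10 uM": {"caspase8": [4200, 4550, 4380], "viability": [21000, 19800, 20500]},
}

def summarize(condition: str) -> dict:
    """Fold caspase-8 activation and % viability relative to the vehicle control."""
    ctrl_c8 = statistics.mean(raw_rlu["vehicle"]["caspase8"])
    ctrl_vi = statistics.mean(raw_rlu["vehicle"]["viability"])
    c8 = statistics.mean(raw_rlu[condition]["caspase8"])
    vi = raw_rlu[condition]["viability"]
    return {
        "condition": condition,
        "caspase8_fold": round(c8 / ctrl_c8, 2),
        "viability_pct": round(100 * statistics.mean(vi) / ctrl_vi, 1),
        "viability_se": round(statistics.stdev(vi) / len(vi) ** 0.5, 1),  # SE of the triplicate
    }

for cond in raw_rlu:
    print(summarize(cond))
```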
Determination of ROS production Production of intracellular ROS was measured using a fluorescent dye dihydroethidium (DHE) in HT-29 cells. After treatment, as described in the figure legends, cells were incubated with 10 μM DHE in phosphate-buffered saline (PBS) solution containing 10% FBS for 30 min. The stained cells were analyzed with flow cytometry (NovoCyte Flow Cytometer, USA), and the mean fluorescence intensity (MFI) was calculated after correction for autofluorescence is presented. For the quantification of the mitochondrialderived superoxide (O 2 − ) production, cells were incubated with a mitochondria-target probe, Mito-SOX Red (5 μM) with a MitoTracker green (200 nM) for 10 min, and the images were captured using a fluorescent microscope (EVOS M5000, Thermo Fisher Scientific, USA). Statistical analysis Data are expressed as the mean ± SE from at least three separate experiments performed triplicate. Statistical analysis was carried out using one-way analysis of variance, followed by the Bonferroni t test for multi-group comparison tests. The difference between two groups was analyzed using the Student′s t test. P < 0.05 is considered statistically significant. Ru-B induces caspase-8-mediated apoptosis To identify novel small molecules with cytotoxic potential that act on caspase-8, we first screened a set of arborinane-type triterpenoids purified from R. phillippinensis by conducting a protease activity assay using a luminogenic substrate specific for caspase-8. In initial an investigation, Ru-B was found to be a potent caspase-8 activator, exhibiting modest cytotoxicity in HCT116 human CRC cells (Table 1, Fig. 1A). To determine whether the cytotoxic potential of Ru-B is due to caspase-8 activation, we compared the effects of the irreversible caspase-8 and caspase-3 inhibitors z-IETD-fluoromethyl ketone (z-IETD) and z-DEVD-fluoromethyl ketone (z-DEVD) against Ru-B-induced cell death, respectively. Pretreatment Table 1 Screening for caspase-8 activation and cytotoxicity in a series of arborinane-type triterpenoids isolated from R. philippinesis with z-IETD effectively abrogated cell death in response to Ru-B treatment in multiple types of human cancer cells, including HCT116, HeLa and MCF7 cells, while z-DEVD had a marginally preventive effect against cell death (Fig. 1B, 1C). To examine the mode of cell death triggered by Ru-B, HCT116 cells treated with Ru-B were subjected to annexin V and propidium iodide (PI) staining, followed by flow cytometry. Ru-B treatment resulted in significant increases in both the early and late phases of apoptosis (52.8% and 9.14%, respectively), while very few cells were stained exclusively with PI (2.51%) (Fig. 1D). Consistently, such an increased population of cell stained with Annexin V following Ru-B treatment was significantly reduced by pretreatment with z-IETD, but not with z-DEVD. To further investigate the signaling pathway underlying Ru-Binduced apoptosis, we analyzed the sequence of activation processes in the caspase cascades. In a kinetic analysis, treatment of Ru-B resulted in sequential activation of caspase-8 and the resultant cleavage of RIPK1, Bid, caspase-3 and PARP, which was completely inhibited by pretreatment with z-IETD ( Fig. 1E). Caspase-8 is a key initiator of death receptor (DR)-mediated apoptosis upon DR ligation by associating with RIPK1 and FADD to form the DISC. However, catalytic activation of caspase-8 triggers the cleavage of DISC-associated RIPK1, resulting in destabilization of the DISC (Tummers and Green 2017). 
Accordingly, an immunoprecipitation assay using an anti-caspase-8 antibody showed that no evident DISC formation was observed in HCT116 cells following treatment with Ru-B only (Fig. 1F, first to third rows). However, treatment of cells with Ru-B in the presence of z-IETD led to drastic recruitment of RIPK1 and FADD to the isolated caspase-8 (Fig. 1F, fourth and fifth rows). Taken together, these results indicate that caspase-8 activation is a major element Table 1 (continued) † RLU, relative luminescence unit ‡ HCT116 cells were treated with a series of arboriane-type triterpenoids (10 μM) for 24 h, and cell death was quantified using the cell viability assay kit (Promega) and the results were expressed as mean ± SE for Ru-B-induced apoptosis via DISC formation, even though it also serves in the process of destabilization of the DISC by the cleavage of RIPK1. Ru-B triggers a shift from apoptosis to necroptosis in RIPK3-expressing cancer cells Given that caspase-8 inhibits RIPK3-MLKL-mediated necroptosis (Shalini et al. 2015;Newton et al. 2019), it is hypothesized that RIPK3 expression in cancer cells may be play a role in determining the cell fate (apoptosis or necroptosis) in response to Ru-B treatment. To investigate this hypothesis, we compared the effects of z-IETD against Ru-B-induced cell death in pairs of CRC cells either lacking or harboring RIPK3 expression. In line with previous findings, z-IETD pretreatment drastically suppressed Ru-B-induced cell death in HCT116, DLD1 and Caco2 cells, all of which lack RIPK3 expression ( Fig. 2A). Conversely, in RIPK3-expressing cells (HT-29 and SW620 cells), Ru-B-induced cell death was significantly enhanced in the presence of z-IETD ( Fig. 2A). Important to note, pretreatment of HT-29 cells with a RIPK1-specific inhibitor necrostatin-1 (Nec-1) almost completely protected against Ru-B/z-IETD-induced cell death, which was accompanied by impaired phosphorylation of RIPK1, RIPK3 and MLKL (Fig. 2B, Fig. 2C). These and z-DEVD-fmk (10 μM) for 30 min, and then treated with Ru-B (10 μM) for the indicated times. Cell death was quantified by using Cell Titer-glo Luminescent cell viability assay as described in Materials and Methods. The data represent as mean ± S.E. of three experiments carried out in triplicate. *P < 0.05, compared with Ru-B treated group. (C-E) HCT116 cells were pretreated with z-IETD-fmk and z-DEVD-fmk for 30 min, and then treated with Ru-B for the indicated times. (C) After 24 h, cells were visualized using an inverted phasecontrast microscope. (D) Cells were stained with FITC-labeled annexin V and PI and analyzed by flow cytometry as described in Materials and Methods. (E) Whole cell extracts were subjected to immunoblotting with the indicated antibodies. (F) HCT116 cells were pretreated with z-IETD-fmk for 30 min, and then treated with Ru-B for the indicated times. Cell extracts from each sample were subjected to immunoprecipitation (IP) with anti-caspase-8 antibody. Immunoprecipitates were analyzed by immunoblotting with the indicated antibodies. A total of 1% of the cell extract volume from each sample was used as input control results suggest that Ru-B facilitates RIPK1-dependent necroptosis in RIPK3-expressing cells under the caspase-8-blocked condition. Consistent with this notion, cell death induced by Ru-B/z-IETD was drastically abolished by RIPK3-and MLKLspecific inhibitors GSK872 and necrosulfonamide (NSA) pretreatment, respectively (Fig. 2D). 
Since caspase-8 activation decreases the stability of DISC by RIPK1 cleavage, Nec-1 has been shown to partially block RDA, while it completely blocks RIPK1-dependent necroptosis (Degterev et al. 2019;Xu et al. 2018;Kang et al. 2020). Consistently, in the absence of z-IETD, the extent of cell death by Ru-B was significantly but not completely inhibited by Nec-1 pretreatment (Fig. 2B), despite Nec-1 efficiently preventing Ru-B-induced caspase-8 cleavage (Fig. 2C), and thus it is believed that RIPK1 kinase activation at least partially contribute to Ru-B-induced apoptosis. Consistent with previous reports (Degterev et al. 2019;Xu et al. 2018; Cells were treated with Ru-B (10 μM) in the absence or presence of z-IETD-fmk (20 μM) for the indicated times, and cell death was quantified as in Fig. 1B (right). The data represent as mean ± S.E. *P < 0.05, compared with Ru-B treated group. (B, C) HT-29 cells were untreated or pretreated with Nec-1 (50 μM) for 30 min and then treated with Ru-B or in combination with z-IETD-fmk for the indicated times and 24 h, respectively. (B) Cell death was quantified as in A. The data represent as mean ± S.E. *P < 0.05, compared with Ru-B treated group. # P < 0.05, compared with Ru-B/z-IETD-fmk treated group. (C) Whole cell lysates from each sample were subjected to immunoblotting with the indicated antibodies. (D) HT-29 cells were treated with Ru-B or in combination with z-IETDfmk for 24 h in the absence or presence of necroptosis inhibitors Nec-1 (50 μM), GSK-872 (3 μM), and NSA (2 μM). Cell death was quantified as in A. The data represent as mean ± S.E. *P < 0.05, compared with Ru-B treated group. # P < 0.05, compared with Ru-B/z-IETD-fmk treated group. (E) WT, CASP8and FADD-KO HT-29 cells were treated with Ru-B for the indicated times. Whole cell lysates were subjected to immunoblotting with the indicated antibodies. (F, G) WT, CASP8-and FADD-KO HT-29 cells were treated with Ru-B or in combination with z-IETD-fmk for the indicated times. Cell extracts from each sample were subjected to IP with anti-caspase-8 (F) and anti-RIPK3 (G) antibodies, respectively. Immunoprecipitates were analyzed by immunoblotting with the indicated antibodies. A total of 1% of the cell extract volume from each sample was used as input control et al. 2020), although Nec-1 completely abolished RIPK1-dependent necroptosis by TNF/SM164/z-IETD (TSZ), RDA by TNF/SM164 (TS) could only be partially protected ( Supplementary Fig. S1). To further assess the functional relationship between apoptosis and necroptosis following Ru-B treatment, we used CRISPR-Cas9 to knock out caspase-8 or FADD in HT-29 cells and examined the modes of cell death. As expected, caspase-8 signaling cascades were activated without triggering necroptosisrelated events after Ru-B treatment in wild-type (WT) HT-29 cells (Fig. 2E, left panel). By contrast, RIPK1, RIPK3 and MLKL were markedly phosphorylated in both caspase-8-and FADD-deficient HT-29 cells upon Ru-B treatment (Fig. 2E, middle and right panels). To gain further insight into the molecular mechanisms underlying Ru-B-induced RIPK1/3 and MLKL phosphorylation, we examined whether Ru-B induces the necrosome formation under the condition of pharmacological or genetic blockade of DISC-mediated apoptosis. As expected, RIPK1, RIPK3, and MLKL were associated with caspase-8 in HT-29 cells after treatment with Ru-B in the presence of z-IETD (Fig. 2F). 
In parallel, we observed the Ru-B-induced association of necrosome components including RIPK1, RIPK3, and MLKL in caspase-8 deficient HT-29 cells (Fig. 2G). Our results suggest that under physiological conditions where both apoptosis and necroptosis are preserved, cells preferentially undergo apoptosis in response to Ru-B treatment, but can be converted to necroptosis through necrosome formation under apoptosis-limiting condition. RIPK1 phosphorylation plays an essential role in Ru-B-induced necroptosis RIPK1 and its phosphorylation play an essential role in inducing necroptosis by forming an RIPK3-containing necrosome under apoptosis-deficient conditions (Newton 2015). To directly determine whether Ru-B-induced PCD is achieved by targeting RIPK1 or RIPK3, we examined the cytotoxic efficacy of Ru-B and Ru-B/z-IETD treatment in WT, RIPK1and RIPK3-knockout (KO) HT-29 cells. Consistent with both RIPK1 and RIPK3 not being involved in the TNF/cycloheximide (CHX)-induced cell death pathway (Kang et al. 2020;Lin et al. 2004), the cytotoxic effects of TNF/CHX in RIPK1-KO and RIPK3-KO cells was comparable to that of WT HT-29 cells ( Fig. 3A-3C). Of note, Ru-B-induced apoptotic cell death was almost completely abolished in RIPK1-KO, but not in RIPK3-KO HT-29 cells (Fig. 3B, 3C), as evident by caspase-8 cascade activation (Fig. 3D), suggesting that RIPK1 plays an essential role in Ru-B-induced apoptosis. In addition, necroptotic cell death accompanied by RIPK1, RIPK3, and MLKL phosphorylation was completely abolished in RIPK1-KO and RIPK3-KO HT-29 cells treated with Ru-B or TS in the presence of z-IETD (Fig. 3B-3D). Importantly, the phosphorylation of RIPK3 and MLKL was completely abrogated in RIPK1-KO HT-29 cells by Ru-B/z-IETD treatment (Fig. 3D, fourth to sixth rows), suggesting that RIPK1 functions as an upstream kinase responsible for RIPK3 activation to induce Ru-B/z-IETD-induced necroptosis. Similar effects were found in RIPK1-deficient Jurkat T cells (Fig. 3E), confirming that RIPK1 indeed plays a critical role in Ru-B-induced apoptotic and necroptotic cell death. Previously, RIPK3 has been reported to contribute to RDA in mouse embryonic fibroblasts through yet unknown mechanisms (Dondelinger et al. 2013). However, Ru-B-and TS-induced apoptosis observed in WT HT-29 cells was not affected by either RIPK3 deficiency or NSA pretreatment (Fig. 3B, Fig. 3C, Supplementary Fig. S1), thus excluding the possible involvement of RIPK3 and MLKL in RDA process. Consistently, we found that pretreatment of Nec-1 significantly suppressed Ru-Band TS-induced cell death in RIPK3-deficient cells including HCT116, HeLa and MCF7 cells, confirming that RIPK3 unlikely involves in Ru-B-induced RDA ( Supplementary Fig. S2). Because the phosphorylation of RIPK1 at serine residue 166 (Ser 166 ) triggers RIPK1 kinase activity to trigger the downstream cell death signaling (Kang et al. 2019), we next investigated whether Ru-B could induce RIPK1 phosphorylation. We found that in response to Ru-B, RIPK1 was phosphorylated in WT HT-29 cells, peaking at 1 h after Ru-B treatment (Fig. 4A, left panel). Important to note, Ru-Binduced RIPK1 phosphorylation was markedly prolonged and enhanced by z-IETD pretreatment, which was subsequently accompanied with the enhanced phosphorylation of RIPK3 and MLKL (Fig. 4A, right panel). These results suggest that persistent RIPK1 phosphorylation promotes RIPK3/MLKL-mediated necroptosis when the apoptotic pathway is blocked. 
Consistent with this notion, Ru-B-induced RIPK1 phosphorylation was persistent in caspase-8-KO and FADD-KO HT-29 cell, but transient in WT HT-29 cells (Fig. 4B). Moreover, Ru-B-induced RIPK1 and RIPK3 phosphorylation was almost completely inhibited by Nec-1 (Fig. 4C). Subsequent immunoprecipitation assays revealed that Ru-B treatment led to the recruitment of RIPK1 and MLKL into the isolated RIPK3 in caspase-8-KO HT-29 cells, and this necrosome formation was abrogated by Nec-1 (Fig. 4D). These results indicate that the increased RIPK1 phosphorylation triggered by Ru-B treatment likely occurs upstream of RIPK3 and actively drives necroptosis via necrosome formation under apoptosis-deficient conditions. Previously it has been reported that, in the presence of caspase inhibitor, some anti-cancer chemicals including 5-fluorouracil and Smac mimetics induces RIPK1-dependent necroptosis via autocrine TNF-α production (Oliver Metzig et al. 2016; Gerges et al. with Ru-B for the indicated times, and whole cell extracts from each sample were subjected to IP with anti-RIPK3 antibody. Immunoprecipitates were analyzed by immunoblotting with the indicated antibodies. A total of 1% of the cell extract volume from each sample was used as input control 2016). To explore the possibility that enhanced necroptosis triggered by Ru-B/z-IETD is caused by autocrine TNF-α production, we analyzed the mRNA expression of TNF-α in HT-29 cells. As expected, treatment of SM-164 and z-IETD led to a marked increase of TNF-α expression whereas SM-164 alone had only a marginal effect. By contrast, no detectable transcriptional induction of TNF-α was observed in HT-29 cells upon Ru-B alone or Ru-B/z-IETD treatment (Supplementary Fig. S3A). Furthermore, enhanced RIPK1 phosphorylation by Ru-B/z-IETD was not affected by the cycloheximide pretreatment ( Supplementary Fig. S3B). These results suggest that the enhanced RIPK1 phosphorylation and necroptosis by Ru-B/z-IETD does not require de novo TNF-α synthesis. Consistent with these results, we found that the degree of cell death by Ru-B/z-IETD occurred at a similar level in TNFR1-knockdown HT-29 cells when compared to control cells ( Supplementary Fig. S3C). Hence, these data confirm that Ru-B-induced RIPK1-dependent necroptosis under caspase-8 inhibited conditions is independent of TNFR1 signaling. NOX1-derived ROS production induced by Ru-B is required for RIPK1 phosphorylation and RIPK1-dependent cell death Previous in vitro and in vivo experimental studies reported that ROS derived from superoxide (O 2 − ) are involved in RIPK1-dependent necroptosis (Goossens et al. 1995;Kim et al. 2007;Roca and Ramakrishnan 2013). Furthermore, ROS function as a positive feedback loop to enhance necrosome formation via RIPK1 autophosphorylation at Ser 161 (Schenk and Fulda 2015;Zhang et al. 2017). Therefore, we investigated whether intracellular O 2 − levels was increased following Ru-B treatment using dihydroethidium, an oxidative fluorescent dye. Ru-B and Ru-B/z-IETD treatment caused a dramatic increase in O 2 − levels within 30 min in HT-29 cells, which peaked at 1 h after treatment (Fig. 5A). This increase was attenuated when the cells were pretreated with either antioxidants such as butylated hydroxyanisole (BHA) and apocynin or a NOX inhibitor diphenyleneiodonium (DPI); however, intracellular O 2 − levels were not reduced by pretreatment with the mitochondriatargeting antioxidant Mito-TEMPO (Fig. 5B). 
These results suggest that Ru-B induces non-mitochondrial ROS production, potentially via NOX. To exclude the possibility that the ROS production triggered by Ru-B treatment was mitochondrial, we used a mitochondriatargeting hydroethidine analog, MitoSOX Red, to monitor mitochondrial O 2 − production. Treatment of HT-29 cells with carbonyl cyanide chlorophenylhydrazone, a mitochondrial uncoupler, dramatically enhanced the MitoSOX Red oxidation signal, being consistent with the well-established mitochondrial uncoupling effect (Fig. 5C, bottom panel). By contrast, Ru-B treatment did not induce MitoSOX Red oxidation (Fig. 5C, middle panel), indicating that Ru-B-induced ROS production occurs independently of the mitochondria. To determine whether increased ROS production plays a role in Ru-B or Ru-B/z-IETD-induced cell death, we pretreated HT-29 cells with various antioxidants. Pretreatment with BHA or the NOX inhibitors, but not with Mito-TEMPO, significantly prevented cell death in response to Ru-B and Ru-B/z-IETD treatment; this was correlated with their ROS quenching efficiencies (Fig. 5D). Moreover, the sequential phosphorylation of RIPK1, RIPK3 and MLKL upon Ru-B/z-IETD treatment was markedly attenuated in the presence of apocynin (Fig. 5E). These results indicate that ROS generated by NOX enzymes play an important role in RIPK1 phosphorylation, which subsequently leads to RIPK1-dependent apoptosis and necroptosis in response to Ru-B and Ru-B/z-IETD, respectively. Of the known NOX enzymes, NOX1 is expressed in several types of non-phagocytic cells, while NOX2/ gp91 is mainly found in phagocytic cells (Geiszt et al. 2003;Suh et al. 1999). Next, we investigated whether the expression of various NOX isoforms was responsible for Ru-B-induced ROS production. As shown in Fig. 5F, NOX1 and NOX2, were constitutively expressed in various types of CRCs whereas the expression of NOX4 and NOX5 was variable depending on the cell types. Notably, the mRNA levels of NOX1 were high compared to those of the other four NOX isoforms (Fig. 5F). Knockdown of NOX1 led to a significant decrease the ROS production following Ru-B treatment (Fig. 5G), suggesting that NOX1 is the major NOX responsible for Ru-B-induced ROS production in CRCs. Furthermore, knockdown of NOX1, but not NOX2, caused a marked attenuation of cell death against to both apoptotic (Ru-B) and necroptotic (Ru-B/z-IETD) triggers, respectively (Fig. 5H), which was accompanied by decreased cleavage of caspase-8 cascades and reduced phosphorylation of RIPK1 and RIPK3 (Fig. 5I). These data suggest that NOX1derived ROS production plays an essential role in RIPK1mediated cell death in CRC cells. Three cysteine residues on RIPK1 play a crucial role in triggering necroptosis by regulating the ROS-mediated RIPK1 phosphorylation induced by Ru-B It is noteworthy that the three cysteine residues (C257, C268, and C586) in RIPK1 sense ROS signals and thus play a crucial role in RIPK1 autophosphorylation by forming oxidized disulfide bonds and causing RIPK1 to aggregate (Zhang et al. 2017). Thus, it is estimated that ROS induced by Ru-B may function as an upstream signaling component for inducing RIPK1-dependent necroptosis. To explore the underlying upstream regulatory mechanisms leading to RIPK1 phosphorylation and necroptosis by Ru-B/z-IETD, we reconstituted RIPK1 expression in RIPK1-KO HT-29 cells with WT or three cysteine mutants (3CS) RIPK1 expression vector. Fig. 1B. The data represent as mean ± S.E. *p < 0.05, compared with Ru-B treated group. 
# p˂0.05, compared with the Ru-B/z-IETDtreated group. (E) HT-29 cells were pretreated with apocynin (20 μM) and then treated with Ru-B/z-IETD for the indicated times. Whole cell lysates were performed immunoblotting with the indicated antibodies. (F) Total RNA was prepared from the indicated cell lines, and RT-PCR was performed with the primers specific to human NOX isoforms. After PCR amplification, the products were analyzed by agarose gel electrophoresis and visualized using ethidium bromide staining. (G-I) HT-29 cells were transfected with either a nonspecific control siRNA or siRNA specific for NOX1 and NOX2 for 48 h. (G) Cells were treated Ru-B for 1 h, and the superoxide production was analyzed in A. (H) Cells were treated Ru-B or Ru-B/z-IETD for 24 h; cell death was quantified as in D. (I) Cells were treated Ru-B or Ru-B/z-IETD for the indicated times. Whole-cell lysates were performed immunoblotting with the indicated antibodies Consistent with a previous report (Zhang et al. 2017), no significant differences in the recruitment of ubiquitinated-RIPK1 and TRADD into TNFR1 were detected between RIPK1-KO HT-29 cells reconstituted with either WT or 3CS RIPK1 (Fig. 6A). We also observed that treating cells with TNF showed no obvious difference in NF-κB activation between these cells, as evidenced by phosphorylation of IKK and p65 or the degradation of IκBα (Fig. 6B), confirming that residues of these cysteine on RIPK1 are unlikely to be involved in upstream NF-κB activation. By contrast, following Ru-B/z-IETD treatment, RIPK1 phosphorylation (Fig. 6C), as well as the association of the RIPK1, RIPK3 and MLKL with caspase-8 (Fig. 6D), was dramatically reduced in RIPK1-KO HT-29 cells expressing 3CS-RIPK1 compared to those expressing WT-RIPK1. This suggests that the modification of the RIPK1 cysteine residues serves a critical role in Ru-B-mediated RIPK1 phosphorylation and necrosome formation. To further investigate whether cysteine residues on RIPK1 are specifically involved in the necroptosis process, the cell death induced by various stimuli was compared in RIPK1-KO cells reconstituted with WT-RIPK1 and 3CS-RIPK1. As expected, no differences were observed between RIPK1-KO HT-29 cells expressing WT or 3CS-RIPK1 following TNF-related apoptosisinducing ligand (TRAIL) treatment. (Fig. 6E). By (E-G) RIPK1-KO HT-29 cells reconstituted with the indicated RIPK1 constructs were treated with TRAIL (100 ng/ml) or the indicated combination of compounds (10 μM Ru-B; 20 μM z-IETD-fmk, 15 ng/ml TNF; 100 nM SM-164) for 24 h. (E) Cell death was quantified as in Fig. 1B. The data represent as mean ± S.E. *p < 0.05, compared with RIPK1 KO HT-29 cells expressing WT-RIPK1. (F) Cells were visualized using an inverted phase-contrast microscope. (G) Whole cell lysates from each sample were subjected to immunoblotting with the indicated antibodies contrast, necroptotic cell death triggered by either Ru-B/z-IETD or TNF/SM/z-IETD was significantly lower in RIPK1-KO cells reconstituted with 3CS-RIPK1 compared with those expressing WT-RIPK1, as evidenced by cell viability, cell morphology and the phosphorylation of RIPK1, RIPK3 and MLKL (Fig. 6E-6G). Taken together, these results suggest that the three cysteine residues on RIPK1 are essential for Ru-B to induce ROS-mediated necroptosis via amplifying RIPK1 phosphorylation. 
Discussion Given the pivotal role of RIPK1 in triggering necroptosis, small molecules capable of activating RIPK1 kinase activity and RIPK1-dependent PCD in human cancer cells present an alternative means of eradicating cancer cells, by inducing the necroptotic mode of cell death in apoptosis-resistant cancer cells. As part of our search for PCD-inducing bioactive compounds at the DISC level, the novel arborinane triterpenoid Ru-B, which was isolated from R. philippinesis, was identified as a potent inducer of dual RIPK1-dependent modes of apoptosis and necroptosis in RIPK3expressing CRC cells. In this study, we found that Ru-B markedly enhances necroptosis through upregulation of RIPK1 phosphorylation by NOX1-derived ROS production under apoptosis-limiting conditions. Thus, we propose that Ru-B is a novel RIPK1 activator that can provide an efficient strategy for inducing necroptosis to overcome CRC cells resistant apoptosis. Caspase-8 activation is triggered by RIPK1-associated DISC formation following DR ligation, and functions as an initiator caspase that induces the extrinsic apoptotic signaling pathway (Tummers and Green 2017). The anti-cancer properties of several bioactive compounds, including pentacyclic triterpenoids, are known to be closely related to DISC-independent activation of executor caspases (e.g. caspase-3) by inducing mitochondrial dysfunction (Fulda 2010;Fulda and Kroemer 2009;Markov et al. 2017). However, in this study, we found that Ru-B induces caspase-8 activation and DISC formation without affecting the mitochondrial pathway in multiple types of CRC cells (Fig. 1). In addition, pretreatment with the caspase-8 inhibitor significantly protected Ru-B-induced apoptosis in CRC cells lacking RIPK3 expression (Fig. 1). Moreover, we provide evidence that genetic or pharmacological inhibition of caspase-8 not only accelerates cell death by Ru-B, but also can shift the balance of cell death to necroptosis in RIPK3-expressing CRC cells (Fig. 2). Therefore, we propose that Ru-Binduced caspase-8 activation at the DISC level is the major determinant of cell death type (apoptotic or necroptotic). In this sense, the mechanism driving caspase-8 activation and DISC formation in response to Ru-B treatment is a question that remains yet largely unresolved. It has been reported that, in human cancer epithelial cells, including CRC cells, certain pentacyclic triterpenoids can activate caspase-8 by upregulating DRs such as DR5 and FAS in cell surface (Byun et al. 2018;Mou et al. 2011;Sung et al. 2014). However, we observed that the mRNA and protein expression levels of DR5 and FAS were not significantly affected by Ru-B treatment (data not shown). Thus, Ru-B-induced caspase-8 activation via DISC formation is unlikely associated with DR signaling pathway. ROS actively participate in the execution of necroptosis, which is induced by a variety of stimuli including TNF (Vanden Berghe et al. 2010;Vanlangenakker et al. 2011;Zhang et al. 2009), FAS ligand (Chen et al. 2009) and plant-derived natural products (Sun et al. 2019;Zhao et al. 2021). However, the signaling pathways governing the crosstalk between ROS and RIPK1 activation are still under debate. For example, RIPK1 plays an essential role in TNF-induced ROS generation, which is required for the initiation of necroptosis (Kim et al. 2007;Lin et al. 2004); this suggests that the ROS production driving necroptosis occurs downstream of RIPK1. 
On the other hand, ROS promote RIPK1 phosphorylation at Ser161 via its three cysteine sites, which leads to RIPK1 oligomerization and promotes the RIPK1/RIPK3 interaction (Zhang et al. 2017); this suggests that ROS function as a positive feedback loop for necrosome formation upstream of RIPK1. In this study, we found that ROS accumulated after Ru-B treatment, presumably via NOX1 (Fig. 5), and that both Ru-B-induced apoptosis and necroptosis were abrogated in RIPK1-deficient HT-29 cells (Fig. 3). In this regard, it was of interest whether the ROS induced by Ru-B play a role in controlling the cytotoxic potential of RIPK1. An important finding of this study is that Ru-B-induced RIPK1 phosphorylation was markedly prolonged and enhanced under necroptotic conditions, such as caspase-8 inhibition and FADD deficiency (Fig. 4). This indicates that the upregulation of RIPK1 phosphorylation functions as a positive effector that enables RIPK3 phosphorylation, thus facilitating necrosome formation. We also found that the ROS-scavenging activity of BHA and of two NOX inhibitors, diphenyleneiodonium and apocynin, correlated well with the inhibition of RIPK1 phosphorylation and of Ru-B/z-IETD-induced necroptosis (Fig. 5). Thus, we propose that ROS production by NOX1 likely functions upstream of Ru-B-induced RIPK1 phosphorylation and can thereby switch the cell death mode from apoptosis to necroptosis in RIPK3-expressing CRC cells. In this sense, whether the NOX1-mediated RIPK1 activation by Ru-B triggers selective cytotoxicity in cancer cells is a critical question to be studied further. Dysregulation of ROS production has long been implicated as a risk factor in cancer development (Liou and Storz 2010;Meitzler et al. 2014;Perillo et al. 2020). Furthermore, NOX1 is mainly expressed in the colon and has been shown to induce malignant transformation and cancer cell growth (Mitsushita et al. 2004;Suh et al. 1999). Indeed, activation of NOX1 or enhanced NOX1 expression has been commonly observed in several malignant cancers (Banskota et al. 2015;Juhasz et al. 2017;Laurent et al. 2008;Rudolf et al. 2018). Thus, it is possible that the NOX1-mediated cytotoxic activity of Ru-B will be of clinical value as a selective therapeutic approach against malignant cancer cells harboring abundant NOX1 activity, rather than non-transformed normal cells. Nevertheless, the findings from this study also raise several questions that should be addressed. Although several results, including ours, show that extramitochondrial ROS production by NOX1 is responsible for necroptosis induced by TNF and by some anti-cancer compounds (Kim et al. 2007, 2011), mitochondrial involvement in this process has also been reported (Schenk and Fulda 2015;Zhang et al. 2017). Although our knowledge regarding mitochondrial structure in specific cancer types is limited, it has been established that malignant transformation disturbs redox homeostasis in cancer cells (Gorrini et al. 2013). Thus, this discrepancy may depend on the cell type and/or on the cellular molecular context, such as the complement of NOX family members. Further research is needed to elucidate the dynamic interactions between Ru-B and NOX1, as well as the mechanism by which Ru-B targets NOX1 to induce ROS generation. Further in vivo studies investigating the anti-cancer efficacy of Ru-B in mice lacking caspase-8 or FADD are also necessary for the development of Ru-B as a cancer chemotherapeutic capable of overcoming apoptosis resistance.
2022-09-28T06:18:29.807Z
2022-09-27T00:00:00.000
{ "year": 2022, "sha1": "438dbb0f0c55647d03488f20fdbefef6b84e6d8e", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1616051/latest.pdf", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "634d4cc0075578d0a58df5d10fb56518e88f420e", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
216560078
pes2o/s2orc
v3-fos-license
Varlociraptor: enhancing sensitivity and controlling false discovery rate in somatic indel discovery Accurate discovery of somatic variants is of central importance in cancer research. However, count statistics on discovered somatic insertions and deletions (indels) indicate that large amounts of discoveries are missed because of the quantification of uncertainties related to gap and alignment ambiguities, twilight zone indels, cancer heterogeneity, sample purity, sampling, and strand bias. We provide a unifying statistical model whose dependency structures enable accurate quantification of all inherent uncertainties in short time. Consequently, false discovery rate (FDR) in somatic indel discovery can now be controlled at utmost accuracy, increasing the amount of true discoveries while safely suppressing the FDR. S1 Why naive approaches to compute the likelihood function fail. To understand why efficient computation of equation (1) is difficult, consider that each of the reads Z h i , Z t j could (a) not stem from the particular variant locus, (b) stem from the locus, but is not affected by the variant, (c) stem from the locus, and is indeed affected by the variant. We recall that it can be particularly difficult to be certain about (a), (b) or (c) when dealing with reads being associated with midsize indel loci (30-250 bp; sometimes termed the "NGS twilight zone"). Let k = |Z t | and l = |Z h | be the read coverage of the locus in the tumor and the healthy sample. Since there are 3 different possibilities-namely (a), (b) or (c)-for the overall k + l reads, we obtain that there are 3 k+l different scenarios that could reflect the truth, all of which apply with a particular probability. For computing equation (1) following a fully Bayesian approach to inverse uncertainty quantification [1]which is the approved and canonical way to quantify uncertainties in our setting-one needs to integrate over all the possible k +l choices. In a naive approach, this translates into computing a sum with 3 k+l summands. Because k + l amounts to at least 60 to 70 in standard settings, naive approaches fail to compute the integral in human feasible runtime. This is further aggravated because one usually needs to consider hundreds of thousands of putative indel loci. So, methodical efforts are required for uncertainty quantification in our setting. S2 Uniqueness and computation of the maximum likelihood estimate The likelihood function of θ h , θ c , and β given the data Z h and Z t as shown in equation (1) is a higherorder polynomial, which makes it infeasible to derive its maximum analytically. We show in this section, however, by proving Theorem 3.2 that under weak conditions the likelihood function attains a unique global maximum on the unit interval for each value of θ h and β. We, in addition, show that the loglikelihood function is strictly concave, which simplifies the numerical maximization. Proof. The likelihood function with θ h and β fixed can be written in the form where C is the constant In the case that theorem condition 1 is not met, C = 0. The likelihood L(θ h , θ c , β | Z h , Z c ) equals zero for all θ c and, therefore, does not attain a unique global maximum. Suppose theorem condition 1 is met (C > 0). Let us consider theorem condition 2. Note that L(θ h , θ c , β | Z h , Z c ) = 0 when θ c ∈ I, since for those θ c 's there exists an observation for which the P (Z t j | θ h , θ c β) = 0. The likelihood L is by definition strictly larger than zero when θ c ∈ I. 
Since the function in equation (41) is an l-th order polynomial and, therefore, continuous, it must attain a global maximum on the interval I. Suppose theorem condition 2 is met. The point θ c is a maximum of L(θ h , ·, · | Z h , Z c ) if and only if it is a maximum of the log-likelihood function (with θ h , β fixed and θ c ∈ I), since the logarithm is a monotonic transform. (Note that the log-likelihood is only defined on the subset I.) The second-order derivative of the log-likelihood with respect to θ c is found to be non-positive, indicating that the log-likelihood function is concave. Note that it is strictly concave, i.e., ∂²/∂θc² < 0, iff there exists an observation z t j for which the corresponding inequality is strict; this holds only when α = 0, π t j = 0 and p t j = a t j , which constitutes theorem conditions 3 and 4. Suppose I is the non-empty closed set [a, b] on the unit interval. Since the log-likelihood is strictly concave when theorem conditions 3 and 4 are met, it attains a unique global maximum θ c on I. Because the logarithm is a monotonic transformation, θ c must be a unique global maximum of the likelihood function as well. A similar reasoning holds when I is open or half-open: the maximum must lie in the interior of I, since the likelihood function is zero at those endpoints not contained in I. For example, when I is the open interval (a, b), the maximum cannot be attained at a or b, where the likelihood vanishes.

Below the diagonal, the control is conservative. Above the diagonal, the FDR would be underestimated. Importantly, points below the diagonal mean that the true FDR is smaller than the threshold provided, so that FDR control is still established; in this sense, points below the diagonal are preferable to points above the diagonal.

Fig. S11: Recall and precision for calling somatic insertions on synthetic data (mixture rate 5%). Results are grouped by insertion length, denoted as an interval at the top of the plot. For our approach (Varlociraptor+*), curves are plotted by scanning over the posterior probability for having a somatic variant (for readability, each curve is terminated by a square mark). For other callers that provide a score to scan over (e.g. the p-value for Lancet), we plot a dotted line. Ad-hoc results are shown as single dots. Results are shown only if the caller provided at least 10 calls.
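As a numerical aside to sections S1 and S2: because the per-read terms P(Z t j | θ h , θ c , β) enter the likelihood as a product over independent reads, the 3^(k+l)-term sum over per-read case assignments of S1 factorizes into a product of k + l three-term sums, and the resulting log-likelihood in θ c is concave and easy to maximize on the unit interval. The sketch below illustrates this with a purely hypothetical per-read mixture; the quantities p_var, p_ref, p_off and pi are placeholders, not Varlociraptor's actual emission probabilities.

```python
import numpy as np

# Hypothetical per-read parameters (NOT Varlociraptor's actual emission model):
# for read j, p_var[j]  = probability of the observed alignment if the read carries the variant,
#             p_ref[j]  = probability if it covers the locus but is unaffected,
#             p_off[j]  = probability if it does not stem from the locus at all,
#             pi[j]     = probability that the read stems from the locus.
rng = np.random.default_rng(0)
k = 40                                    # tumor coverage at the locus
p_var = rng.uniform(0.6, 0.9, size=k)
p_ref = rng.uniform(0.05, 0.3, size=k)
p_off = rng.uniform(0.1, 0.2, size=k)
pi = rng.uniform(0.9, 1.0, size=k)

def per_read_likelihood(theta_c):
    """Marginal likelihood of one tumor read, summing the three cases (a), (b), (c)."""
    # case (a): read not from locus; cases (b)/(c): from locus, unaffected / affected,
    # weighted by the variant allele frequency theta_c.
    return (1.0 - pi) * p_off + pi * ((1.0 - theta_c) * p_ref + theta_c * p_var)

def log_likelihood(theta_c):
    # Independence of the reads turns the naive sum over 3**k case assignments into a
    # product of k three-term sums, i.e. O(k) work instead of O(3**k).
    return np.sum(np.log(per_read_likelihood(theta_c)))

# Each per-read term is affine in theta_c, so its logarithm is concave and the sum is
# concave; a simple grid search therefore finds the unique maximizer on [0, 1].
grid = np.linspace(0.0, 1.0, 10001)
vals = np.array([log_likelihood(t) for t in grid])
theta_hat = grid[np.argmax(vals)]
print(f"MLE of theta_c on [0, 1]: {theta_hat:.4f}")
```

In practice one would replace the grid search by a derivative-based or golden-section maximizer, which the concavity established in S2 guarantees to converge to the unique optimum.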
2020-04-28T14:34:36.299Z
2020-04-28T00:00:00.000
{ "year": 2020, "sha1": "324992cc86afbd6a0d5c1569a22f92af6a8a13ea", "oa_license": "CCBY", "oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/s13059-020-01993-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8cfbc111e4f993146086a00475ad7d8a3358b26c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13181652
pes2o/s2orc
v3-fos-license
Parametric Study of Double Link Flexible Manipulator In this work, finite element method based on Lagrangian formulation is used for obtaining the equations of motion of the double link flexible revolute-jointed robotic manipulator. Both the links are considered as Euler-Bernoulli beams. A parametric study is carried out for the double link flexible robotic manipulator through linear modeling technique. A comparative study for dynamic response is carried out for the uniform beam manipulators under various types of excitations. Introduction In conventional robotic system, links are rigid giving small static deflection and hence it is possible to obtain high positional accuracy. However, most of the power is used to overcome the inertia of the system. On other hand, lightweight and large dimension robotic manipulators have become popular due to their higher manipulation speed, less weight/overall cost, transportability, better energy consumption, enhanced payload capacity etc in comparison to conventional robotic system. There is a wide range of applications of such new generation robotic systems in many areas nowadays. It is desired to design lighter robots to carry out heavier payloads as well as to operate it at higher speeds. Thus, flexible link manipulators are a subject of intensive research. Dwivedy and Eberhard (2006) presented a wide review on dynamic analysis of flexible manipulator done by various researchers. Book et al. (1975) linearized the equations of motion about a nominal configuration for a two-link flexible manipulator. Chang and Hamilton (1991), and Usoro et al. (1986) presented a Lagrangian finite element approach for the mathematical modeling of the manipulators with flexible links. Yigit (1994) modeled a two-link rigid-flexible manipulator and derived the equations of motion by applying the Hamilton's principle. Ankarah and Diken (1997) used the Euler-Bernoulli beam theory and solved the transient vibration theorem with the mode summation method to control the residual vibration of a single flexible link. The dynamics of a flexible arm and flexible joint manipulator carrying a payload with rotary inertia was studied by Bedoor and Almusallam (2000). Meghdari and Fahimi (2001) derived the improved elastic generalized coordinates. Kane's equation of motion for arbitrary number of rigid and elastic bodies is presented. Also, equations of motion are de-coupled in first order terms. Zhang and Bai (2012) established Lagrangian dynamic equations of two-link flexible manipulator through integrated model and multi body dynamics method. Dynamic response reliability is analyzed by using Monte Carlo and extremum surface method. Most of the published work focuses on modelling and pays less attention for its optimal design. Asada et al. (1991) presented optimum structure along with control aspect of flexible robot arms. Coordinates used by finite element model are treated as design variables, which are optimized for obtaining the optimal shape and structure of the arm mechanism. Wang (1994) addressed optimum design of a single link manipulator to maximize its fundamental frequency. He formulated the design problem as a nonlinear eigenvalue problem and used variational method. He demonstrated the increase of fundamental frequency as a result of optimization by considering a few numerical examples. In the present work a linearized model for small rigid body motion and small flexural deflection is used. 
Based on this model, complete parametric study is done to predict the dynamic behaviour of the system due to the variation of various design parameters. In addition, shape optimization is done to increase the fundamental frequency and dynamic response of the optimized links is studied. Obtaining Elemental Equation of Manipulator Rotating flexible beams have significant transverse deflections. They behave as a nonlinear elastic beams and exhibit vibratory motions in both chord wise and flap wise directions. However, Robotic manipulators usually work at moderate peak speed. Induced transverse force in the chord wise direction due to the applied excitation torque is much higher compared to the gravity force in flap wise direction and vibrations are predominantly in chord wise directions. In this work, model of Usoro at el. (1986) is adopted. However, formulations are consistently linearized for small angular/transverse deflections under linear beam theory to reduce the complexity of the system modeling. Fig.1(a) shows single link flexible manipulator in which XOY and X 1 OY 1 represents the stationary and moving co-ordinate frames respectively. Motion of the link is represented by fixed XOY co-ordinate frame. The link is considered slender. Hence, transverse shear and rotary inertia effects are neglected allowing it to be treated as an Euler-Bernoulli beam. Beam is assumed to vibrate predominantly in horizontal plane (XOY), neglecting gravity effects. Modeling of First Link In the FEM formulation the manipulator is divided into finite elements, each element having five degrees of freedom. Detail of th i element of the first link is shown in Fig. 1 P with respect to inertial system XOY for smaller angular displacement and small flexural deflection is given by In finite element method, variables are converted to nodal variables. 2.1.1. Kinetic energy computation of the i th element of the 1 st link: Kinetic energy of th i element of the first link is given by We have Substituting Equation 5 in Equation 4, we get r r (6) Thus elemental mass matrix is given by where, Thus, elemental stiffness matrix is given by Modeling of Second Link Hermitian shape functions are expressed byS i . In the FEM formulation the manipulator is divided into 10 elements, each element having eight degrees of freedom. Detail of th j element of the second link is shown in Figure 2(b). In the Figure 2b and Kinetic Energy Computation of the j th element of the 2 nd link: Kinetic energy of the second link of the th j element is Thus, mass matrix of the element become All the constants of the above matrix may be obtained by integrating 2 j M for different vector elements of Z 2 . Elastic potential energy of the j th element of 2 nd Link The potential energy of the j th element of the 2 nd link due to elastic deformation is given as Thus, the elemental stiffness matrix is given by Lagrange'S Equation of Motion in Discretized Form The kinetic energy and the potential energy of the system are obtained by computing the kinetic energy and potential energy of the each element of the system and summing over all the elements. where q F are the generalized forces. Being linear system, global mass and stiffness matrix is constant and equation of motion comes as Effect of hub mass and payload mass is incorporated in the global mass matrix and stiffness matrix using Dirac-delta function as described by Dixit et al. (2006). 
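Once the element matrices are assembled into the global M and K of the Lagrange equations, the free-vibration frequencies follow from the generalized eigenvalue problem K q = ω² M q (used below to obtain the natural frequencies). The sketch below is a generic illustration of that assemble-and-solve step for a single clamped uniform link discretized with standard two-node Euler-Bernoulli beam elements; it does not reproduce the five- and eight-DOF element matrices derived above, and the cross-section and material values simply match the uniform link used in the numerical study below.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative link properties (uniform 0.01 m diameter, 1 m long, E = 69 GPa)
E, I = 69e9, np.pi * 0.01**4 / 64       # Young's modulus [Pa], area moment [m^4]
rho_A = 0.2173                          # mass per unit length [kg/m]
L, n_el = 1.0, 10                       # link length [m], number of elements
le = L / n_el

def beam_element(le, EI, rhoA):
    """Standard 2-node Euler-Bernoulli element (transverse DOFs v1, th1, v2, th2)."""
    k = EI / le**3 * np.array([[ 12,    6*le,   -12,    6*le],
                               [ 6*le,  4*le**2, -6*le,  2*le**2],
                               [-12,   -6*le,    12,   -6*le],
                               [ 6*le,  2*le**2, -6*le,  4*le**2]])
    m = rhoA * le / 420 * np.array([[156,    22*le,   54,    -13*le],
                                    [22*le,  4*le**2, 13*le, -3*le**2],
                                    [54,     13*le,   156,   -22*le],
                                    [-13*le, -3*le**2, -22*le, 4*le**2]])
    return k, m

ndof = 2 * (n_el + 1)
K = np.zeros((ndof, ndof)); M = np.zeros((ndof, ndof))
for e in range(n_el):                   # assemble the global matrices
    ke, me = beam_element(le, E * I, rho_A)
    dofs = slice(2 * e, 2 * e + 4)
    K[dofs, dofs] += ke
    M[dofs, dofs] += me

# Clamp the hub end (v = theta = 0) and solve K q = w^2 M q for the natural frequencies
Kr, Mr = K[2:, 2:], M[2:, 2:]
w2, _ = eigh(Kr, Mr)
print("first three natural frequencies [rad/s]:", np.sqrt(w2[:3]))
```

Newmark time integration of the full equation of motion with the torque vector restored proceeds on the same global M and K, advancing the generalized coordinates and their derivatives step by step.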
Hub mass and tip mass/payload is defined in terms of β (ratio of hub mass to beam mass), 2 µ (ratio of tip mass at link 2 to total beam mass), 1 µ (ratio of tip mass at link 1, mass of the motor at the link joint, to total beam mass) and α (ratio of length of link 2 to length of link 1). Neglecting load vector, Eq. 23 becomes standard eignvalue problem, which is solved to obtain natural frequencies of the system. Numerical integration of Eq. 23 is carried out by using Newmark's integration scheme (Dixit, 2009) to obtain transverse deflections u & v, rigid body motions θ 1 & θ 2 and their derivatives. Results and Discussion In this section, a comparative analysis has been carried out for uniform as well as shape optimized double link flexible revolute manipulator. For the numerical study, a manipulator having uniform diameter 0.01 m, length 1.0 m, mass per unit length 217.3 gm/m, Young's modulus of elasticity 69 GPa is considered for both the links. Damping of the system is neglected. Most of the numerical simulations are done subjected to a sinusoidal torque given in Eq. 24 about the axis of rotation, The dynamic behaviors depend upon many parameters of double links flexible manipulator and also dynamic response consists of several desired objectives viz higher hub angle, less static deflection, less residual vibration, less response and settling time, etc. Improved dynamic response is a multi objective problem. Here some parametric study is done to analyze the dynamic behaviour of double link flexible manipulator. Dynamic Response due to Different Payloads Dynamic behaviour of the double link flexible manipulator changes with respect to the change of payloads at the tip of second link as shown in Fig. 4. It is observed that with the increase of payloads, the magnitude of hub angle and joint angle reduce. Residual vibration is considerably more at tip of first link or second link depending upon the input torque at hub joint or link joint respectively. It is also observed that there is very small effect in dynamic response with the increase of hub inertia. Similar trend is observed in dynamic response due to the variation of motor mass (tip load at link1) and hub mass. For the sake of brevity, these results are not tabulated here. Effect of Link Lengths on Dynamic Response Dynamic response of the double link flexible manipulators also depends upon the links length ratios. The lesser the length of link 2 with respect to Link 1, the better the dynamic response i.e. more hub/joint angle and lesser residual vibration for the given set of torque. Therefore, designer should not prefer the longer second link with respect to the first link. τ is plotted in Fig. 6. It is observed that there is a decrease in hub angle and increase of joint angle with the increase of torque amplitude. As the magnitude of applied torque 2 τ increases, there is considerable increment in the residual vibration of the tip of second link. Similar trend is also observed due to the variation of input torque 1 τ at the hub joint (result not shown here). Thus, tip vibration increase for a particular link with the increase of torque amplitude acting in that particular link. Overall angular displacement depends upon the set of input torques at the joints. Comparative Dynamic Response due to Different Torque Profile Different torque profiles shown in Fig. 7 are considered for the comparison of dynamic response of double link flexible manipulator. All the torque profiles have same amplitude i.e. 
0.5 N.m and the same excitation duration, i.e. 4 s. The dynamic response due to the different torque profiles is shown in Fig. 8. The triangular torque profile gives the smallest hub angles of the links. The bang-bang torque delivers high input energy to the system, giving a high hub angle as well as a high joint angle. However, owing to its sudden changes, the bang-bang torque also produces high residual vibration in the system. The sinusoidal torque may therefore be preferred for smooth operation of the system.

Conclusions

The dynamics of the double link flexible manipulator is highly complex and nonlinear in nature. The model is linearized to reduce its complexity and to predict the behaviour of the system under low-amplitude vibration due to excitation. The parametric study suggests that the dynamic response of the double link manipulator depends upon system parameters, viz. the payloads at the tip and link joint, the link lengths, the input torque magnitude and profile, and the hub inertia.
2019-04-21T13:02:49.173Z
2014-08-01T00:00:00.000
{ "year": 2014, "sha1": "844372a7cb303fbb8ab9a1c0095160b4917a25b8", "oa_license": "CCBY", "oa_url": "http://www.hrpub.org/download/20140801/UJME1-15190030.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e893dd189fa43cfa0a1945d32a3433bf076c0952", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Mathematics" ] }
55293171
pes2o/s2orc
v3-fos-license
Decolorization of Methyl Orange (MO) by Electrocoagulation (EC) Using Iron Electrodes Under a Magnetic Field (MF). II. Effect of Connection Mode This work aims to investigate the electrocoagulation (EC) of methyl orange (MO) using iron electrodes and examine the effect of magnetic field (MF) on EC performance focusing on electrodes connection mode. Experimentally, an electrochemical cell is made in a configuration as simple as possible to impose a MF parallel to the current density and to allow an evaluation of the performance of the EC coupled to the MF. After 12 min of treatment, at pH 7.25, and with a current density of 64 A/m 2 , the MO decolorization obtained by EC-MF reaches 95%; this rate is higher than that obtained by the EC alone, which does not exceed 70%. In the MF presence and under optimal conditions, the decolorization increases allowing a gain in energy consumption (36%) from 30 to 19 kWh/kg dye. The decolorization with the bipolar system in series (BP-S) reaches 98% while it reaches 64% and 74% for the mono-polar in series (MP-S) and the mono-polar in parallel (MP-P), respectively. Consequently, the BP-S is more efficient. Introduction Many industrial sectors (textiles, plastic industry, paper mills, tanneries, etc.) are heavy users of water and use soluble or pigmented synthetic dyes to color their products; but in the textile industry, the use of synthetic dyes is very important. Among the many families of synthetic dyes, azo dyes are widely used and account for 60 to 70% [1]. Clothing containing these toxic azo dyes will reach consumers, posing a real threat to the environment. On the other hand, the risk of contamination of ecosystems related to the use of this type of dye is more important. Indeed, some of these dyes, which are not removed during effluent treatment, become more toxic when dumped into streams. Regulations on the discharge of wastewater are becoming more stringent and oblige industrialists to treat their effluents. In addition, most synthetic dyes are not very biodegradable and can be a risk factor for health and harm to the environment [2,3]. It is therefore essential to limit pollution by setting up a suitable treatment system incorporating a decolorization unit. Among all possible remediation methods, the electrocoagulation (EC) process [4] is one of the most promising processes because of its high efficiency and allows the use of less toxic products with environmental compatibility [5][6][7][8][9][10][11][12][13][14]. Magnetic Field (MF). II. Effect of Connection Mode Following the study carried out in Part 1 of this work [1], the EC process seems to be easily ameliorable by coupling EC with magnetic field (MF) [15], which leads us to present a comparative analysis between the results of the EC alone and the EC coupled to the MF. More particularly, we focus on optimizing the operating conditions on the MO decolorization through electrodes connection mode. Liu et al. [16] and Ghernaout et al. [17,18] showed that the MF improves the EC effectiveness. They indicated that the generated Lorentz force is sufficient to cause sedimentation of colloids with MF strength of 40 mT [16]. Generally, a two-electrode EC cell is not always well appropriate for wastewater treatment, since the dissolution rate of the metal is not suitably exploitable [19][20][21]. The use of large area electrodes is therefore essential [22]. Improving EC performance is more than necessary for industrial applications or medium-scale installations [23]. 
This is usually done using electrochemical cells, connected in series or in parallel, alone or in combination with other types of process (hybrid processes) [24][25][26]. Mono-polar Connection in Parallel Figure 1 (a) shows an EC cell with a pair of anodes placed between two parallel cathodes which are connected to a DC source. The current is shared between all the electrodes as a function of the resistance of the individual cells. This type of process requires a small difference in potential compared to the series connection [25]. Mono-polar Connection in Series Figure 1(b) shows an EC cell with a pair of anodes interconnected from one to the other and does not interconnect with the outer electrodes. The difference in potential is greater because cell resistance is higher [25]. Bipolar Connection in Series As shown in Figure 1(c), there is no electrical connection between the inner electrodes, only the outer electrodes are connected by a power supply. The outer electrodes are single pole and the inner electrodes are bipolar. This connection mode has a simple configuration which facilitates maintenance during operation [25]. Our previous study [1] realized on the optimization of the operating parameters influencing the good functioning of the mono-polar EC, made it possible to determine the optimal operating conditions. The present study focuses on improving the efficiency of the treatment through increasing the active surface by connecting the electrodes in different connection modes: bipolar in series (BP-S), mono-polar in series (MP-S), and mono-polar in parallel (MP-P). This is done in order to compare the performance of different electrode connection modes in the presence and absence of the MF. The permanent magnets with 0.1 Tesla (T) were placed parallel to the cathode surface and the anode surface, respectively. The electrodes were connected to a direct current (DC) power supply (Elektrolyser, type Elyn1) with an ammeter and voltmeter used to controlling the current and the voltage during the EC process, respectively. The electrode plates were cleaned manually before every each run by abrasion with sand paper and by treatment with 15% HCl acid followed by washing with distilled water. Experimental Device The EC unit was made of Plexiglas with the dimensions of 60 mm × 80 mm. There are four electrodes used, each one with dimensions of (50 mm × 25 mm × 2 mm) and the distance between them in the EC was 1 cm (Figure 2). The schematic diagram of monopolar and bipolar electrodes in series and monopolar parallel connections is shown in Figure 1. Experimental Procedure Dye Orange III (abbreviated as methyl orange MO) is used for preparing wastewater solution by dissolving it in distilled water. The solution conductivity values were adjusted by adding NaCl as supporting electrolyte (SE) to the 200 mL solution of the synthetic wastewater. The pH of the tested solutions was measured by Hanna pH-meter and adjusted by adding HCL 0.05 (or 1) N or NaOH 0.05 (or 1) N. At the end of the EC experiments, all samples were filtered through a 0.45 µm pore size syringe filter. The MO concentration (C MO ) was measured using a UV/Vis spectrophotometer (SHIMADZU UV-1700 pharma Spec) at a wavelength corresponding to the maximum absorbance of the MO (λ max = 465 nm). The color removal efficiency R (%) was calculated using Eq. 
(1), where Abs i and Abs f are initial and final absorbance, respectively: Influence of Electrolyte Type and NaCl Concentration If the electric conductivity of the effluent is low, some SEs are usually added to the solution to ensure sufficient conductivity to conduct the electric current in the medium. The most commonly tested types of SE are: sodium (NaCl) or potassium (KCl) chloride, sodium (Na 2 SO 4 ) sulfate or (NaNO 3 ) nitrate, and calcium chloride (CaCl 2 ). The results obtained are shown in Figure 3. The results obtained show that the nature of the SE has a significant influence on the rate of removal of the MO. In the absence of MF, the rates of decolorization reach: 98.47%, 86.88%, 82.34%, 77.95% and 24.44% for CaCl 2 , KCl, NaCl, Na 2 SO 4 and NaNO 3 , respectively. In the presence of MF, the rates of decolorization reach 95.79%, 98.68%, 97.41%, 79.41% and 16.67%, for CaCl 2 , KCl, NaCl, Na 2 SO 4 and NaNO 3 , respectively. The consumption of energy increases considerably with the type of electrolyte used (Figure 3(a)). Indeed, the treatment with EC-MF in the presence of the NaCl as an electrolyte allows a significant reduction in energy consumption of the order of 21 kWh/kg dye, and the treatment with the EC requires approximately 28.46 kWh/kg dye. The energy consumed during the two treatments of the MO in the presence of CaCl 2 , KCl and Na 2 SO 4 is greater than 35 kWh/kg dye. To avoid this adverse effect, it would be appropriate to use sodium chloride (NaCl) as a SE because the chloride ions can significantly reduce the adverse effects of other anions [27][28][29][30]. Actually, the processes of EC and EC-MF become more performant when using NaCl as SE. In order to examine the influence of the conductivity of the solution on the Magnetic Field (MF). II. Effect of Connection Mode decolorization rate, the NaCl concentration is varied from 0.3 to 2 g/L (Figure 3(b)). Figure 3(b) shows that the increase in NaCl concentration causes a marked decrease in energy consumed by both EC-MF and EC treatments. The concentration of 1.6 g/L NaCl allows a significant reduction in energy consumption. Indeed, for EC treatment within 15 min, the decolorization rate increases from 74% to 89.87% and the energy consumption decreases from 96 to 20 kWh/kg dye for doses of 0.3 to 1.6 g/L, respectively. A stable decolorization rate is obtained after 15 min of treatment with EC-MF, which is between 96% and 98% for doses between 0.3 and 1.6 g/L. It is found that the increase in NaCl concentration in the presence of MF does not significantly affect the rate of decolorization (Figure 3(b)). Influence of Current Density and pH The most important parameters in the electrochemical processes are the current density and the electrolysis time t EC . The experimental results obtained by the EC treatment show that when the current density decreases from 56 to 32 A/m 2 , the decolorization rate observed for 15 min is less than 60% (Figure 4(a)). The results obtained by the EC-MF treatment show that when the current density increases from 56 to 72 A/m 2 , the discoloration rate is greater than 80%. When the current density decreases from 48 to 32 A/m 2 , the fading rate is less than 68% for a period of 15 min (Figure 4(a)). When the current density is greater than 64 A/m 2 , the decolorization rate does not change significantly. The judicious choice of initial conditions (electrolysis time t EC and the current density) will limit the excessive release of hydrogen [31][32][33][34][35]. 
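The comparisons above and below all reduce to two figures of merit: the removal efficiency R of Eq. (1) and the specific energy consumption in kWh per kg of dye removed. As a small worked example, the sketch below evaluates both for illustrative numbers; Eq. (1) is taken from the text, whereas the specific-energy definition (cell voltage times current times electrolysis time, divided by the mass of dye removed) is an assumption, since the paper does not spell out its energy formula. The illustrative current of 0.08 A corresponds to 64 A/m2 on a 50 mm × 25 mm plate.

```python
# Worked example of the figures of merit used above (illustrative numbers only).

def removal_efficiency(abs_i, abs_f):
    """Eq. (1): color removal R (%) from initial/final absorbance at 465 nm."""
    return 100.0 * (abs_i - abs_f) / abs_i

def specific_energy(U, I, t_h, volume_L, c_i, c_f):
    """Assumed definition: electrical energy per kg of dye removed, in kWh/kg dye."""
    energy_kWh = U * I * t_h / 1000.0               # U [V], I [A], t [h]
    dye_removed_kg = volume_L * (c_i - c_f) * 1e-6  # concentrations in mg/L
    return energy_kWh / dye_removed_kg

# Hypothetical run: 0.2 L of 15 mg/L MO, 12 min at 0.08 A and 5 V,
# absorbance dropping from 1.00 to 0.05 (95% decolorization).
R = removal_efficiency(1.00, 0.05)
E = specific_energy(U=5.0, I=0.08, t_h=12 / 60, volume_L=0.2,
                    c_i=15.0, c_f=15.0 * (1 - R / 100.0))
print(f"R = {R:.1f} %, specific energy = {E:.1f} kWh/kg dye")
```

With these illustrative values the specific energy comes out near 28 kWh/kg dye, i.e. in the same range as the EC results reported in the figures, which is why small gains in decolorization translate directly into the energy savings discussed below.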
As the current density increases, the energy consumption decreases ( Figure 4(a)). This will increase the temperature by the Joule effect and increase the rate of anodic dissolution [36]. The energy consumed during EC-MF process is less than 24 kWh/kg dye when the current density decreases from 72 to 40 A/m 2 . It reaches a maximum value when the density decreases to 32 A/m 2 . To illustrate the influence of the initial pH on the decolorization kinetics by the treatment of EC and EC-MF, a series of experiments on initial pH, varying it between 3 and 12, was carried out (Figure 4 (b)). The decolorization rate is particularly important at pH 7.25, when it is of the order of 92.72% and 70.48% for EC and EC-MF, respectively. The results show that both treatments contribute to the increase in energy consumption by increasing pH to 11 or decreasing it to 4; this increase is 28 kWh/kg dye and 18 kWh/kg dye for EC and EC-MF, respectively. The energies consumed by EC and EC-MF treatments at pH 7.25 reach 23 kWh/kg dye and 13 kWh/kg dye, respectively. It is also noted that the improvement of the EC by the MF is favored over a wide range of initial pH ranging from 3 to 10 ( Figure 4(b)). The EC process is accompanied by an increase in energy consumption at the neutral pH, which favors a better efficiency of the system [37]. Figure 5(a) shows that as the inter-electrode distance increases, energy consumption during EC and EC-CM treatments is increasing. When the inter-electrode distance was maintained constant at 2 cm, the EC and EC-CM treatments recorded for 12 min the highest decolorization levels, with 70% and 95.55%, respectively. When the electrodes of the EC-MF are kept at a distance of 2 cm, the treatment is more efficient with energy consumption around 19 kWh/kg dye. In addition, the efficiency of the treatment of the MO without MF decreases with high energy consumption, which is close to 30 kWh/kg dye. The energy consumption by the EC-MF is less than 19 kWh/kg dye with a distance comprised between 0.8 and 2 cm. When the inter-electrode distance is increased until 3 cm, the decolorization rate decreases to 84%. EC process is more energy consuming at a distance of 2 cm, representing 30 kWh/kg dye; and when the electrodes are kept at a distance between 0.8 and 2 cm, the energy consumption is less than 24 kWh/kg dye ( Figure 5(a)). The energies consumed during MO treatment by EC [38][39][40][41][42] or EC-MF increase up to 31 and 33 kWh/kg, respectively. Then, these energies are stabilized ( Figure 5(b)). It is found that the value of the maximum rate of decolorization by the treatment of EC-MF begins to decrease, when the initial concentration increases from 15 to 55 mg/L. The performance of the improvement by the MF decreases significantly to 63% for a concentration of 55 mg/L with the same degree of decolorization as that achieved by the EC (Figure 5(b)). Table 1 presents a comparison between EC and EC-MF in terms of efficiencies. Figure 6 shows the effect of the different connection modes on the MO decolorization as a function of time. The BP-S system ensures a removal rate of 98%; while the decolorization reaches 64% and 74% for the mode MP-S and MP-P, respectively. Coupling the different connection modes with the MF shows that the decolorization reaches a maximum of 95% for the BP-S mode; while it arrives at 57% and 69% for MP-S and MP-P, respectively ( Figure 7). It is found that the MF disturbs the decolorization of the MO with the connection modes MP-S and MP-P. 
In the absence of the MF, the BP-S system performs better than the other two systems. A study of the treatment of textile wastewaters using various connection modes showed that the MP-P mode is more appropriate than the other modes, reducing the chemical oxygen demand by more than 54% in a neutral medium [24]. Daneshvar et al. [43] showed that azo dye treatment in the MP-S mode is more efficient than in the MP-P mode at a current density of 20 A/m2, and that the BP-S removal efficiency exceeds 90% at a density of 90 A/m2. Figure 8 shows that the energy consumption varies with the type of connection. Indeed, the BP-S system consumes the same energy, 29 kWh/kg dye, with and without the MF, while the MP-S and MP-P systems consume the lowest energies, 18 and 14 kWh/kg dye, respectively.

Conclusion

Coupling EC with the MF has made it possible to obtain significant results. During 12 min of combined treatment, the decolorization rate reaches 91%, whereas treatment by EC alone reaches only 70%, for a current density of 64 A/m2, a salinity of 1.6 g/L and pH = 7.25. The energy consumption of the EC-MF process is lower than that of EC alone; the EC-MF process could reduce energy consumption by 36% and therefore the operating cost. On the other hand, increasing the dye concentration is unfavorable from an energy point of view. The study of the effect of the different connection modes on the decolorization shows that the maximum removal rate, 98%, is reached with the BP-S system without the MF.
2019-04-10T13:11:44.343Z
2018-07-27T00:00:00.000
{ "year": 2018, "sha1": "215a24c04e4dbe83ed40ef5aa60ddabffe2e502a", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.wjac.20180302.13.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c97144e813eff15a1d87d315e5b6fea914437281", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
118885429
pes2o/s2orc
v3-fos-license
Exact Multifractal Exponents for Two-Dimensional Percolation The harmonic measure (or diffusion field or electrostatic potential) near a percolation cluster in two dimensions is considered. Its moments, summed over the accessible external hull, exhibit a multifractal spectrum, which I calculate exactly. The generalized dimensions D(n) as well as the MF function f(alpha) are derived from generalized conformal invariance, and are shown to be identical to those of the harmonic measure on 2D random walks or self-avoiding walks. An exact application to the anomalous impedance of a rough percolative electrode is given. The numerical checks are excellent. Another set of exact and universal multifractal exponents is obtained for n independent self-avoiding walks anchored at the boundary of a percolation cluster. These exponents describe the multifractal scaling behavior of the average nth moment of the probabity for a SAW to escape from the random fractal boundary of a percolation cluster in two dimensions. Percolation theory, whose tenuous fractal structures, called incipient clusters, present fascinating properties, has served as an archetypal model for critical phenomena [1]. The subject has recently enjoyed renewed interest: the scaling (continuum) limit has fundamental properties, e.g., conformal invariance, which present a mathematical challenge [2,3,4]. Almost uncharted territory in exact fractal studies is the harmonic measure, i.e., the diffusion or electrostatic field near an equipotential random fractal boundary, whose self-similarity is reflected in a multifractal (Mf) behavior of the harmonic measure [5]. Mf exponents for the harmonic measure of fractals are especially important in two contexts: diffusion-limited aggregation (DLA) and the double layer impedance at a surface. In DLA, the harmonic measure actually determines the growth process and its scaling properties are intimately related to those of the cluster itself [6]. The double layer impedance at a rough surface between a good conductor and an ionic medium presents an anomalous frequency dependence, which has been observed by electrochemists for decades. It was recently proposed that this is at heart a multifractal phenomenon, directly linked with the harmonic measure of the rough electrode [7]. In both the above contexts, percolation clusters have been studied numerically as generic models. In this Letter, I consider incipient percolation clusters in two dimensions (2D), and determine analytically the exact multifractal exponents of their harmonic measure. I use recent advances in conformal invariance (linked to quantum gravity), which allow for the mathematical description of random walks interacting with other random fractal structures, such as random walks [8,9], and selfavoiding walks [10]. A further difficulty here is the presence of a subtle geometrical structure in the percolation cluster hull, recently elucidated by Aizenman et al. [11]. Excellent agreement with decade-old numerical data is obtained, thereby confirming the relevance of conformal invariance to multifractality; the exact prediction for the anomalous exponent of a percolative electrode given here also corroborates the multifractal nature of the latter. As an illustration of the flexibility of the method, I also give the set of exact multifractal exponents corresponding to the average nth moment of the probability for a self-avoiding walk to escape from a percolation cluster boundary. 
Consider a two-dimensional very large incipient cluster C, at the percolation threshold p c . Define H (w) as the probability that a random walker (RW) launched from infinity, first hits the outer (accessible) percolation hull H(C) at point w ∈ H(C). We are especially interested in the moments of H, averaged over all realizations of RW's and C where n can be, a priori, a real number. For very large clusters C and hulls H (C) of average size R, one expects these moments to scale as where a is a microscopic cut-off, and where the multifractal scaling exponents τ (n) encode generalized dimensions D (n), τ (n) = (n − 1) D (n) , which vary in a non-linear way with n [12,13,14,15]. Several a priori results are known. D(0) is the Hausdorff dimension of the support of the measure. By construction, H is a normalized probability measure, so that τ (1) = 0. Makarov's theorem [16], here applied to the Hölder regular curve describing the hull [17], gives the non trivial information dimension τ ′ (1) = D (1) = 1. The multifractal formalism [12,13,14,15] further involves characterizing subsets H α of sites of the hull H by a Lipschitz-Hölder exponent α, such that their local H-measure scales as H (w ∈ H α ) ≈ (a/R) α . The "fractal dimension" f (α) of the set H α is given by the symmetric Legendre transform of τ (n) : Because of the ensemble average (1), values of f (α) can become negative for some domains of α [18]. This Letter is organized as follows: I first present in detail the findings and their potential physical significance and applications, before proceeding with the more abstract mathematical derivation. My results for the generalized harmonic dimensions for percolation are valid for all values of moment order n, n − 1 24 . The Legendre transform (3) of τ (n) = (n − 1) D(n) reads and [19], showing fairly good agreement. The slight upwards move from the theoretical curve at high values of n suggests a difference between annealed and apparent quenched averages, as in the DLA case [20]. The first striking observation is that the dimension of the support of the measure is the Hausdorff dimension of the standard hull, i.e., the outer boundary of critical percolating clusters [21]. In fact, D(0) = 4 3 is the dimension D EP of the accessible external perimeter [22,11], the other hull sites being located in deep fjords, which are not probed by the harmonic measure. Its exact value D EP = 4 3 has been recently derived in terms of relevant scaling operators describing path crossing statistics in percolation [11]. In the scaling continuous regime of percolation, the fjords do close, yielding a smoother (self-avoiding) accessible perimeter of dimension 4 3 . This is in agreement with the instability phenomenon observed numerically on a lattice: removing the fjords with narrow necks causes a discontinuity of the effective dimension of the hull from D H ≃ 7 4 to D EP ≃ 4 3 , whatever microscopic restriction rules are choosen [22]. In other respects, a 2D polymer at the Θ-point is known to obey exactly the statistics of a percolation hull [23], and the Mf results (4-6) therefore apply also to that case. An even more striking fact is the complete identity of Eqs. (4)(5)(6) to the corresponding results both for random walks and self-avoiding walks (SAW's) [10]. In particular, D (0) = 4 3 is the Hausdorff dimension of a SAW, common to the external frontier of a percolation hull and of a Brownian motion [8,9]. Seen from outside, these three fractal curves are not distinguished by the harmonic measure. 
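Equations (4)-(6) did not survive extraction above; they can be reconstructed from the exponent x(S3 ∧ n) = 2 + (n − 1)/2 + (5/24)(√(24n + 1) − 5) derived at the end of this Letter, via τ(n) = x(S3 ∧ n) − 2 (Eq. (9)), which is equivalent to D(n) = 1/2 + 5/(√(24n + 1) + 5) for n ≥ −1/24. The short sketch below evaluates τ(n), D(n) and the Legendre transform f(α) numerically and checks the values quoted in the text: D(0) = 4/3, D(2) = 11/12, Makarov's τ'(1) = D(1) = 1, and, used further below, the CPA exponent β = D(2)/D(0) = 11/16.

```python
import numpy as np

def tau(n):
    """tau(n) = x(S_3 ^ n) - 2, valid for n >= -1/24."""
    return 0.5 * (n - 1.0) + (5.0 / 24.0) * (np.sqrt(24.0 * n + 1.0) - 5.0)

def D(n):
    """Generalized dimensions D(n) = tau(n)/(n-1); the n -> 1 limit equals 1 (Makarov)."""
    return tau(n) / (n - 1.0)

def alpha(n):
    """alpha = d tau / dn, the Lipschitz-Holder exponent as a function of moment order."""
    return 0.5 + 2.5 / np.sqrt(24.0 * n + 1.0)

def f_of_alpha(n):
    """Legendre transform: f(alpha(n)) = n * alpha(n) - tau(n)."""
    return n * alpha(n) - tau(n)

# Checks against the values quoted in the Letter
assert np.isclose(D(0.0), 4.0 / 3.0)             # support dimension = accessible perimeter
assert np.isclose(D(2.0), 11.0 / 12.0)
assert np.isclose(alpha(1.0), 1.0)               # Makarov: tau'(1) = D(1) = 1
assert np.isclose(D(2.0) / D(0.0), 11.0 / 16.0)  # CPA exponent beta of a percolative electrode
assert np.isclose(f_of_alpha(0.0), 4.0 / 3.0)    # maximum of f(alpha) equals D(0)
print("n* = -1/24; alpha ranges over (1/2, +infinity); all quoted values reproduced.")
```

The same spectrum therefore describes the harmonic measure on percolation hulls, Brownian frontiers and self-avoiding walks.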
As we shall see, this fact is linked to the presence of a universal underlying conformal field theory with a vanishing central charge c = 0. The singularity at α = 1 2 in the multifractal function f (α) is due to points on the fractal boundary where the latter has the local geometry of a needle. Indeed, by elementary conformal covariance, a local wedge of opening angle θ yields an electrostatic potential, i.e., harmonic measure, which scales as H(R) ∼ R − π θ ∼ R −α , thus, formally, θ = π α , and θ = 2π corresponds to the lowest possible value α = 1 2 . The linear asymptote of the f (α) curve for α → +∞, f (α) ∼ − α 24 corresponds to the lowest part n → n * = − 1 24 of the spectrum of dimensions. Its linear shape is quite reminiscent of the case of a 2D DLA cluster [24]. Define N (H) as the number of sites having a probability H to be hit. Using the Mf formalism to change from variable H to α (at fixed value of a/R), shows that N (H) obeys, for H → 0, a power law behavior with an exponent τ * = 1+ lim This τ * = 0.95833... compares very well with the result τ * = 0.951 ± 0.030, obtained for 10 −5 H 10 −4 [19]. Let us consider for a moment the different, but related, problem of the double layer impedance of a rough elec-trode. In some range of frequencies ω, the impedance contains an anomalous "constant phase angle" (CPA) term (iω) −β , where β < 1. It was believed that β would be solely determined by the Hausdorff dimension D (0) of the electrode surface. From a natural RW representation of the impedance, a different scaling law was recently proposed: β = D(2) D(0) (here in 2D), where D (2) is the multifractal dimension of the H-measure on the rough electrode [7]. In the case of a 2D porous percolative electrode, our results (4) give D (2) ≡ 11 12 , D (0) = 4 3 , whence β = 11 16 = 0.6875. This compares very well with a numerical RW algorithm result [25], which yields an effective CPA exponent β ≃ 0.69, nicely vindicating the multifractal description [7]. Let me now give the main lines of the derivation of exponents D (n) by generalized conformal invariance. We focus on site percolation on the 2D triangular lattice; by universality the results are expected to apply to other 2D (e.g., bond) percolation models. The boundary lines of the percolation clusters, i.e., of connected sets of occupied hexagons, form self-avoiding lines on the dual hexagonal lattice. They obey the statistics of loops in the O (N = 1) model, where N is the loop fugacity, in the so-called "lowtemperature phase", a fact we shall recover below [21]. By the very definition of the H-measure, n independent RW's diffusing away from the hull give a geometric representation of the n th moment H n , for n integer. The values so derived for n ∈ N will be enough, by convexity arguments, to obtain the analytic continuation for arbitrary n's. Figure 2 depicts n independent random walks, in a bunch, first hitting the external hull of a percolation cluster at a site w = (•) . As explained in ref. [11], such a site, to belong to the accessible hull, must remain, in the continuous scaling limit, the source of at least three non-intersecting crossing paths, noted S 3 , reaching to a (large) distance R. These paths are "monochromatic": one path runs only through occupied (light blue) sites; the other two, dual lines, run through empty (white) sites [11]. The definition of the standard hull requires only the origination, in the scaling limit, of a "bichromatic" pair of lines S 2 [11]. 
Points lacking additional dual lines are not accessible to RW's after the scaling limit is taken, because their (white) exit path becomes a strait pinched by other parts of the (light blue) occupied cluster. The bunch of independent RW's avoids the occupied cluster, and defines its own envelope as a set of two boundary lines separating it from the occupied part of the lattice, thus from S 3 (Fig. 2). Let us introduce the notation A ∧ B for two sets, A, B, of random paths, conditioned to be mutually avoiding, and A ∨ B for two independent, thus possibly intersecting, sets [10]. Now consider n independent RW's, or Brownian paths B in the scaling limit, in a bunch noted (∨B) n , avoiding a set S ℓ ≡ (∧P) ℓ of ℓ non-intersecting crossing paths in the percolation system. Each of the latter paths passes only through occupied sites, or only through empty (dual) ones. The probability that the Brownian and percolation paths altogether traverse the annulus D (a, R) from the inner boundary circle of radius a to the outer one at distance R, i.e., are in a "star" configuration S ℓ ∧ (∨B) n (Fig. 2), is expected to scale for R/a → ∞ as where we used S ℓ ∧ n ≡ S ℓ ∧ (∨B) n as a short hand notation, and where x (S ℓ ∧ n) is a new critical exponent depending on ℓ and n. It is convenient to introduce sim- for the same star configuration of paths, now crossing through the half-annulusD (a, R) in the half-plane. 2. An "active" site (•) on the accessible external perimeter for site percolation on the triangular lattice. It is defined by the existence, in the scaling limit, of ℓ = 3 non-intersecting, and "monochromatic" crossing paths S3 (dotted lines), one on the incipient (light blue) cluster, the other two on the dual empty (white) sites. The points ⊙ are entrances of fjords, which close in the scaling limit and won't support the harmonic measure. Point (•) is first reached by three independent RW's (red, green, blue), contributing to When n → 0, P R (S ℓ ) resp.P R (S ℓ ) is the probability of having ℓ simultaneous monochromatic(nonintersecting) path-crossings traversing the annulus in the plane (resp. half-plane), with associated exponents [3,11]. These exponents have been studied in ref. [11], and shown rigorously to be actually independent of the coloring of the paths, with the restriction in the bulk that there exist at least a path on occupied sites and one on dual ones, thus ℓ > 1. Here the exponents appear as analytic continuations of the harmonic measure ones (8) to n → 0, and should correspond to the definitions of ref. [11]. In terms of definition (8), the harmonic measure moments (1) simply scale as Z n ≈ R 2 P R (S ℓ=3 ∧ n) [18], which, combined with Eqs. (2) and (8), leads to Using the fundamental mapping of the conformal field theory (CFT) in the plane R 2 , describing a critical statistical geometrical system, to the CFT on a fluctuating abstract random Riemann surface, i.e., in presence of quantum gravity [26], I have recently shown that there exist two universal functions U, and V, depending only on the central charge c of the CFT, which suffice to generate all geometrical exponents involving mutual avoidance of random star-shaped sets of paths of the critical system [10]. For c = 0, which corresponds to RW's, SAW's, and percolation, these universal functions are: with V (x) ≡ U 1 2 x − 1 2 . 
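The explicit c = 0 expressions for U and V were lost from the displayed equation above; the standard forms, assumed here, are U(x) = x(1 + 2x)/3, with inverse U^{-1}(x) = (√(24x + 1) − 1)/4, and V(x) = U((x − 1/2)/2) = (4x² − 1)/24. Assuming, in addition, the star-algebra fusion rules of Eq. (11) below in the form x̃(A ∧ B) = U(U^{-1}(x̃_A) + U^{-1}(x̃_B)) in the half-plane and x(A ∧ B) = 2V(U^{-1}(x̃_A) + U^{-1}(x̃_B)) in the bulk (a reconstruction, since the displayed rules were also lost), the sketch below checks that these forms reproduce the values used in the derivation that follows: x̃_2 = 1, x̃_3 = 2, the bulk exponents x_{ℓ=2k} = (4k² − 1)/12, and the harmonic-measure exponent x(S_3 ∧ n).

```python
import numpy as np

def U(x):      return x * (1.0 + 2.0 * x) / 3.0
def U_inv(x):  return (np.sqrt(24.0 * x + 1.0) - 1.0) / 4.0
def V(x):      return (4.0 * x**2 - 1.0) / 24.0          # = U((x - 1/2)/2)

def fuse_boundary(*xt):
    """Half-plane exponent of a star of mutually avoiding sets with boundary exponents xt_i."""
    return U(sum(U_inv(x) for x in xt))

def fuse_bulk(*xt):
    """Bulk (plane) exponent of the same mutually avoiding star."""
    return 2.0 * V(sum(U_inv(x) for x in xt))

xt_1 = 1.0 / 3.0   # boundary exponent of a single crossing path (= U(1/2), i.e. k=1 in k(2k-1)/3)

# Half-plane crossing exponents used below: xt_2 = 1 and xt_3 = 2
assert np.isclose(fuse_boundary(*([xt_1] * 2)), 1.0)
assert np.isclose(fuse_boundary(*([xt_1] * 3)), 2.0)

# Bulk exponents x_{ell = 2k} = (4 k^2 - 1)/12 quoted below
for k in (1, 2, 3):
    ell = 2 * k
    assert np.isclose(fuse_bulk(*([xt_1] * ell)), (4.0 * k * k - 1.0) / 12.0)

# Harmonic-measure exponent x(S_3 ^ n): three crossing paths fused with n Brownian paths,
# the latter contributing boundary exponent n since a single Brownian path has xt(B) = 1.
def x_S3_n(n):
    return 2.0 * V(3.0 * U_inv(xt_1) + U_inv(n))

n = 2.0
assert np.isclose(x_S3_n(n),
                  2.0 + 0.5 * (n - 1.0) + (5.0 / 24.0) * (np.sqrt(24.0 * n + 1.0) - 5.0))
print("assumed U, V and fusion rules reproduce xt_2, xt_3, x_ell and x(S_3 ^ n)")
```

The internal consistency of these checks is the only claim made here; the closed forms themselves are assumptions filling the missing displays.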
Consider now two arbitrary random sets A, B, involving each a collection of paths in a star configuration, with proper scaling crossing exponents x (A) , x (B) , or, in the half-plane, crossing exponentsx (A) ,x (B) . If one fuses the star centers and requires A and B to stay mutually avoiding, then the new crossing exponents, x (A ∧ B) andx (A ∧ B) , obey the star algebra [8,10] x where If, on the contrary, A and B are independent and can overlap, then by trivial factorization of probabilities, [10]. The rules (11), which mix bulk and boundary exponents, can be understood as simple factorization properties on a random Riemann surface, i.e., in quantum gravity [8,10], or as recurrence relations in R 2 between conformal Riemann maps of the successive mutually avoiding paths onto the line R [9]. On a random surface, U −1 (x) is the boundary dimension corresponding to the valuex in R × R + , and the sum of U −1 functions in Eq. (11) represents linearly the juxtaposition A ∧ B of two sets of random paths near their random frontier, i.e., the product of two "boundary operators" on the random surface. The latter sum is mapped by the functions U , V , into the scaling dimensions in R 2 [10]. The structure thus unveiled is so stringent that it immediately yields the values of the percolation crossing exponents x ℓ ,x ℓ of ref. [11], and our harmonic measure exponents x (S ℓ ∧ n) (8). First, for a set S ℓ = (∧P) ℓ of ℓ crossing paths, we have from the recurrent use of (11) For percolation, two values of half-plane crossing exponentsx ℓ are known by elementary means:x 2 = 1,x 3 = 2 [3,11]. From (13) we thus find U −1 ( , which in turn gives We thus recover the identity, previously rigorously established in ref. [11], of L=ℓ+1 with the L-line exponents of the associated O (N = 1) loop model, in the "low-temperature phase". For L even, these exponents also govern the existence of k = 1 2 L spanning clusters [21,11], with the identity x C k = x ℓ=2k = 1 12 4k 2 − 1 in the bulk [21], andx C k =x ℓ=2k−1 = 1 3 k (2k − 1) in the half-plane [21,28,29]. The non-intersection exponents of k ′ Brownian paths are also given by x ℓ ,x ℓ for ℓ = 2k ′ [8], so we observe a complete equivalence between a Brownian path and two percolating crossing paths, in both the plane and half-plane. For the harmonic exponents in (8), we fuse the two objects S ℓ and (∨B) n into a new star S ℓ ∧ n (see Fig. 2), and use (11). We just have seen that the boundary ℓ-crossing exponent of S ℓ ,x ℓ , obeys U −1 (x ℓ ) = 1 2 ℓ. The bunch of n independent Brownian paths have their own half-plane crossing exponentx ((∨B) n ) = nx (B) = n, since the boundary dimension of a single Brownian path is triviallyx (B) = 1 [8]. Thus we obtain Specifying to the case ℓ = 3 finally gives from (10) (12) x (S 3 ∧ n) = 2 + 1 2 (n − 1) + 5 24 √ 24n + 1 − 5 , from which τ (n) (9), and D (n) Eq.(4) follow, QED. This formalism immediately allows many generalizations. For instance, in place of n random walks, one can consider a set of n independent self-avoiding walks P, which avoid the cluster fractal boundary, except for their common anchoring point. The associated multifractal exponents x (S ℓ ∧ ((∨P) n ) are given by the same formula (14), with the argument n in U −1 simply replaced by the boundary scaling dimension of the bunch of independent SAW's, namely [10]x ((∨P) n ) = nx (P) = n 5 8 ,
2019-04-14T02:18:42.264Z
1999-01-03T00:00:00.000
{ "year": 1999, "sha1": "815a263d03f1903d79bd5e7af44488cbe1db1475", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9901008", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "815a263d03f1903d79bd5e7af44488cbe1db1475", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235286217
pes2o/s2orc
v3-fos-license
Modeling of rotational oscillations in a diesel locomotive wheel-motor block

This article highlights the importance of the effective movement of rolling stock and of the coefficient of friction between the wheels and the rails in its operation. Factors affecting wheel-rail friction were considered. During the study, the dependence of the adhesion (coupling) coefficient between the wheel and the rails on the speed of movement was examined. It was found that the coefficient of adhesion decreases with increasing speed.

Introduction
The effective use of rolling stock remains a topical issue today. The problem of efficient movement and operation of rolling stock is inextricably linked with the problem of adhesion of the wheel at its point of contact with the rail. Considerable attention has regularly been paid to increasing the weight available for adhesion. Over the years, research has been conducted in various countries on the theory of adhesion and on its physical basis. The interaction of wheels with the rails and the processes that take place during locomotive movement have been studied by many scientists, and this work remains a topical issue. Experience of foreign and domestic railways shows that [1-4], as a result of self-oscillation of the traction forces in the traction transmissions of mainline locomotives, the coefficient of adhesion between rails and wheels decreases, leading to a certain reduction in usable diesel power. Therefore, in developed countries, including the United States, Great Britain, France, Spain, Germany, Japan, South Korea, China, the Russian Federation and others, one of the important factors in improving the efficiency of rolling stock is the introduction of traction modeling of locomotives.

Methods
To increase the efficiency of diesel locomotives, it is advisable to study the rotational vibrations that occur in the traction motor-reducer-wheelset assemblies. To do this, it is necessary to create a dynamic model of the locomotive in question. Figure 1 shows a dynamic model of a locomotive [5].
1. The following considerations are made for the mathematical modeling of this dynamic model:
• the rotation of the traction generator relative to the stator (diesel) is characterized by the moment of inertia Jг.
2. The torque Mg consumed in the operation of all traction electric motors (TEMs) is the driving torque.
3. The following rotational stiffnesses are taken into account in the model:
• Kg is the elastic connection of the diesel with the generator armature;
• К1, К2, К3, К4, К5, and К6 are the elastic couplings at points 1, 2, 3, 4, 5, and 6 between the armature of the generator (figure 1) and the traction electric motors.
To simplify what follows, the equations were written for a computational scheme with one WMB (points G and 1.1), as shown in figure 1. 2) The particular solutions are forced solutions that depend on the applied external torques. The homogeneous system of equations, whose right-hand sides are equal to zero, admits solutions of the form

φ(t) = A cos ωt,  φ1(t) = A1 cos ωt,  φ12(t) = A12 cos ωt,   (7)

where A, A1 and A12 are the amplitudes of the rotational vibrations of the generator, of the first wheelset, and of the corresponding traction motor, and ω is the angular frequency of the rotational vibrations.

Results
The system of equations for the amplitudes of the rotational oscillations, with the coefficients А11…А33 given by formulas (8)-(38), was solved according to formula (11), with a stepwise calculation in increments of 50 rad/sec over the range from ω = 50 rad/sec to ω = 300 rad/sec [9,10].
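The article does not reproduce the coefficient expressions (8)-(38), so the sketch below only illustrates, schematically, the stepwise calculation described above: it sweeps the angular frequency from 50 to 300 rad/s in 50 rad/s steps and solves a 3×3 linear system for the three vibration amplitudes. The coefficient matrix in `build_coefficients` and the load vector are hypothetical placeholders, not the actual formulas of the article.

```python
import numpy as np

def build_coefficients(omega: float) -> np.ndarray:
    """Hypothetical 3x3 coefficient matrix A11..A33 for a given angular frequency.

    In the article these entries follow from formulas (8)-(38); here they are
    placeholder expressions built from illustrative inertias J and stiffnesses K,
    chosen only to make the example runnable.
    """
    J_g, J_1, J_12 = 120.0, 80.0, 60.0        # placeholder inertias, kg*m^2
    K_g, K_1, K_12 = 2.0e6, 1.5e6, 1.0e6      # placeholder stiffnesses, N*m/rad
    return np.array([
        [K_g + K_1 - J_g * omega**2, -K_1,                          0.0],
        [-K_1,                        K_1 + K_12 - J_1 * omega**2, -K_12],
        [0.0,                        -K_12,         K_12 - J_12 * omega**2],
    ])

def amplitude_sweep() -> None:
    """Stepwise calculation of amplitudes over omega = 50..300 rad/s."""
    load = np.array([1.0e3, 0.0, 0.0])        # placeholder driving-torque vector
    for omega in range(50, 301, 50):
        amplitudes = np.linalg.solve(build_coefficients(float(omega)), load)
        print(f"omega = {omega:3d} rad/s  amplitudes = {amplitudes}")

if __name__ == "__main__":
    amplitude_sweep()
```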
Conclusion The results show that the thrust created by the traction forces of mainline diesel locomotives depends on the speed of rotation of the wheelsets.
2021-06-02T23:35:15.552Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d18cb43e36db46e23ae3641800c8ed34e404bc79", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1889/2/022017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d18cb43e36db46e23ae3641800c8ed34e404bc79", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
56323877
pes2o/s2orc
v3-fos-license
Tailoring of Microstructures and Tensile Properties in the Solidification of Al-11 Si (-x Cu ) Brazing Alloys Ternary Al-11wt %Si-(xwt %)Cu alloys are highly recommended as commercial filler metals for aluminum brazing alloys. However, very little is known about the functional inter-relations controlling the solidified microstructures characterizing processes such as torch and furnace brazing. As such, we evaluated two commercial brazing alloys, which are the Al-11wt %Si-3.0wt %Cu and Al-11wt %Si-4.5wt %Cu alloys: Cu contents typically trend in between the suitable alloying spectrum. We analyzed the effects of solidification kinetics over features such as the dendrite arm spacing and the spacing between particles constituting the eutectic mixture. Also, tensile properties were determined as a function of the dendrite microstructure dimensions. The parameters concerned for translating the solidification kinetics were either the cooling rate, or growth velocity related to the displacement of the dendrite tip, or the eutectic front. The relevant scaling laws representing the growth of these brazing alloys are outlined. The experimental results demonstrated that a 50% increase in Cu alloying (from 3.0 to 4.5 wt %) could be operated in order to obtain significant variations in the dendritic length-scale of the microstructure across the produced parts. Overall, the microstructures were constituted by an α-Al dendritic matrix surrounded by a ternary eutectic consisting of α-Al + Al2Cu + Si. The scale measurements committed to the Al2Cu eutectic phase pointed out that the increase in Cu alloying has a critical role on refining the ternary eutectic. Introduction Over a wide range of mechanical and thermal applications, multicomponent alloys pertaining to Al-Si systems are most commonly fabricated through processes such as foundry, brazing, and welding [1].The binary Al-Si alloys present a microstructure consisting of a primary phase, aluminum, or silicon, and a eutectic mixture of these two elements [2].Silicon is added to aluminum alloys to promote good wear resistance, high heat transfer coefficient, and low thermal expansion coefficient.The addition of Cu to Al-Si alloys is widely used in automotive engine components, such as engine blocks, cylinder heads, and pistons, because of the good castability and fluidity of Al-Si-Cu alloys [1]. Al-Si-Cu ternary alloys are becoming increasingly important in the aerospace and automotive industries, due to their low relative weight and good mechanical strength at relatively high temperatures, and good resistance to abrasion and weldability.In general, Al-Si-Cu ternary alloys have higher mechanical strength than Al-Si alloys and higher corrosion resistance than Al-Cu alloys [3][4][5].The increase in demand for materials with properties such as those of the Al-Si-Cu alloys, established the need to control the microstructure with more rigorous specifications.Therefore, good control of the solidification process is essential, since its influence can be noted even in the finished product [3,6].According to Zeren et al. [7] more research efforts are necessary for a better understanding of the mechanisms responsible for strength variations in Al-Si-xCu alloys. 
The solidification process and the intrinsic characteristics of the alloy to be solidified have a direct influence on the microstructure formation, which determines the final properties of a casting.The obtained casting parts exhibit mechanical characteristics that depend on inherent aspects occurring during solidification, such as: grain size, the scale of the phases forming the microstructure such as dendrite arm spacings and interphase spacings, the size and distribution of such phases, chemical composition heterogeneities, inclusions, and porosity.The understanding of solidification of aluminum alloys has fundamental importance for planning of manufacturing processes, since it allows a better understanding of the factors affecting microstructure, and consequently, the product quality [8][9][10][11][12][13]. For the manufacturing of castings with predefined local properties, a widely used process is foundry technology (metal/mold casting processes).The cooling rate related to the process of heat transfer from the casting into the mold is an aspect acting in parallel, which conditions the change of the size and distribution of individual components within the resulting microstructure.The kind of crystallizing phases and the process of nucleation and growth of grains are influenced by the cooling rate.Affecting the mechanical properties, particularly for the reduction of plastic deformation in Al-Si-Cu alloys, the presence of copper also induces the formation of intermetallic phases, such as Al 2 Cu [14].Additionally, intermetallic phases can be crystallized due to the presence of impurities such as Fe in these alloys, degrading the mechanical properties of castings [14].Commercial alloys such as 384.0 and A384.0 series, Al9Si3Cu(Fe) and ENAC 46000 are recommended for the die casting processes, and these alloys are in the range of between 10.5 to 12.0 wt % Si, and 3.0 to 4.5 wt % Cu [15]. Ceschini et al. [16] performed a study on the production of cast specimens of Al-10wt %Si-2wt %Cu alloy under controlled solidification conditions.The aim was to have samples that were associated with two values of secondary dendrite arm spacing (SDAS), of about 10 µm and 50 µm.The effect of the cooling rate and different Fe and Mn contents on the microstructure was evaluated, and consequently, the tensile and the fatigue properties of the Al-10wt %Si-2wt %Cu casting alloy were determined.The results showed that the cooling rate affected not only the SDAS values, but also the shape of the eutectic Si particles, and the size and volume fraction of the intermetallic compounds.A reduction of SDAS induced higher ultimate tensile strength (UTS) and elongation (EL) to the failure to be obtained.The highest UTS and EL reached 374 MPa and 12.1% for the alloy containing 0.5 wt % Fe and SDAS of 10 µm.On the contrary, in the samples with smaller SDAS, the EL degraded with increasing Fe and Mn contents, due to the larger volume fraction of Fe-rich intermetallic compounds [16].Wang et al. [17] proposed an alternative Al-13wt %Si-5wt %Cu-0.8wt%Fe alloy fabricated by metal/mold casting followed by a T6 solid solution heat treatment.The UTS and EL values reached 336 MPa and 0.72%, respectively.Even though obvious correlations between such properties and the microstructural features have been declared, no functional correlations have been proposed. By seeking available functional types of correlations between mechanical strength and dendritic spacing, one could refer to the research developed by Okayasu et al. 
[18].These authors reported excellent tensile properties for twin roll and Ohno continuous casting of Al-10.6wt%Si-2.5wt%Cu alloy samples, that is, UTS and EL at around 375 MPa and 10%, respectively.These properties have been associated with the fine round α-Al phase and tiny eutectic structures.A clear Hall-Petch relation has been derived, relating the yield tensile strength to the SDAS: σ y=0.2 = 6.1(SDAS) −1/2 + 48.5, where σ y (MPa) and SDAS (µm). Another important process technique is the joining process of aluminum alloys.Al-Si-Cu ternary alloys are also included among the alloys used for brazing [15].Brazing of aluminum alloys is considered to be difficult due to the low melting temperature of Al alloys and the high affinity of Al to oxygen.As a reliable and economical method for the bonding of aluminum alloys, brazing with Al-Si alloys has been adopted among a variety of joining techniques.Commercial aluminum brazes such as BAlSi-3 and BAlSi-4, with silicon contents between 7 and 13 wt % Si, have demonstrated some success in joining some Al alloys if the corrosion-chemical and mechanical strength aspects are considered.The working temperatures of these Al brazing alloys must be above 590 • C, due to the fact that these Al brazes having melting temperatures in the range of 575-610 • C. Hence, an important goal of the aluminum industry is the development of low-melting-point filler metals [19].The interaction between the joint components and growth of the intermetallic compounds at the joint interface can be prevented by shorter brazing cycles/lower brazing temperature [20]. The solidification path of any typical aluminum alloy is modified by the addition of major and minor alloying elements, and these have a significant impact on the final microstructure [21].With copper addition to the Al-Si alloy, the solute is rejected from both of the eutectic constituents.The equilibrium melting point changes locally from a low value at the interface in the direction of the higher liquidus temperature for the alloy, caused by solute segregation ahead of the eutectic interface.The melt in this boundary layer is constitutionally undercooled, when the actual temperature of the melt is less than the equilibrium liquidus temperature.Other changes might be expected by modifying the equilibrium liquidus temperature.This includes variations in the surface tension affecting the wetting angle between the nuclei present in the boundary solid layer and the melt, and changes in the chemical driving force for nucleation.Fundamentally, significant changes can be expected in the nucleation behavior of the Al-Si eutectic due to ternary solute segregation [22].Kaya and Aker [23], for instance, demonstrated, through experimental data, that the Si flakes forming an Al-Si eutectic alloy could change with alloying elements.Additions of Cu, Co, Ni, Sb, and Bi to an Al-12.6%Si eutectic alloy resulted in finer Si flakes. A study was carried out by Chang et al. [19], where Al-10.8wt%Si-10wt %Cu, and Al-9.6wt %Si-20wt %Cu filler metals were used for the brazing of 6061 aluminum alloy at 560 • C. The results demonstrated that the addition of 10 wt % of copper into the Al-12wt %Si filler metal lowered the solidus temperature from 586 • C to 522 • C, and the liquidus temperature from 592 • C to 570 • C. With the increase in copper content to 20 wt % into the Al-12wt %Si filler metal, the liquidus temperature decreased from 592 • C to 535 • C. 
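As a small worked example of the Hall-Petch-type correlation quoted above from Okayasu et al. [18], σ_y=0.2 = 6.1·(SDAS)^(−1/2) + 48.5 with σ_y in MPa and SDAS in µm, the following Python snippet evaluates the predicted yield strength for a few secondary dendrite arm spacings; the SDAS values chosen here are illustrative only.

```python
def yield_strength_mpa(sdas_um: float) -> float:
    """Hall-Petch-type fit of Okayasu et al. [18]: sigma_y = 6.1*SDAS**(-1/2) + 48.5."""
    return 6.1 * sdas_um ** -0.5 + 48.5

if __name__ == "__main__":
    for sdas in (10.0, 25.0, 50.0):   # illustrative SDAS values in micrometers
        print(f"SDAS = {sdas:5.1f} um  ->  sigma_y ~ {yield_strength_mpa(sdas):5.1f} MPa")
```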
The highest value obtained for the shear strength referred to the 6061 Al alloy brazed with the Al-10.8wt%Si-10wt %Cu filler metal, which reached 67 MPa for a 60 min brazing time. According to these authors, the higher hardness of the 6061 aluminum alloy substrate near the butt joint interface after brazing with Al-Si-Cu filler metal could be associated with the formation of Al2Cu intermetallic compounds, due to the fact that copper diffuses towards the 6061 Al alloy [19].

To further optimize the brazing processes associated with the aforementioned Al alloy, the development of a better understanding of the solidification characteristics of Al-Si-Cu alloys could assist with optimization of the brazing process parameters. The aim of the present work was to perform a detailed characterization of the microstructure of an Al-11wt %Si alloy with additions of 3.0 and 4.5 wt % of copper (Cu), directionally solidified (DS) under transient heat flow conditions. This means describing the morphologies, dimensions, and representative features of the eutectic phases for a wide range of solidification cooling rates. Both dendritic and eutectic length-scales of the microstructure were assessed. Correlations between the tensile properties, hardness, the dendritic arm spacing, and the spacing between particles constituting the eutectic were also investigated. Finally, advanced X-ray Diffraction (XRD) and Scanning Electron Microscopy with Energy Dispersive Spectroscopy (SEM-EDS) techniques were performed to determine the main characteristics of the eutectic phases formed along the length of the DS castings.

Experiment to Follow Solidification Kinetics
The Al-11wt %Si-3.0wt%Cu and Al-11wt %Si-4.5wt%Cu alloy castings were generated using a transient directional solidification system. A quantity of 1200 g of commercial purity Al, Si, and Cu was firstly melted in a dense high purity graphite crucible by induction heating up to 750 °C in order to melt the Al and homogenize the other elements by diffusion. Then, the temperature was reduced to 700 °C and held for 30 min before the directional solidification procedure. A detailed description of processing by transient directional solidification is given elsewhere [24,25]. Iron (Fe) was found to be the main impurity in the tested samples, which remained inside the suitable commercial Fe wt % spectrum of 0.205 ± 0.065. Detailed descriptions of extraction of samples from the DS castings for metallography and mechanical tests are given elsewhere [12].

Liquidus and eutectic temperatures were determined for the two Al-Si-Cu based alloys through experiments in which the alloy was slowly cooled in a well-insulated crucible, thus permitting the transformation temperatures to be determined, as illustrated in Figure 1. The Al alloys were first melted in an induction furnace. After that, the molten alloy was poured into two cavities; that is, either a crucible dedicated to the determination of cooling curves or a split mold inserted into the solidification system. A remelting operation of the alloy was run inside the mold since radial electrical wiring heated up the cylindrical stainless steel split mold (see Figure 1). When the melt temperature achieved 3% above the liquidus temperature, the furnace windings were disconnected, and at the same time, the external water flow at the bottom of the container began the cooling down procedure, thus permitting the onset of solidification.

The solidification system permits the placing of a number of fine K-type thermocouples along the length of the casting. Eight thermocouples were strategically spaced between each other until they were 96 mm from the cooled bottom of the casting. The frequency of temperature data acquisition was 1 Hz on each thermocouple. Postmortem examination regarding the determination of exact positions of the thermocouple tips was carried out.

Microstructural Characterization and Tensile Tests
The macrostructure of each directionally solidified (DS) casting was revealed after assessing and grinding the whole longitudinal middle section surface with #600 grid paper. The etching solution used was composed of 95 mL of distilled water, 2.5 mL of HNO3, 1.5 mL of HCl, and 1 mL of HF, which was applied for a couple of seconds.
Longitudinal and transverse samples at various positions from the cooled bottom of the DS castings were mounted (i.e., 3 mm, 8 mm, 13 mm, 18 mm, 23 mm, 33 mm, 48 mm, 68 mm, and 88 mm), polished, and etched with a solution of 0.5% HF in water over 20 s, and then examined using an optical microscope (Olympus Co., Tokyo, Japan).The length-scale of the dendritic matrix was characterized by the primary (λ 1 ), secondary (λ 2 ), and tertiary (λ 3 ) dendritic arm spacings.Forty measurements were performed for each microstructural spacing of each selected position when a certain alloy composition is considered. One of the targets of the present study was to correlate the dendrite spacing with the tensile properties measured in uniaxial tensile tests of the examined ternary Al-Si-Cu alloys.In order to obtain these measurements, several specimens were extracted along the length of the DS alloy castings.Each specific position chosen for tensile tests allowed three specimens to be extracted so that the average tensile properties regarding strength and ductility and their standard deviations could be determined.These specimens were subjected to tensile tests according to specifications of the ASTM Standard E 8M/04 at a strain rate of about 3 × 10 −3 s −1 .Microhardness tests were performed on the transversal sections of the DS samples, using a test load of 1000 g and a dwell time of 10 s.The adopted Vickers microhardness (Hardness tester, Shimadzu, Kyoto, Japan) was the average of at least 10 indentation tests on each sample. The sizes of the eutectic phases were examined through a scanning electron microscope (SEM).Back-scattered electron (BSE) examinations were carried on deep etched samples (HCl over three minutes).The instrument used was a Philips SEM (XL-30 FEG, FEI, Hillsboro, OR, USA) equipped with an energy dispersive X-ray spectrometer (EDS).An analysis of the SEM images allowed the measurements of the spacing of the Al 2 Cu particles, λ Al2Cu as well those between the eutectic Si particles, λ Si .Measurements of the eutectic-related spacings, λ Al2Cu and λ Si , were performed using the line intercept method [13,23].Considering that the Al-Si eutectic has an anomalous structure, approximately 20-30 minimum spacing, λ m , and maximum spacing, λ M , values were measured in the various positions along the length of the DS castings in order to obtain the average λ Si .The average spacing λ Si is the arithmetic average between λ m and λ M . The X-ray diffraction (XRD) patterns of phases formed along the length of the Al-Si-Cu alloy castings were acquired by a Siemens D5000 diffractometer (Siemens, Munich, Germany) with a 2-theta range from 20 • to 90 • , CuKα radiation and a wavelength, λ, of 0.15406 nm. 
Results and Discussion
The as-cast macrostructures depicted in Figure 2 revealed the prevalence of very fine columnar grains after chemical etching of the DS Al-11wt %Si-3.0 and -4.5wt %Cu alloy castings. Such macro-morphologies prevailed along the entire length of the casting with a few equiaxed grains produced at the very top of the castings. These structures enabled a wide-ranging examination of the fabricated castings, with emphasis on the formed dendritic arrangements. The longitudinal sections could be assessed for the growth of secondary dendrite branches, whereas the cross sections along the length of the DS bodies were useful for the determination of either primary or tertiary dendritic spacing. The growth of well-aligned columnar structures also signifies that the heat flow during solidification remained unidimensionally driven.

From the start of the water flow in the directional solidification experiment and considering the recording of experimental data, Figure 3 shows the variations of temperature that occurred for each thermocouple within the castings. It is worth noting that for thermocouples near the water-cooled surface, temperature changes quickly, whereas variation was much slower for positions monitored farther from the bottom cooled surface. The proper evaluation of these cooling curves provided the experimental variations of solidification cooling rates and growth velocities, as is presented next.

Figure 4 shows the time evolutions of the liquidus front along the lengths of both the Al-11wt %Si-3.0wt%Cu and Al-11wt %Si-4.5wt%Cu alloy castings during cooling. These plots were generated by the previous information on the liquidus temperatures of each alloy, and by monitoring their transit on each of the engaged thermocouples at various positions (P) along the length of the casting. As a consequence of these plots, growth velocity (vL) experimental tendencies in Figure 5a,b were established for both evaluated alloys. These vL values are direct results of the time-derivatives of the experimental functions in Figure 4; vL represents the rate of displacement of the liquidus isotherm. There is no significant change between the vL plots of both examined alloys. Analogously, rates of displacement of the Al-Si binary eutectic (vBE) and of the Al-Si-Al2Cu ternary eutectic (vTE) could be obtained from the cooling curve analyses of both alloys. The calculation of such velocities is very important, since their magnitude is directly related to the eutectic scales and the morphology developed from each eutectic reaction.

The determination of the tip cooling rate, ṪL, as a function of position (P) in the casting was carried out by computing the time-derivative of each cooling curve (dT/dt) right after the passage of the liquidus isotherm by each thermocouple. A large spectrum of cooling rates can be seen in Figure 5c,d. Further, when comparing the cooling-down regimes of both alloys it can be observed that the alloy containing higher Cu content is associated with higher levels of cooling rates. Although the Cu contents here lie inside the commercial spectrum of braze Al fillers, the levels of the cooling rates of each alloy differ from each other.

Figures 6 and 7 present the optical micrographs of the Al-11wt %Si-3.0 and 4.5wt %Cu braze alloy samples. In the micrographs, the aluminium-rich dendrites can be seen being enveloped by the products of the eutectic reactions; that is, the Al-Si eutectic and the Al-Si-Al2Cu ternary eutectic. The micrographs refer to different positions along the length of the DS castings. Very fine to large length-scales of the formed dendritic structures can be observed. This is explained due to the wide range of experimental solidification cooling rates experienced across the fabricated DS castings.

Figure 9 depicts the mean experimental values, along with the standard variation, of primary (λ1), secondary (λ2), and tertiary (λ3) dendritic spacings as a function of cooling rate (primary, tertiary), and growth rate (secondary). Power function fittings were suitably derived to represent the experimental scatter of each alloy. Analyzing the experimental tendencies in the graphs of Figure 9, it can be inferred that the 50% increase in the Cu alloying (from 3.0 to 4.5 wt %) had significant impacts on λ1, λ2, and λ3. This means increases in λ1, λ2, and λ3 by 88%, 45%, and 42% respectively, for the alloy containing 4.5 wt % of Cu, as compared to the other alloy composition. The multipliers of the experimental equations varied when compared with the experimental tendencies of the two evaluated alloys for a particular microstructural parameter. However, the representative exponents referring to each microstructural parameter were preserved regardless of the considered alloy. A −2/3 power law characterizes the experimental variations of λ2 with vL, while a −0.55 exponent is able to represent the two tendencies for λ1.

Circulating between the dendrites is a liquid rich in solute. The dendritic array is formed by lateral instabilities of higher order, which are the secondary and tertiary branches developed from the primary stems. The higher order tertiary formations are those that are located closer to the interdendritic portions, which are prone to commence the eutectic reaction. It appears that such proximity with the eutectic mixture may affect the growth of tertiary branches, which becomes less sensitive to the cooling rate variations, resulting in a relationship λ3 = constant · ṪL^(−1/4).
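A minimal sketch of how such power-law growth laws can be extracted from measured spacings is shown below (Python): a linear least-squares fit in log-log space returns the multiplier and the exponent, which can then be compared with the −0.55, −2/3 and −1/4 values quoted above. The (cooling rate, spacing) pairs used here are made-up illustrative numbers, not the measured data of Figure 9.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear regression of log(y) on log(x); returns (a, b)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return float(np.exp(log_a)), float(b)

if __name__ == "__main__":
    # Illustrative (cooling rate [K/s], primary spacing [um]) pairs only.
    cooling_rate = np.array([0.2, 0.8, 2.0, 6.0, 15.0])
    lambda_1 = np.array([650.0, 300.0, 180.0, 100.0, 60.0])
    a, b = fit_power_law(cooling_rate, lambda_1)
    print(f"lambda_1 ~ {a:.0f} * Tdot^({b:.2f})")   # exponent expected near -0.55
```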
The solidification paths of the Al-Si-Cu braze alloys were computed using the Thermo-Calc software (Thermo-Calc Software AB, Solna, Sweden). This was possible by using the assumption of Scheil conditions and the TCAL5 Al-based Alloys Database. Figure 10 shows the isopleth simulation relative to the Al-11Si-xCu system, considering then a parameterization of 11wt %Si. The solidification evolutions of these alloys were also calculated, as can be seen in the bottom plots of Figure 10. In this case, an impurity level of 0.2 wt % Fe was considered in the alloys composition to permit more realistic sequences of precipitation to be estimated. According to Thermo-Calc results, the eutectic reaction may occur at 522 °C for both Al-11wt %Si-3.0 and -4.5wt %Cu alloys. The products of this reaction are: α-Al + AlFeSi + Si + Al2Cu. A comparison of the graphs at the bottom of Figure 10 shows that a mass fraction of 9% is associated with the eutectic structure for the alloy containing 3.0 wt % Cu, whereas it is 14% for the Al-11wt %Si-4.5wt%Cu alloy.

The following precipitations (fractions %) occurred for the Al-11wt %Si-3.0 and -4.5wt %Cu alloys respectively: 9% and 6% of α-Al, followed by nearly 50% and 41% of the solid fraction of Si; after that, the growth of 32% and 39% of solid fractions related to the AlFeSi phase, before the eutectic reaction occurring in the remaining liquid. Accumulation of the proportions of α-Al with that for the binary eutectic Si resulted in ~60% for the Al-11wt %Si-3.0wt%Cu alloy, while less than 50% of the fraction of these phases is associated with the Al-11wt %Si-4.5wt%Cu alloy. Such a combination of mass fractions remains important since, according to Okayasu [18], fine α-Al phases and tiny silicon structures may provide superior tensile and fatigue properties.
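The Scheil (Gulliver-Scheil) assumption invoked for these Thermo-Calc calculations has, for a single solute with a constant partition coefficient k, the closed-form liquid composition C_l = C_0(1 − f_s)^(k−1). The sketch below uses that expression only to illustrate how the remaining liquid enriches in solute as solidification proceeds; the partition coefficient and nominal composition are generic illustrative values, not the multicomponent TCAL5 description actually used here.

```python
import numpy as np

def scheil_liquid_wtpct(c0: float, k: float, fs: np.ndarray) -> np.ndarray:
    """Gulliver-Scheil liquid composition C_l = C0 * (1 - f_s)**(k - 1)."""
    return c0 * (1.0 - fs) ** (k - 1.0)

if __name__ == "__main__":
    c0, k = 4.5, 0.15   # illustrative nominal Cu content (wt%) and partition coefficient
    fs = np.linspace(0.0, 0.9, 10)
    for f, cl in zip(fs, scheil_liquid_wtpct(c0, k, fs)):
        # The liquid keeps enriching in Cu; in the binary Al-Cu system solidification
        # would finish as eutectic once the liquid reaches roughly 33 wt% Cu.
        print(f"f_s = {f:4.2f}   C_liquid ~ {cl:6.2f} wt% Cu")
```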
The phases within the directionally solidified alloy samples have been identified by XRD patterns, as can be seen in Figure 11.The XRD spectra corresponding to different cooling rates along the length of the castings reveal the occurrence of four different phases, which are α-Al, Al 2 Cu, and Si and AlFeSi.The presence of the Fe-bearing phase was next confirmed through SEM analysis of the microstructures, as seen in Figures 12-15. Metals 2018, 8, x FOR PEER REVIEW 14 of 23 fraction of these phases is associated with the Al-11wt.%Si-4.5wt.%Cu alloy.Such a combination of mass fractions remains important since-according to Okayasu [18]-fine α-Al phases and tiny silicon structures may provide superior tensile and fatigue properties. The phases within the directionally solidified alloy samples have been identified by XRD patterns, as can be seen in Figure 11.The XRD spectra corresponding to different cooling rates along the length of the castings reveal the occurrence of four different phases, which are α-Al, Al2Cu, and Si and AlFeSi.The presence of the Fe-bearing phase was next confirmed through SEM analysis of the microstructures, as seen in Figures 12 to 15 Backscattered electron (BSE) images using SEM, as well as EDS mapping and EDS point analyses, were undertaken to characterize the secondary phases (morphology and chemistry) in the Al-Si-Cu alloy castings.The images in Figures 12 and 13 indicate that the microstructure contains binary Al-Si eutectic (gray color areas) and ternary eutectic structures consisting of Al + Si + Al2Cu (brighter areas in the images).It can be clearly seen that both eutectics change appreciably in scale as a function of the concerned cooling rate sample as characterized by the images at the left (fast cooling) and right (intermediate cooling) sides of Figures 12 and 13.Backscattered electron (BSE) images using SEM, as well as EDS mapping and EDS point analyses, were undertaken to characterize the secondary phases (morphology and chemistry) in the Al-Si-Cu alloy castings.The images in Figures 12 and 13 indicate that the microstructure contains binary Al-Si eutectic (gray color areas) and ternary eutectic structures consisting of Al + Si + Al 2 Cu (brighter areas in the images).It can be clearly seen that both eutectics change appreciably in scale as a function of the concerned cooling rate sample as characterized by the images at the left (fast cooling) and right (intermediate cooling) sides of Figures 12 and 13.Higher magnification backscattered electron images were used to identify the growth of the Febearing intermetallic needles, as indicated by arrows in Figure 13.Compositional analysis of these particles using EDS showed that the phases were: 1. α-Al; 2. Al2Cu; 3. Si; 4. AlFeSi.The detailed quantitative chemical composition analyses of solid phases for two different cooling rate samples of the Al-11wt.%Si-4.5wt.%Cu alloy are given in Figure 15.These particles can be considered to be coarse enough to permit an accurate determination of their compositions. 
Samples related to the slow-cooled regions of the DS castings were chosen to be subjected to examination through elemental SEM-EDS mapping, as can be seen in Figure 14.The final distribution of the elements can be seen within the phases and constituents.The red contrast for Al showed a high intensity within the α-Al dendrite branches, as expected, while the green for Si was concentrated in both the binary and the ternary eutectic Si phase.Spots concentrated in Fe were noted in the fourth EDS mapping in the bottom images of Figure 14. In the fast-cooled samples of Figures 12 and 13, the morphology of the binary eutectic Si phase is polyhedral-like.However, for all the other samples related to lower cooling rates, the morphology of the eutectic Si is flake-like as can be seen in Figures 12-15.The formation of polyhedral silicon particles in hypoeutectic Al-Si based alloys has not been commonly reported in the open literature. In the present investigation, the Si precipitates although inhibited from growing in the samples, solidified under faster conditions.Under a fast regime of solidification, the liquid becomes enriched in Si, which remains constrained in between the primary stems.As the temperature declines, such particles may grow preferentially in the areas surrounding the Al2Cu particles, that is, the previous Cu-enriched zones.As a consequence, a flake-like morphology may not be attained since the growth is interrupted forming polyhedral Si particles.The preferential grown polyhedral silicon particles in fast-cooled samples are located neighbouring the Al2Cu particles, as can be seen in Figures 12d and 13d.These morphologies in the Al-11wt.%Si-3.0wt.%Cu and Al-11wt.%Si-4.5wt.%Cu alloys were Higher magnification backscattered electron images were used to identify the growth of the Fe-bearing intermetallic needles, as indicated by arrows in Figure 13.Compositional analysis of these particles using EDS showed that the phases were: 1. α-Al; 2. Al 2 Cu; 3. Si; 4. AlFeSi.The detailed quantitative chemical composition analyses of solid phases for two different cooling rate samples of the Al-11wt %Si-4.5wt%Cu alloy are given in Figure 15.These particles can be considered to be coarse enough to permit an accurate determination of their compositions. Samples related to the slow-cooled regions of the DS castings were chosen to be subjected to examination through elemental SEM-EDS mapping, as can be seen in Figure 14.The final distribution of the elements can be seen within the phases and constituents.The red contrast for Al showed a high intensity within the α-Al dendrite branches, as expected, while the green for Si was concentrated in both the binary and the ternary eutectic Si phase.Spots concentrated in Fe were noted in the fourth EDS mapping in the bottom images of Figure 14. 
In the fast-cooled samples of Figures 12 and 13, the morphology of the binary eutectic Si phase is polyhedral-like.However, for all the other samples related to lower cooling rates, the morphology of the eutectic Si is flake-like as can be seen in Figures 12-15 In the present investigation, the Si precipitates although inhibited from growing in the samples, solidified under faster conditions.Under a fast regime of solidification, the liquid becomes enriched in Si, which remains constrained in between the primary stems.As the temperature declines, such particles may grow preferentially in the areas surrounding the Al 2 Cu particles, that is, the previous Cu-enriched zones.As a consequence, a flake-like morphology may not be attained since the growth is interrupted forming polyhedral Si particles.The preferential grown polyhedral silicon particles in fast-cooled samples are located neighbouring the Al 2 Cu particles, as can be seen in Figures 12d and 13d.These morphologies in the Al-11wt %Si-3.0wt%Cu and Al-11wt %Si-4.5wt%Cu alloys were shown to be associated with cooling rates of 14.4 K/s and 20.8 K/s respectively, i.e., for a relative position in the casting of P = 3 mm. The experimental evolutions of the Si spacing, λ Si , and the Al 2 Cu spacing, λ Al2Cu , as a function of binary eutectic growth velocity, v BE , and as a function of the ternary eutectic growth velocity, v TE , were experimentally determined, and are plotted in Figure 16.Experimental growth laws in the form of power functions were established.shown to be associated with cooling rates of 14.4 K/s and 20.8 K/s respectively, i.e., for a relative position in the casting of P = 3 mm.The experimental evolutions of the Si spacing, λSi, and the Al2Cu spacing, λAl2Cu, as a function of binary eutectic growth velocity, vBE, and as a function of the ternary eutectic growth velocity, vTE, were experimentally determined, and are plotted in Figure 16.Experimental growth laws in the form of power functions were established.The binary Al-Si eutectic was characterized through the experimental evolution of λSi as a function of the eutectic growth velocity, as can be seen in Figure 16a.Figure 16a also synthesizes the experimental tendency (dot line) from the study by Kaya [23] (steady-state solidification of the Al-12.6wt.%Si-2.0wt.%Cu alloy) as a function of growth rate.It can be seen that the experimental eutectic growth law, derived in the present study, shows a higher slope if compared to the experimental tendency of the stationary regime.This demonstrates that the unsteady-state growth of eutectic Si in ternary Al-Si-Cu alloys seems to remain more sensitive to the variations in the solidification thermal parameters.The binary Al-Si eutectic was characterized through the experimental evolution of λ Si as a function of the eutectic growth velocity, as can be seen in Figure 16a.Figure 16a also synthesizes the experimental tendency (dot line) from the study by Kaya [23] (steady-state solidification of the Al-12.6wt%Si-2.0wt%Cu alloy) as a function of growth rate.It can be seen that the experimental eutectic growth law, derived in the present study, shows a higher slope if compared to the experimental tendency of the stationary regime.This demonstrates that the unsteady-state growth of eutectic Si in ternary Al-Si-Cu alloys seems to remain more sensitive to the variations in the solidification thermal parameters. 
It was seen that a lower Al 2 Cu spacing characterizes the alloy with higher Cu content as compared to those of the other alloy (i.e., alloying of 3.0 wt % Cu).This is because the multiplier of the experimental equations in Figure 16b decreases from 0.60 to 0.42 with increase in the alloy Cu content, while preserving the same exponent for both experimental equations. The exponents of the reported eutectic growth laws as a function of the growth rate were found to be close to −1/2 [26,27] rather than −3/4, as can be observed in the present results of Figure 16b.The −1/2 exponent is well established for the regular eutectic growth in binary Al-Cu alloys.The power growth law λ 2 v = constant is that which was originally proposed by Jackson and Hunt for the growth of regular eutectics [26]. However, it is important to remember that the microstructure of the referred ternary eutectic is formed by a three-phase mixture of silicon and Al 2 Cu in an α-Al matrix.As such, two major factors appear to contribute to the higher sensitivity of the variations in the growth rate rendered by a higher exponent: (i.) the thermal instability induced by the unsteady-state regime of heat flow extraction during the growth of the Al 2 Cu eutectic phase, and (ii.) the solute-driven unsteadiness due to the buildup of Si rejected during the growth of the AlSi eutectic. The evolutions of the experimental tensile mechanical properties versus the primary dendritic spacing are shown in Figure 17 for: (a) σ u -ultimate tensile strength, (b) σ y=0.2 -yield strength, (c) δ-elongation-to-fracture, and (d) HV-Vickers hardness.Hall-Petch-type correlations were adopted to represent some of the experimental scatters.It was seen that a lower Al2Cu spacing characterizes the alloy with higher Cu content as compared to those of the other alloy (i.e., alloying of 3.0 wt.% Cu).This is because the multiplier of the experimental equations in Figure 16b decreases from 0.60 to 0.42 with increase in the alloy Cu content, while preserving the same exponent for both experimental equations. The exponents of the reported eutectic growth laws as a function of the growth rate were found to be close to −1/2 [26,27] rather than −3/4, as can be observed in the present results of Figure 16b.The −1/2 exponent is well established for the regular eutectic growth in binary Al-Cu alloys.The power growth law λ 2 v = constant is that which was originally proposed by Jackson and Hunt for the growth of regular eutectics [26]. However, it is important to remember that the microstructure of the referred ternary eutectic is formed by a three-phase mixture of silicon and Al2Cu in an α-Al matrix.As such, two major factors appear to contribute to the higher sensitivity of the variations in the growth rate rendered by a higher exponent: (i.) the thermal instability induced by the unsteady-state regime of heat flow extraction during the growth of the Al2Cu eutectic phase, and (ii.) the solute-driven unsteadiness due to the buildup of Si rejected during the growth of the AlSi eutectic. 
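To make the comparison between the two candidate exponents concrete, the short Python sketch below checks which scaling collapses a set of (growth velocity, spacing) pairs to a constant: the Jackson-Hunt form λ²·v = constant (exponent −1/2) or the steeper λ ∝ v^(−3/4) behaviour reported here, for which λ^(4/3)·v should be constant instead. The numbers are illustrative placeholders, not the measured λ_Al2Cu data of Figure 16b.

```python
import numpy as np

# Illustrative (v_TE [mm/s], lambda_Al2Cu [um]) pairs only, not the data of Figure 16b.
v_te = np.array([0.05, 0.1, 0.2, 0.5, 1.0])
lam = np.array([3.9, 2.3, 1.4, 0.70, 0.42])

# Jackson-Hunt regular-eutectic growth predicts lambda**2 * v = constant (exponent -1/2) ...
print("lambda^2 * v    :", np.round(lam ** 2 * v_te, 3))
# ... whereas a -3/4 exponent predicts lambda**(4/3) * v = constant.
print("lambda^(4/3) * v:", np.round(lam ** (4.0 / 3.0) * v_te, 3))
```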
The evolutions of the experimental tensile mechanical properties versus the primary dendritic spacing are shown in Figure 17 for: (a) σu-ultimate tensile strength, (b) σy=0.2-yieldstrength, (c) δelongation-to-fracture, and (d) HV-Vickers hardness.Hall-Petch-type correlations were adopted to represent some of the experimental scatters.Some aspects related to the Al-11wt.%Si-3.0wt.%Cu alloy may be synthetized.The combined mass fraction of α-Al + Si is ~60%, the ternary eutectic is 9% in fraction, and the eutectic spacing is higher in of about 43%, as compared to the other alloy.In contrast, the Al-11wt.%Si-4.5wt.%Cu alloy is characterized by ~47% of mass fraction of α-Al + Si, and 14% of the mass fraction of the ternary Some aspects related to the Al-11wt %Si-3.0wt%Cu alloy may be synthetized.The combined mass fraction of α-Al + Si is ~60%, the ternary eutectic is 9% in fraction, and the eutectic spacing is higher in of about 43%, as compared to the other alloy.In contrast, the Al-11wt %Si-4.5wt%Cu alloy is characterized by ~47% of mass fraction of α-Al + Si, and 14% of the mass fraction of the ternary eutectic, as typified by a smaller eutectic spacing.Based on the mentioned data, it is possible that these changes in the different microstructure characteristics balance each other out, resulting in a similar evolution of the ultimate tensile strength and the strain-to-failure, as observed in Figure 17.Single Hall-Petch-type formulations are proposed in these cases to represent both alloys.σ u and δ increase with decreasing λ 1 along the length of the DS Al-Si-Cu alloy castings.This is because lower spacings contribute to a more extensive distribution of the second phases.If these hard particles are better distributed throughout the microstructure, higher strength values can be expected. The variations in λ 1 were shown to be not significant to change for both the yield tensile strength and the hardness.These properties are associated with lower stresses, as compared to σ u and δ.In the case of σ y , only the start of plastic deformation is achieved.Hardness indentations, in turn, exhibit relatively low plastic deformation.Under such lower stresses, the Al-11wt %Si-4.5wt%Cu alloy properties (σ y=0.2 and HV) are higher, as can be seen in Figure 17b,d.This is explained by the higher proportions of Fe-bearing intermetallic particles and interdendritic fine scale ternary eutectic, as compared to those related to the Al-11wt %Si-3.0wt%Cu alloy. Conclusions The following conclusions can be drawn from the present experimental investigation: • The solidification microstructures of the Al-11wt %Si-3.0 and 4.5wt %Cu braze alloy samples were shown to be characterized by aluminum-rich dendrites enveloped by the products of the eutectic reactions; that is, the Al-Si eutectic and the Al-Si-Al 2 Cu ternary eutectic. • The fraction of phases forming the Al-11wt %Si-3.0 and -4.5wt %Cu alloys changed, respectively: from 9% to 6% of α-Al, followed by nearly 50% and 41% of the solid fraction of Si; after that, the growth of 32% and 39% of the solid fraction related to the AlFeSi phase before the eutectic reaction occurred in the remaining liquid.Accumulating the proportions of α-Al with that for the binary eutectic Si resulted in ~60% for the Al-11wt %Si-3.0wt%Cu alloy, while less than 50% of fraction of these phases was associated with the Al-11wt %Si-4.5wt%Cu alloy. 
• Large dendritic variations in scale were shown to occur from the bottom to the top of the directionally solidified castings of both examined alloys, associated with a wide range of experimental solidification growth rates and cooling rates.This permitted the establishment of power function growth laws relating the primary (λ 1 ), secondary (λ 2 ), and tertiary (λ 3 ) dendritic spacings as a function of the cooling rate (primary, tertiary) and the growth rate (secondary): Al-11wt %Si-3.0wt%Cu Al-11wt %Si-4.5wt%Cu λ 1 = 265 ṪL where λ 1;2;3 (µm), v L (mm/s), and ṪL (K/s).This means that an increase in Cu alloying of 50% (from 3.0 to 4.5 wt %) was shown to be associated with an increase in λ 1 , λ 2 , and λ 3 by 88%, 45%, and 42%, respectively, for a given value of ṪL or v L . • The experimental evolutions of the Si spacing, λ Si , and the Al 2 Cu spacing, λ Al2Cu , as a function of binary eutectic growth velocity, v BE , and as function of the ternary eutectic growth velocity, v TE , were experimentally determined, and experimental growth laws in the form of power functions established: where λ Si;Al2Cu (µm); v BE;TE (mm/s). Figure 1 . Figure 1.Overall sketches showing two adopted techniques dealing with the molten Al-11Si(-xCu) alloys in the present research, which are: the determination of cooling curves and consequently liquidus/eutectic temperatures, and an assembly of the directional solidification system with insertion of thermocouples within a split stainless steel mold. Figure 1 . Figure 1.Overall sketches showing two adopted techniques dealing with the molten Al-11Si(-xCu) alloys in the present research, which are: the determination of cooling curves and consequently liquidus/eutectic temperatures, and an assembly of the directional solidification system with insertion of thermocouples within a split stainless steel mold. Figure 3 . Figure 3. Experimental cooling curves considering different positions along the length of the (a) Al-11wt %Si-3.0wt%Cu and (b) Al-11wt %Si-4.5wt%Cu alloy castings.Each mentioned position refers to a thermocouple inserted within the DS casting. Figure 4 .Figure 5 . Figure 4. Experimental time-dependent (t) displacements of the liquidus isotherm along the casting length (P) of the Al-11Si-xCu alloys.R 2 is the coefficient of determination. Figure 4 .Figure 4 .Figure 5 . Figure 4. Experimental time-dependent (t) displacements of the liquidus isotherm along the casting length (P) of the Al-11Si-xCu alloys.R 2 is the coefficient of determination. Figure 5 . Figure 5. Experimental variations of (a,b) growth rate and (c,d) tip cooling rate during unsteady state directional solidification of the ternary Al-11Si-xCu alloys.R 2 is the coefficient of determination. Figure 8 Figure8shows the magnitude modifications that occurred in the microstructural spacing values for the various positions along the length of the DS castings.Large dendritic variations in scale could be noted.For example, the mean primary dendrite arm spacing, λ1, varied from 45 μm to 930 μm in the samples extracted along the length of the Al-11wt.%Si-3.0wt.%Cu alloy casting.The secondary dendritic spacing, λ2, varied from 7.5 μm to 30 μm as a function of the relative position across the Al-11wt.%Si-4.5wt.%Cu alloy casting. 
Figure 8 Figure8shows the magnitude modifications that occurred in the microstructural spacing values for the various positions along the length of the DS castings.Large dendritic variations in scale could be noted.For example, the mean primary dendrite arm spacing, λ 1 , varied from 45 µm to 930 µm in the samples extracted along the length of the Al-11wt %Si-3.0wt%Cu alloy casting.The secondary dendritic spacing, λ 2 , varied from 7.5 µm to 30 µm as a function of the relative position across the Al-11wt %Si-4.5wt%Cu alloy casting. Figure 10 . Figure 10.Partial pseudo-binary Al-11wt %Si-xCu phase diagram, and graphs of temperature versus the solid fraction (solidification paths), computed by the Thermo-Calc software for alloys containing 0.2 wt % Fe. . Figure 14 . Figure 14.SEM images in the SE signal of the (a,f) Al-Si-Cu alloy samples along the casting length at different positions from the cooled bottom surface, and respective X-ray elemental mappings through energy dispersive X-ray spectrometry (EDS): Al-K (b,g), Si-K (c,h), Cu-K (d,i) and Fe-K (e,j). Figure 14 . Figure 14.SEM images in the SE signal of the (a,f) Al-Si-Cu alloy samples along the casting length at different positions from the cooled bottom surface, and respective X-ray elemental mappings through energy dispersive X-ray spectrometry (EDS): Al-K (b,g), Si-K (c,h), Cu-K (d,i) and Fe-K (e,j). Figure 15 . Figure 15.Representative SEM microstructures and EDS microprobe measurements (at.%) related to the positions (P) (a) 8 mm, and (b) 48 mm from the cooled bottom of the Al-11wt %Si-4.5wt%Cu alloy. . The formation of polyhedral silicon particles in hypoeutectic Al-Si based alloys has not been commonly reported in the open literature. Figure 17 . Figure 17.(a) Ultimate, (b) yield tensile strength, (c) elongation, and (d) Vickers hardness as a function of the primary dendritic spacing for the Al-11wt.%Si-xwt.%Cu alloys.R 2 is the coefficient of determination. Figure 17 . Figure 17.(a) Ultimate, (b) yield tensile strength, (c) elongation, and (d) Vickers hardness as a function of the primary dendritic spacing for the Al-11wt %Si-xwt %Cu alloys.R 2 is the coefficient of determination.
2018-12-19T04:22:17.515Z
2018-09-30T00:00:00.000
{ "year": 2018, "sha1": "aab0042459decac53d794790e98bd47382f03249", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/8/10/784/pdf?version=1538287536", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "aab0042459decac53d794790e98bd47382f03249", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
33476841
pes2o/s2orc
v3-fos-license
Cut from the same cloth: The convergent evolution of dwarf morphotypes of the Carex flava group (Cyperaceae) in Circum-Mediterranean mountains Plants growing in high-mountain environments may share common morphological features through convergent evolution resulting from an adaptative response to similar ecological conditions. The Carex flava species complex (sect. Ceratocystis, Cyperaceae) includes four dwarf morphotypes from Circum-Mediterranean mountains whose taxonomic status has remained obscure due to their apparent morphological resemblance. In this study we investigate whether these dwarf mountain morphotypes result from convergent evolution or common ancestry, and whether there are ecological differences promoting differentiation between the dwarf morphotypes and their taxonomically related large, well-developed counterparts. We used phylogenetic analyses of nrDNA (ITS) and ptDNA (rps16 and 5’trnK) sequences, ancestral state reconstruction, multivariate analyses of macro- and micromorphological data, and species distribution modeling. Dwarf morphotype populations were found to belong to three different genetic lineages, and several morphotype shifts from well-developed to dwarf were suggested by ancestral state reconstructions. Distribution modeling supported differences in climatic niche at regional scale between the large forms, mainly from lowland, and the dwarf mountain morphotypes. Our results suggest that dwarf mountain morphotypes within this sedge group are small forms of different lineages that have recurrently adapted to mountain habitats through convergent evolution. Introduction The adaptation of plant species to high-mountain environments frequently entails a series of convergent phenotypic traits that may be displayed by very different taxonomic groups. Geophytic or chasmophytic-frequently cushion-forming-growth forms, fleshy and/or tomentose leaves, CAM metabolism, and dwarfism are some of the characters that allow plants to survive in high-mountain ecosystems [1]. Morphological homoplasy induced by similar environmental conditions is frequently found in plants belonging to independently evolved lineages and at different taxonomic scales (Table 1), sometimes even confounding taxonomy (e.g. [2]). Some of these features are genetically determined, whereas others are the result of phenotypic plasticity, i.e. the interplay of genotype and environment (e.g. [3,4,5]; among others). In particular, dwarfism appears to be caused by both genetic and environmental causes [1]. Lower temperatures in mountains limit cell division and result in smaller plant sizes. Thus, alpine plants tend to have leaves that are, on average, one-tenth the size of those in conspecific lowland populations [6]. Carex sect. Ceratocystis Dumort. is a small group (5-19 species depending on the taxonomic treatment) of predominantly cespitose sedges mainly distributed in temperate Eurasia and North America [18]. Due to hybridization processes [19,20] and subtle morphological boundaries [21], this group displays a high degree of taxonomic complexity that has led many authors to generically refer to most of the taxa as the "C. flava group" ( [22][23][24][25][26][27][28], among others). Recent works have established the existence of six well-defined species in sect. Ceratocystis in the Western Palaearctic: C. castroviejoi Luceño & Jim.-Mejías, C. demissa Hornem., C. hostiana DC., C. flava L., C. lepidocarpa Tausch. and C. viridula Michx. [18,21]. 
Apart from these welldefined species, there is a set of populations of dwarf morphotypes in some western and central Circum-Mediterranean mountains (High Atlas, Sierra Nevada, Pyrenees-Cantabrian Range, and Alps; Fig 1) whose taxonomic status remains disputed ( Table 2). These are small-sized plants-usually no more than 10 cm high-that grow in mountain peat bogs and wet meadows. The differences between the dwarf and the well-developed morphotypes are known to be maintained under cultivation ( [29]; Jiménez-Mejías pers. obs.). The strong morphological resemblance that the dwarf morphotypes share led Chater [30] to consider them all as a single species (C. nevadensis Boiss. & Reut.) in Flora Europaea. Morphological affinities of dwarf morphotypes with well-developed individuals of the well-defined species were investigated in Table 1. Evolutionary studies showing cases of morphological homoplasy related to adaptation to mountain environments. Reference Convergent character Taxa (Family) Hypothesized adaptive function [7] Dwarf cushion-forming habit and white tomentose indumentum Veronica spp. (Plantaginaceae) Protection from low temperatures [8] Cushion-life form Multiple appearances in Angiosperms Resistance to low temperatures, freezing, and drought [9] Dwarf shrubby habit Alchemilla spp. (Rosaceae) Resistance to low temperatures and wind, microhabitat modification [10] Translucent bracts Rheum alexandrae Batalin and R. nobile Hook.f. & Thomson (Polygonaceae) Protection of the inflorescence, and pollen grains in particular, from low temperatures and ultraviolet light [11] Bright-colored bracteoles and thin stems Bupleurum commelynoideum H.Boissieu s.l. (Apiaceae) a recent morphometric analysis [21]. Populations from the Sierra Nevada and High Atlas Mountains could not be distinguished from each other. The Pyrenean-Cantabrian populations were found to represent the smaller plants within the clinal variation of well-developed individuals of C. lepidocarpa and C. flava. Accordingly, the dwarf habit was identified as the main cause of the lack of discriminant characters. Similarly, Schmid [22] reported that dwarf plants from the Alps represent the smaller-sized portion of C. flava variation and considered the high altitude where these plants grow as the reason behind their different morphology. Molecular phylogenetic analyses [18] additionally showed that different mountain population sets have affinities with different species. This could suggest independent origins of the shared morphological features. Taxonomy has traditionally been based on macromorphological features, since most sensory information processed by the human brain is visual [38]. More rarely, taxonomy relies on characters perceived by other human senses, such as smell or flavour. Micromorphology and anatomy are additional sources of variation that may be used to distinguish macromorphologically cryptic taxa [39]. Micromorphological characters of achene epidermis have been studied for taxonomic purposes in Carex. They have clarified taxonomy in several species complexes (e.g., Carex retrorsa Schwein [40]; Carex sect. Phacocystis Dumort. [41]; Carex sect. Phyllostachys Tuck. [42]; C. gynodynama Olney and C. mendocinensis W.Boott [43]; former genus Kobresia Willd. [44]). Within Carex sect. Ceratocystis, achene epidermis has been also studied [23,45]. The detected variability has been reported to be linked to chromosome number variation, and thus to the taxonomic structure of the group. 
The development of DNA-based barcoding methods has helped taxonomists detect and differentiate morphologically similar species, including cryptic taxa and species complexes (i.e. [38,[46][47][48][49][50]. A phylogeny of sect. Ceratocystis based on nuclear ITS and plastid rps16 and 5'trnK sequences [18] showed that DNA sequences have high taxon specificity and discriminant power (e.g. 88.2% of the plastid haplotypes were taxon-specific), which is in accordance with findings in North American populations of taxa of the same section [51]. These results suggest that sequencing of specific regions may be useful to circumscribe taxa within Carex sect. Ceratocystis. Species distribution modeling allows researchers to reconstruct the potential range of species/populations on the basis of climatic and other environmental variables [52,53]. This technique can be used to evaluate differences in ecological requirements between species and sets of populations. Currently, species distribution modeling and niche overlap analyses are being widely used in evolutionary studies to evaluate the role of climatic variables, together with geography, in the evolution of species and their ecological preferences [54,55]. In this paper, we use molecular, macromorphological and micromorphological data, as well as species distribution modeling, in the C. flava group to: 1) test the hypothesis of a lack of morphological differentiation between the sets of populations of dwarf morphotypes from different mountain ranges; 2) re-evaluate if the similar morphotypes are the result of convergence and how many times they have evolved; and 3) assess whether high mountain environments have induced the homoplasic morphological characteristics of these populations. Circumscription of study group We considered four different population groups of dwarf mountain morphotypes within the C. flava group ( Table 2; Fig 1): Alps, Pyrenees-Cantabrian Range, High Atlas (henceforth Atlas), and Sierra Nevada (see Introduction for further clarification). Samples from plants belonging to these groups were previously recovered in a clade also including C. lepidocarpa and C. flava, termed the bent-beaked clade [18]. In our phylogenetic study, we included samples of well-developed individuals ascribable to six well-defined taxa within sect. Ceratocystis (C. demissa, C. flava, C. hostiana, C. lepidocarpa subsp. lepidocarpa, C. lepidocarpa subsp. jemtlandica, and C. viridula). We relied on materials already deposited in herbaria, as well as on field collections in Spanish territory. No permission was needed for field collections, as these did not include threatened species or new prospections in protected areas. Macromorphological study We studied the morphological differentiation among the dwarf mountain morphotypes using classic multivariate techniques. The relationship between dwarf mountain morphotypes and well-developed individuals of the well-defined species were already studied in previous works [18,21,23,29]. Here, we intended to objectively assess the degree of morphological resemblance among the different population sets of the dwarf morphotypes. One hundred specimens from herbaria (BM, JACA, LEB, M, MA, MSB, NEU, RNG) and field collections (deposited at UPOS) from the four groups of dwarf morphotypes were included in the macromorphological morphometric study: 17 specimens from Sierra Nevada, 21 from the Alps, six from the Atlas, and 56 from Pyrenees-Cantabrian Range (S1 Appendix). 
From a previous PCA exploration using 24 characters, we selected eight macromorphological quantitative characters (Table 3) as those with the highest correlation values with other characters. Measurements were made using an ocular micrometer, with an accuracy of up to 0.1 mm. All observations were performed using a stereoscopic binocular Nikon SMZ645 microscope. Glume and utricle color were scored as qualitative characters; these characters were not included in the multivariate analyses. PCA was conducted to study the macromorphological variation (MPCA). Data were not transformed. Calculations were performed using a correlation matrix to minimize the effect of scale. Only principal components with eigenvalues greater than 1 were retained. Quartile distribution was calculated for each variable and morphotype to check the degree of overlapping. Characters were considered to be taxonomically useful when overlap was equal to or lower than a threshold of 25% [21,56]. All analyses were performed in PAWS Statistics 18. Micromorphological study Silica bodies in achene epidermal cells were studied in search of additional features that may help distinguish the different sets of dwarf populations. Forty-three achenes from the four population groups of dwarf morphotypes were studied: ten from Sierra Nevada, 17 from the Alps, three from the Atlas and 13 from Pyrenees-Cantabrian Range (S1 Appendix). All studied specimens except three were also included in the macromorphological study. Although most achenes were taken from different vouchers, scarcity of ripe fruits forced us to include several achenes from the same voucher in a few cases. In order to visualize silica bodies, achenes were digested in a solution of acetic anhydride and sulfuric acid (9:1) for 24 h at room temperature, washed with distilled water, and then subjected to a 10 min ultrasonic bath in a Nahita 621/2 sonicator. Finally, achenes were placed in Petri dishes and air-dried at room temperature. Sonication was repeated when periclinal and outer anticlinal walls were not totally removed with the treatment. Micromorphology was examined under scanning electron microscopy (SEM) after gold coating with a Hitachi S3000-N electron microscope at ×1200 magnification. Lateral and overhead images were taken of representative epidermal cells from each sample (Fig 2). Eight different measurements were taken directly from the photomicrographs. To minimize perspective and scale effects in lateral pictures, micromorphological characters were included as five different ratios, scaling heights by widths, therefore coding shape parameters (Table 3). Only igw was directly included as an average measurement, since it was obtained from the overhead picture and scale was expected to be constant. Being aware of differences in maximum width among epidermis cells of the same individual, we selected and measured the largest cells in each sample. The resulting micromorphological dataset was analyzed using PCA (mPCA). Quartile distribution for the macromorphological study was also calculated. All statistical analyses were performed in PAWS Statistics 18 as explained above. Table 3. Macro-and micromorphological characters studied for the dwarf populations of the Carex flava group. Loadings for the two principal components (PC-1 and PC-2) from MPCA (macromorphlogy) and mPCA (micromorphology) are provided; the highest loadings for each component are marked in bold. 
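A minimal sketch of the multivariate workflow just described (PCA on a correlation matrix, i.e. on standardized characters, retention of components with eigenvalues greater than 1, and a pairwise quartile-overlap screen against the 25% threshold) is given below. The data matrix, group labels and the particular way of quantifying quartile overlap are illustrative assumptions; the original analyses were run in PAWS Statistics 18.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical specimen-by-character matrix (rows: specimens, cols: quantitative characters)
X = rng.normal(size=(100, 8))
groups = rng.integers(0, 4, size=100)     # stand-in for the 4 dwarf-morphotype population sets

# PCA on the correlation matrix = PCA of standardized variables
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

keep = eigval > 1.0                        # retain only components with eigenvalue > 1
scores = Z @ eigvec[:, keep]               # specimen scores on the retained components
print("retained components:", int(keep.sum()),
      "explained variance fractions:", np.round(eigval[keep] / eigval.sum(), 3))

def quartile_overlap(a, b):
    """Fraction of the narrower interquartile range shared by two samples (one possible definition)."""
    (a1, a3), (b1, b3) = np.percentile(a, [25, 75]), np.percentile(b, [25, 75])
    shared = max(0.0, min(a3, b3) - max(a1, b1))
    return shared / min(a3 - a1, b3 - b1)

# A character is treated as taxonomically useful for a pair of morphotypes when overlap <= 25%
char = X[:, 0]
print("diagnostic for groups 0 vs 1:",
      quartile_overlap(char[groups == 0], char[groups == 1]) <= 0.25)
```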
Distribution modeling Species distribution modeling (SDM) was used to evaluate the climatic niche and potential distribution of population complexes within the bent-beaked clade of the C. flava group. We analysed four population/species datasets: the two dwarf morphotypes confined to different mountain systems (Alps, Pyrenees-Cantabrian Range), and C. flava and C. lepidocarpa, the two widely distributed well-developed large morphotypes. The Atlas and Sierra Nevada population sets were not included in the analyses because of the low numbers of localities available (7 and 11, respectively). It must be noted that even though the dwarf morphotypes seem to be linked to mountain environments, the distinction between well-developed large and dwarf morphotypes is based on morphology, and not on altitude thresholds. Indeed, populations of large morphotypes have been found at altitudes as high as~2000 m in the Alps (e.g. C. flava in Risoul; C. lepidocarpa in Col du Lautaret; see S2 Appendix). For modeling analyses, we employed the maximum entropy algorithm, as implemented in Maxent v.3.3 [57], because it is appropriate for presence-only data and its good predictive performance has been demonstrated [53]. We retrieved a set of 19 bioclimatic variables at 30 seconds resolution under current conditions from the WorldClim website (www.worldclim.org; [58]). Layers were clipped to the extent of Europe and the Mediterranean region. To test for multicollinearity of variables, pairwise Pearson's correlation coefficients were calculated for a random sample of 1000 points of the study area generated using Hawth's Analysis Tools for ArcGIS [59]. We did not find any high correlation (r!0.70; S1 Table) among variables and therefore all 19 bioclimatic variables were included in the Maxent models. In the occurrence dataset, we included a total of 361 point localities (Fig 1), including 99 of C. flava, 118 of C. lepidocarpa, 59 of the dwarf morphotype from the Alps and 67 of the dwarf morphotype from Pyrenees-Cantabrian Range. These four sets of localities were modelled separately. We extracted occurrence information of each population group from herbarium specimens from ARAN, BM, GAP, JACA, FI, LEB, M, MA, MPU, MSB, NEU, P, RNG, SOM and UPOS. We also included localities listed in [60,61], which were considered taxonomically reliable sources. In the case of C. flava and C. lepidocarpa, we focused on the regions geographically close to the dwarf morphotypes, in order to accurately detect differences in niche requirements at a fine scale. Localities were geo-referenced using Google Earth, and only those with a precision higher than ± 4 km were selected. This precision represented a balance between the inclusion of a small number of highly accurate localities, and the inclusion of a large number of inaccurate localities. Localities are listed in the S1 Appendix. When running the analysis, each occurrence dataset was randomly split into training data (80%), used for model building, and test data (remaining 20%), used to evaluate model accuracy. Ten subsample replicates were performed of each analysis, and fitness of the resulting models was assessed with the area under the receiver-operating characteristic (ROC) curve [57]. A jackknife analysis was employed to evaluate variable contributions to the models. To convert continuous suitability values to presence/absence, we chose the threshold obtained under the maximum training sensitivity plus specificity rule, which has been shown to produce accurate predictions [62]. 
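Two steps of the modeling protocol above lend themselves to a short sketch: the collinearity screen over a random background sample (flagging variable pairs with |r| ≥ 0.70) and the conversion of continuous suitability to presence/absence with the maximum training sensitivity plus specificity threshold. The arrays below are random stand-ins for the WorldClim layers and the Maxent output, not the actual data.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Collinearity screen over 1000 random background points ---
# Columns stand in for the 19 bioclimatic variables sampled at the random points.
bioclim = rng.normal(size=(1000, 19))
r = np.corrcoef(bioclim, rowvar=False)
iu = np.triu_indices(19, k=1)
flagged = [(i, j, round(float(r[i, j]), 2)) for i, j in zip(*iu) if abs(r[i, j]) >= 0.70]
print("highly correlated pairs (|r| >= 0.70):", flagged)

# --- Maximum training sensitivity plus specificity threshold ---
# Suitability values at training presences and at background points (stand-ins for Maxent output).
suit_presence   = rng.beta(5, 2, size=80)
suit_background = rng.beta(2, 5, size=1000)

def max_sens_plus_spec_threshold(pres, back):
    """Threshold maximizing sensitivity plus specificity (background used as pseudo-absence)."""
    best_t, best_score = 0.0, -np.inf
    for t in np.unique(np.concatenate([pres, back])):
        sens = np.mean(pres >= t)          # true positive rate at threshold t
        spec = np.mean(back < t)           # proportion of background predicted absent
        if sens + spec > best_score:
            best_t, best_score = t, sens + spec
    return best_t

t = max_sens_plus_spec_threshold(suit_presence, suit_background)
print("presence/absence cut-off:", round(float(t), 3))
```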
We assesed the climatic niche overlap between the well-developed large and dwarf mountain morphotypes using environmental-space (E-space) analyses based on Schoener's D values [63]. We evaluated overlap at two scales: (1) pairwise comparisons between the entire areas of considered groups; and (2) parwise comparisons at regional scale between areas where the morphotypes co-occur (Alps and Pyrenees-Cantabrian Range; Fig 1). The E-space was represented in a Principal Component Analysis (PCA) as implemented in the R package ecospat [64]. Niche overlap between morphotypes was represented by plotting together the E-space of each pair of morphotypes. We also performed tests of niche equivalency and niche similarity, which compare the observed niche overlap with the overlap obtained using random subsets of samples [65,66], as implemented in ecospat too [64]. As we were studying very closely related sets of populations, we explored niche conservation (the niche overlap is more equivalent/similar than expected by chance) between each compared pair of morphotypes using the option "greater". The niche equivalency test determines whether the observed niche overlap is constant when randomly reallocating the occurrences of both entities between their ranges. On the other hand, the niche similarity test checks whether the overlap between two niches is more similar than the overlap obtained if random shifts within each environmental space are allowed. When comparing the niche of a dwarf morphotype with that of C. flava or C. lepidocarpa, we set the dwarf morphotype niche as reference, and only allowed the well-developed morphotype niche to randomize (rand.type = 2). This allows testing the effect of shifts from the ancestral well-developed morphotype (see Results) into the derived dwarf morphotype's space [64]. When we compared between dwarf morphotypes, or between well-developed morphotypes, we allowed random shifts between the two areas (rand.type = 1). All tests were based on 100 iterations. Phylogenetic analyses A total of 31 individuals were sequenced for ITS, rps16 and 5'trnK (S1 Appendix). Most sequences (24 individuals) were taken from a previous phylogenetic study [18]. Sampling was expanded by obtaining ITS, rps16 and 5´trnK sequences from seven additional individuals, mainly of populations from the Alps (S1 Appendix). The sampling consisted of 20 individuals from different populations of the four groups of dwarf morphotypes of sect. Ceratocytis and 11 individuals unequivocally ascribed to well-developed large morphotypes (C. demissa subsp. demissa, C. demissa subsp. cedercreutzii, C. flava, C. hostiana, C. lepidocarpa subsp. lepidocarpa, C. lepidocarpa subsp. jemtlandica and C. viridula). The well-defined taxa were chosen to represent the molecular variation of sect. Ceratocytis in Europe and North Africa detected in our previous study [18]. Carex castroviejoi was excluded due to its incongruent placement in nuclear and plastid phylogenies to avoid this conflicting signal, although this does not affect the topological relationships among the groups of populations studied here. Carex cretica was selected as outgroup in accordance with our previous reconstruction [18]. Herbarium specimens (M, MA) and silica-dried field-collected materials (vouchers deposited at UPOS) were included. Destructive sampling permission for DNA extraction was provided by these institutions. Total DNA was extracted using the DNeasy Plant Mini Kit (Qiagen, California). The PCR conditions and primers followed [18]. 
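Schoener's D and the randomization logic behind the similarity test above (implemented in practice with ecospat) can be illustrated as follows. The density grids are random placeholders for the smoothed occurrence densities in environmental (PCA) space, and the permutation shown is a simplified analogue of shifting one entity's niche while keeping the other fixed (in the spirit of rand.type = 2), not the ecospat procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def schoener_d(p1, p2):
    """Schoener's D between two occupancy grids, each normalized to sum to 1."""
    p1 = p1 / p1.sum()
    p2 = p2 / p2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

# Placeholder density grids over a 2-D environmental (PCA) space
dwarf = rng.random((50, 50))
large = rng.random((50, 50))
d_obs = schoener_d(dwarf, large)

# Simplified similarity-style test: keep the dwarf niche fixed and shift the
# well-developed morphotype's density randomly within its environmental space.
null = []
for _ in range(100):                       # 100 iterations, as in the analyses above
    shift = tuple(rng.integers(0, 50, size=2))
    null.append(schoener_d(dwarf, np.roll(large, shift=shift, axis=(0, 1))))
p_value = (np.sum(np.array(null) >= d_obs) + 1) / (len(null) + 1)

print(f"Schoener's D = {d_obs:.3f}, P(overlap greater than random) = {p_value:.3f}")
```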
Sequencing was carried out by Stab Vida (Caparica, Lisboa, Portugal). Sequences were edited using Seqed (Applied Biosystems, California). Only one informative indel was found and coded manually as an additional binary character. Bayesian phylogenetic analyses were conducted with MrBayes v.3.2 [67]. The model of sequence evolution that best fits the data was selected using the Akaike information criterion (AIC) in jModelTest [68]. Models were calculated independently for each plastid marker, and also for each ITS region (ITS1, 5.8S and ITS2). The indel was analysed under the F81 model [67]. Two parallel Markov Chain Monte Carlo were run for 10,000,000 generations with a sampling interval of 1000 generations. We applied a burn-in of 25% to ensure stationarity after checking with Tracer [69]. The remaining trees were summarized in a majority rule consensus tree, with posterior probability (pp) as the measure of clade support. We also performed a maximum likelihood (ML) analysis as implemented in RAxML 8 [70] to complement the Bayesian analysis. The analysis was partitioned and the coded indel was maintained as a binary character. One hundred non-parametric bootstrap replicates were performed to assess topology uncertainty. Ancestral morphotype reconstruction To analyze morphotype shifts in the course of diversification of the C. flava group, and to find out if the shared morphological traits are the result of convergent evolution, we reconstructed morphological ancestral states in Mesquite v.3.0 [71]. We used trees obtained from the Bayesian phylogenetic analysis described above. Given the lack of clearly defined operational taxonomic units (OTUs) within the bent-beaked clade [18], we decided to select samples for the ingroup following these criteria: (1) two sets of morphotypes were considered, the well-developed large morphotypes (C. flava and C. lepidocarpa) and the dwarf morphotypes; and (2) for each morphotype we kept one sample per detected plastid haplotype. In this way, we representatively covered the detected genetic variation in each set of morphotypes, and minimized the random overweight of a particular morphotype over the others. Outside the ingroup, only one sequence per species was kept. All other samples were pruned from the Bayesian posterior distribution of phylogenetic trees prior to the reconstruction using Mesquite. Morphotype was coded as a qualitative trait with two character states: "well-developed large" (large plants, with all the reproductive diagnostic characters conspicuousy developed), and "dwarf" (small plants, with the reproductive characters reduced). The analysis was performed using the parsimony reconstruction method. In order to account for the uncertainty in tree topology, all pruned trees from the stable Bayesian posterior distribution were analyzed using the "Trace character over trees" option in Mesquite. Nomenclatural Acts The electronic version of this article in Portable Document Format (PDF) in a work with an ISSN will represent a published work according to the International Code of Nomenclature for algae, fungi, and plants, and hence the new names contained in the electronic publication of this PLOS ONE article are effectively published under that Code from the electronic edition alone, so there is no longer any need to provide printed copies. In addition, new names contained in this work have been submitted to IPNI, from where they will be made available to the Global Names Index. 
The IPNI LSIDs can be resolved and the associated information viewed through any standard web browser by appending the LSID contained in this publication to the prefix http://ipni.org/. The online version of this work is archived and available from the following digital repositories: PubMed Central, LOCKSS. Macromorphological variation The first two components showed eigenvalues higher than one. They accounted for 61.78% of the total variance (43.99% for PC-1; 17.79% for PC-2) in the dataset. Examination of the scatter-plot (Fig 3A) from these principal components revealed the partial overlapping of all dwarf mountain morphotypes. Plants from the Alps were displaced toward the highest scores of PC-1, whereas those from the Atlas and Sierra Nevada appeared toward the lowest values. Specimens from the Pyrenees-Cantabrian Range were displaced towards the highest values of PC-2. Morphological characters with the highest loadings were, in descending order, utricle length and utricle beak lenght for PC-1, and male spike width and lowest bract width for PC-2 (Table 3). Characters displaying less than a 25% overlapping threshold are displayed in Table 4 and Fig 4. The main diagnostic macromorphological characters that allow distinction among dwarf morphotypes are summarized in Table 5. Micromorphological variation SEM photographs revealed that the general features of the studied samples agree with previous studies on sect. Ceratocystis [45]. The inner anticlinal wall of the epidermic cells bears a large central ± conical silica body elevated on a narrower platform that is surrounded by a variable number of peripheral smaller silica bodies. The morphometric analyses found slightly different variation patterns among the dwarf morphotypes (Fig 3). The first two components showed eigenvalues higher than one. Principal component analysis of the micromorphological dataset (mPCA; Fig 3B) did not differentiate populations from different mountain ranges. Samples from Atlas plants were recovered in the periphery of the scatter-plot, displaced towards the lowest values of PC-1. Those from the Alps were placed along PC-2, showing relatively high PC-1 scores, whereas the samples from the Pyrenees-Cantabrian Range were located at lower values of PC-1. The specimens from Sierra Nevada were intermingled among the samples Table 4. Comparisons among the Carex flava group dwarf morphotypes from different mountain ranges based on morphological data. Characters listed display less than 25% overlap between dwarf morphotypes. Character abbreviations as in Table 3. Alps Atlas Micromorphological characters are given in italics and measured as ratios. Characters are abbreviated according to Table 3. from the Alps and Pyrenees-Cantabrian Range. The first two components accounted for a total variance of 60.67% (38.53% for PC-1; 22.14% for PC-2). Micromorphological characters with the highest loadings for each component were central body height and central body width for PC-1, and central body shape 1 and central body shape 2 for PC-2 (Table 3). Characters overlapping equal or below 25% in pairwise comparisons between dwarf morphotype groups are displayed in Table 4 and Fig 4. The main diagnostic micromorphological characters are summarized in Table 4. Distribution modeling The average distribution models (Fig 5) supported differences in potential distribution between the well-developed large and dwarf morphotypes. High values of the area under the ROC curve (between 0.9 and 1.0) were obtained. Carex flava and C. 
lepidocarpa displayed similarly widespread potential distributions mainly in Central Europe, where they spanned lowland and montane (below timberline) areas. Suitable areas were also detected in the Mediterranean peninsulas (Iberian, Italian, Balkan and Anatolian), but restricted to mountain areas. According to jackknife tests, the most informative variables for the models were bio18 (precipitation of warmest quarter) and bio17 (precipitation of driest quarter) for C. flava and C. lepidocarpa, respectively. For the dwarf mountain morphotypes, we inferred more restricted potential distributions mainly in southern European mountains: Alps (mainly for the Alps population set), Pyrenees (for both the Pyrenees-Cantabrian Range and Alps sets), and Cantabrian Range (mainly for the Pyrenees-Cantabrian Range set). Sierra Nevada and the Atlas were not recovered as part of the potential ranges of either the Alps or the Pyrenees-Cantabrian Range population sets. The potential areas of well-developed large and dwarf morphotypes overlapped at mid-altitudes in the Alps and Pyrenees. However, higher altitudes were inferred as only suitable for the dwarf morphotypes, while the adjacent lowlands were inferred as only suitable for the well-developed large morphotypes (Supplementary S1 Fig). Flattened, scarcely sharp, filling an area no more than a quarter of the platform surface; sometimes inconspicuous. Scarcely developed, flattened, sometimes a bit prominent and sharp. Well-developed, sharp to rounded, filling an area of approximately half of the platform surface. Flattened but sharp, filling an area no more than a quarter of the platform surface. Atlas Lowest bract (10) Table 6. The largest value of Schoener's D was obtained for the comparison of the whole ranges of well-developed morphotypes of C. flava and C. lepidocarpa (Fig 6, Table 6). On the contrary, the lowest value of D was obtained for the niche overlap between the two dwarf morphotypes. D values also revealed that when the whole ranges are tested, both well-developed morphotypes overlap more with the dwarf morphotype from the Alps than with the dwarf morphotype from Pyrenees-Cantabrian Range. Remarkably, comparisons of regional co-occurring areas display similar D values between well-developed morphotypes and the co-ocurring dwarf morphotypes ( Table 6). The niche similarity and equivalency tests revealed that, when comparing the entire areas of well-developed and dwarf morphotypes, these are significantly more similar than expected by chance, but not identical. The comparison between C. flava and C. lepidocarpa using their entire areas revealed that their niches were both more similar and more identical than expected at random. On the contrary, the comparison of the two dwarf morphotypes (Alps vs. Pyrenees-Cantabrian Range) retrieved that their niches are well differentiated, not meeting similarity or equivalency (Table 6). When the comparisons were restricted to populations co-occurring in the Alps or the Pyrenees-Cantabrian Range, the results were not significant for any of the pairwise comparisons, revealing differences between the niches of both co-ocurring well-developed morphotypes and the dwarf morphotypes, but also between populations of well-developed C. flava and C. lepidocarpa from the Alps (Table 6). Phylogenetic analyses Models selected for each DNA region were GTR+I for ITS-1, rps16 and 5'trnK, K80 for 5.8S, and GTR+G for ITS-2. 
Our Bayesian phylogenetic reconstruction using the concatenated matrix with the three markers (ITS-rps16-5'trnK) yielded a consensus topology (Fig 7) congruent with that found in [18]. The ML topology agreed with the Bayesian tree but with overall lower support values (Fig 7). The sect. Ceratocystis was divided in three main strongly supported clades: the C. hostiana clade (1 pp; 95 bs), sister to the other two clades; the straightbeaked clade (1 pp; 88 bs), which includes C. demissa and C. viridula; and the bent-beaked clade (0.99 pp; 70 bs) with all the remaining samples. Within the bent-beaked clade, three main well-supported subclades (A-C) were found, each containing a different set of welldefined taxa and dwarf morphotypes. Subclade A (1 pp; 69 bs) contained C. flava and the Table 6. Pairwise statistical tests for comparison of ecological niche overlap, niche equivalency, and niche similarity between different sets of populations of well-defined and dwarf morphotypes. In each case, the hypotheses tested were whether the niches are more equivalent/similar than expected by chance. Statistical significance is represented by p-values, and marked with an asterisk (*) when significant (p<0.05). rt1 = rand.type1, both ranges were allowed to shift; rt2 = rand.type2, the dwarf morphotype range was fixed, and only the range of the well-developed morphotype was allowed to shift (see Materials and methods for details). Ancestral morphotype reconstruction Ancestral morphotype was unambiguously inferred as "well-developed" (100% of trees) for the most recent common ancestor (MRCA) of Carex sect. Ceratocystis and also for the MRCA of the bent-beaked and straight-beaked clades, and that of the straight-beaked clade (C. demissa and C. viridula) (Fig 8). However, the ancestral morphotype was mostly equivocal (>50% of trees) for the MRCAs of the bent-beaked clade, C. flava (subclade A) and C. lepidocarpa B lineage (subclade B). For the C. lepidocarpa C lineage (subclade C), we obtained similar proportions of "dwarf" morphotype and equivocal reconstructions. Excluding the equivocal reconstructions, the most probable ancestral morphotype was inferred as "well-developed" for the whole bent-beaked clade and the C. flava and C. lepidocarpa B lineages, and "dwarf" for the C. lepidocarpa C lineage. Discussion Cut from the same cloth: Morphological convergence, a result of adaptation to mountain environments? Previous molecular phylogenetic analyses [18] already showed that different dwarf morphotypes had taxonomic affinities with at least two different lineages, suggesting their independent origins. Accordingly, our molecular data revealed that mountain populations of the C. flava group are genetically heterogeneous and belong to three different lineages. Macromorphological studies not focused on the dwarf mountain plants have shown that differences between species, when tested in well-developed plants, are taxonomically significant [21,23,26,27,72]. However, our macromorphological study focused on dwarf mountain plants show that the differences between these populations are subtle or inexistent. This result explains previous proposals that mountain dwarf morphotypes constitute a single taxon [30,31] despite the molecular evidence to the contrary. The close resemblance of morphotypes from the Alps and the Pyrenees-Cantabrian Range is especially striking. 
Even though populations belong to three different lineages (Fig 7), they show a wide degree of overlap in the MPCA (Fig 3A), with no macromorphological characters showing an overlap below 25% (Table 4). The phylogenetic relationships of the Alps and Pyrenees-Cantabrian Range dwarf morphotypes (Fig 7), together with their lack of macro-morphological differentiation ( Fig 3A; Table 4) and inferred history of morphological evolution (Fig 8), point to convergent evolution as the process behind their morphological resemblance. The dwarf morphotypes do not form a monophyletic group and are recovered within different lineages (Fig 7). Despite the uncertainty on ancestral state reconstruction (Fig 8), the morphotype with the highest probability was "well-developed" for the MRCA of the bent-beaked clade, C. flava and C. lepidocarpa B lineages. We interpret these results as suggesting the convergent acquisition of the dwarf morphotype by certain mountain populations of these two species. Morphological convergence induced by similarly harsh environmental conditions is frequently found in mountain plants (Table 1), and sometimes produces identical adaptational responses in different groups [8,11,15], even resulting in adaptative radiations [9,13]. The selective pressures in mountain habitats seem to modify the well-developed large growth forms found in lowland environments, leading to similar morphologies regardless of the ancestral genotype (cf. [1]). The results of niche overlap, equivalency and similarity indeed point in this direction. The niche similarity but not equivalence between well-defined morphotypes and dwarf-morphotypes when tested using the whole ranges (Table 6) indicates that all of them are closely related plants that share ancestral ecological features, but also display underlying differences [73]. When the comparisons were restricted to well-defined and dwarf Results are summarized on the 50% majority rule consensus tree obtained from the Bayesian phylogenetic analysis. Pie charts at nodes summarize the results of the parsimony optimization conducted over the posterior distribution of trees from the Bayesian analysis. Each chart shows the proportion of trees for which a given morphotype was reconstructed for that node: dwarf (black), well-developed (white), and equivocal (gray). Terminals were pruned to leave only one representative per morphotype and haplotype, as explained in Materials and Methods. morphotypes co-occurring in a same mountain range, there is an overall lack of similarity and equivalence ( Table 6), indicating that their regional niche spaces are not interchangeable. This is consistent with a role of ecological differentiation as a driver of morphological differentiation. The potential distributions inferred by our modeling analyses for dwarf populations indeed retrieve higher-altitude habitats that are colder and more exposed than those preferred by their lowland, paraphyletic counterparts (S1 Fig; see also [29]), which suggests the dwarf habit as an adaptation to the harsh conditions including strong winds of high mountains. The effect of the geographic restriction on the significance of the comparisons further points to the role of mountain environments in the acquisition of dwarf morphotypes. Niche similarity between well-developed and dwarf morphotypes was only found when tested over the entire Euro-Mediterranean region, but not when the analyses were restricted to a particular mountain range. 
This implies that certain populations of well-developed morphotypes inhabiting non-mountainous areas are experiencing bioclimatic conditions that are to some extent similar to those of the dwarf morphotypes inhabiting mountain ranges. However, the adaptative response of these well-developed plants did not entail the evolution of a dwarf morphotype. The potential areas of the Alps and Pyrenees-Cantabrian Range dwarf population sets partially overlap, mainly in the Pyrenees. In contrast, very small areas of the Cantabrian Range are recovered as potential habitat for the Alps set, and very small areas of the Alps are recovered as potential habitat for the Pyrenees-Cantabrian Range set (Fig 5C and 5D). Accordingly, the equivalency and similarity tests did not reveal significance for the comparison between dwarf morphotypes from different areas ( Table 6). These differences may be the result of the slightly different climatic conditions of these mountain ranges. Incipient divergence: Subtle differences in the southernmost populations as a possible result of isolation The populations from Sierra Nevada and the Atlas, which are the most isolated of the complex, show a certain degree of morphological differentiation from the other dwarf morphotypes. A previous morphological exploration comparing dwarf and well-developed morphotypes already reported the morphological distinctiveness of the plants from Sierra Nevada and the Atlas. Such differentiation contrasted with the clinal variation found between well-developed large lowland morphotypes of C. lepidocarpa and the dwarf mountain morphotype from the Pyrenees-Cantabrian Range [21]. Our study, including only dwarf morphotypes, shows that in the MPCA both population sets are slightly displaced towards the lowest scores of PC-1, forming the less dispersed set of samples (Fig 3A). In addition, for several macro-and micromorphological features (Fig 3A and 3B, Tables 4 and 6), the Sierra Nevada and Atlas plants showed less than 25% overlap with all other studied populations. This, together with the finding of diagnostic qualitative morphological features (utricle and glumes color: both dark brown in Sierra Nevada populations vs. glumes light-brown or hyaline and utricles greenish to yellowish in Atlas populations; Table 5), readily allows the morphological distinction of the Sierra Nevada and Atlas populations from each other, and also from the other dwarf morphotypes. In addition, the apparent absence of potential habitats in Sierra Nevada and the Atlas for the Alps and Pyrenees-Cantabrian Range population sets (Fig 5C and 5D) suggests that dwarf populations in these southern mountains could be adapted to climatic conditions that are somewhat different to those of their northern counterparts. Sierra Nevada and Atlas populations are both nested within clade C of C. lepidocarpa (Figs 7 and 8). The isolated geographic placement of the Sierra Nevada and Atlas populations strongly suggests a pattern of post-glacial south-to-north disruption of genetic exchange following global warming [74,75]. This is the process that appears to be behind the incipient morphological divergence of these southernmost populations from related counterparts. The process of divergence following isolation could be in their earliest stages in the Sierra Nevada and Atlas populations, with incipient morphological differentiation, but no differences yet in DNA sequences of the selected barcoding markers. 
Taxonomic implications In comparison with the relatively wide range of climatic conditions where the well-developed forms of C. flava and C. lepidocarpa may grow, there seems to be a relationship between the dwarf morphotypes of the C. flava group and the climatic conditions of the mountains they inhabit. These morphologically similar morphotypes have been interpreted as bridges of clinal variation among taxa [30,31], traditionally complicating the taxonomy of the group. Our analyses show that their depauperate morphology is probably related to their habitat, by means of recurrent convergent adaptation to high mountain environments. As a taxonomic summary, we propose that populations of the dwarf morphotype from the Alps should be considered within the variation of C. flava or C. lepidocarpa, depending on the population ( Table 2). Such taxonomic identity may be addressed only by means of genetic barcoding. Populations from the Pyrenees and Cantabrian Range should be considered within the variation of C. lepidocarpa (although they are known to have experienced some degree of introgression from C. demissa [18]). On the contrary, the morphogeographic compartmentalization of the Sierra Nevada and Atlas populations, together with their low phylogenetic differentiation (Fig 7), support a status of these population sets as infra-specific taxa within the same species [39]. Therefore, a separate subspecies rank appears to be suitable for them: C. lepidocarpa subsp. nevadensis in Sierra Nevada (see Table 2) and a new subspecies in the High Atlas, which is herein described (C. lepidocarpa subsp. ferraria, S3 Fig). Diagnosis-A subspecies similar to Carex lepidocarpa subsp. nevadensis (Boiss. & Reut.) Luceño, from which it differs by the paler female glumes that bear an apical scabrid mucro (vs. dark brown and without mucro in subsp. nevadensis), and the utricles light green (vs. dark brown in subsp. nevadensis). Conclusions The morphological resemblance among the mountain dwarf morphotypes of the Carex flava group seems to be at least partly the result of the recurrent convergent adaptation to harsh mountain environments in different lineages of the group. The subsequent underdevelopment of diagnostic morphological characters has contributed to this striking morphological resemblance of dwarf mountain plants, that led multiple authors to consider them as a single species different from all other taxa of the group. Supporting information S1 Appendix. Studied material. Letters or codes in brackets are indicated if samples were included in the macromorphological (M), micromorphological (m) or molecular study (ITS, 5'trnK and rps16 GenBank accession numbers); symbol à indicates new sequences obtained in this study; ×n indicates the number of samples included from the same population, if more than one was included. (DOCX) S2 Appendix. Point localities employed in the distribution modeling analysis. Each tab of the spreadsheet displays localities of a well-developed or dwarf morphotype of the studied species. Information for each locality includes country, location, longitude and latitude in decimal degrees, collector and herbarium where the voucher is deposited, or reference if the information comes from a bibliographic source.
2019-04-02T13:12:45.645Z
2017-12-27T00:00:00.000
{ "year": 2017, "sha1": "d8aef1ce8e35aecc64b48db8ee4dc8bd45ed6fe9", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0189769&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7ed9db2561fb74e1db2b7f9dc7b20c64ea241341", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253144590
pes2o/s2orc
v3-fos-license
A 27-MHz frequency shift keying wireless system resilient to in-band interference for wireless sensing applications A 27-MHz wireless system with binary frequency shift keying (BFSK) modulation at 400-kHz is reported. The receiver has been designed to handle in-band interference corrupting the BFSK signal with the use of complex filters and amplitude comparison method. The BFSK modulation is carried out with a voltage-controlled oscillator before up-converting with a 27-MHz local oscillator. The bipolar junction transistors (BJT-based) power amplifier with 30% efficiency pumps 220 mW into a spiral antenna. The inductive-degenerated low-noise amplifier with a voltage of more than 30 dB amplifies an incoming signal before feeding into a mixer for complex direct down conversion. With deliberate Gaussian interference injection, the minimum ratios between the signal with interference and the interference only at the distance of 2.5, 10 and 15 m are 3.3, 8.5 and 11.5 dB, respectively at a maximum data rate of 20 kbps. Without any interference included, the system can achieve a data rate of 40 kbps at the maximum transmission distance of 15 m. Conceptually agreed with the presented bit-error-rate (BER) analysis, the BER measurements with Gaussian and single-tone/two-tone in-band interferences also confirm superiority offered by the amplitude comparison method where the signal-to-noise ratio is at 1 dB for BER=10 -3 at 10 kbps (10 dB better than the phase detection counterpart). INTRODUCTION The frequency shift keying (FSK) modulation [1]- [4] plays a significant role in wireless communication applications for the present and future technological demands [5]- [9], [10]- [13]. In wireless sensor networks, millions of devices and sensors are seamlessly connected together over limited spectral bandwidth. The restricted spectrum bandwidth could practically make interference among these devices unavoidable. To avoid severe interference from an adjacent-channel image signal and to maintain receiver's power consumption at the minimum without circuit complexity, a direction-conversion or zero-intermediate frequency (zero-IF) architecture is usually selected in modern wireless receiver [14]- [16]. Although this direct-conversion receiver structure can effectively remove such image signal, an in-band interference still poses a major design challenge. This is a quite difficult scenario where the wanted and undesired signals are sitting inside the same frequency band and they cannot be separated by means of simple filtering. In order to withstand the in-band interference without any complicate coding and multiple-access methods, a simple FSK modulation can be employed for low-cost wireless sensor nodes since its frequency-division nature (i.e. the modulated carrier signal sits at different places on the spectrum according to the information values) could help combat the interference [13], [17]- [22]. More specifically, in the case of FSK modulation with a direct-conversion receiver, a phase comparison method has long been a popular choice as a data-bit extraction part of the FSK demodulation process owing to its simplicity [23]- [26]. This phase comparison technique usually employs a digital logic circuit. A small down-converted signal needs to be amplified and limited for logic operation. This thus makes the phase detecting method prone to interference causing a high bit-error rate unless special techniques are employed [24]- [26]. 
A key feature of the direct-conversion binary frequency shift keying (BFSK) receiver explored in this work is its ability to handle in-band interference when an appropriate demodulation and data bit extraction technique is exercised. Specifically, an amplitude comparison technique after complex filtering is investigated and compared with a well-established phase detection method. A mathematical bit-error-rate (BER) analysis shows that, compared with the phase detection technique, the frequency-to-energy conversion counterpart, which combines complex filtering with an amplitude comparison process, is markedly more tolerant of a significant level of in-band interference. A 27 MHz wireless system concept has been implemented and tested with low-cost discrete components to demonstrate this interference-resilient property of the studied FSK receiver. The BFSK modulator, power amplifier (PA), low-noise amplifier (LNA), up- and down-conversion mixers, as well as the core BFSK demodulators, have been constructed from easy-to-find integrated-circuit and semiconductor components to illustrate the promising versatility of the proposed architecture. The system has been thoroughly tested for both wire-line and wireless connections, where in-band interference (Gaussian, single-tone and two-tone) has been injected at the intermediate frequency (IF) band on the transmitter end. BER measurements and extensive experimentation under various interference conditions strongly suggest that the proposed receiver system significantly outperforms the phase-detection system at little extra cost in circuit complexity. The receiver system architecture and the proposed method are reviewed in section 2, where a bit-error-rate analysis is also carried out to compare the well-known phase detection technique with the amplitude comparison method. The important circuit building blocks employed in the proposed system are explained in section 3. Various experimentation methods and measured results are described and summarized in section 4 before the study is concluded in section 5.

SYSTEM ARCHITECTURE DESIGN AND PROPOSED METHOD
A conceptual spectrum diagram for directly down-converting a BFSK signal from a radio frequency (RF) band to 0 Hz (zero-IF) before extracting the digital data bits is depicted in Figure 1(a). The direct down conversion is done with a complex local-oscillator (LO) signal, S_LORx(t) = A_LO exp(jω_LORx t). This renders a complex binary signal around +Δω_1 and −Δω_0 representing bits "1" and "0", respectively (typically, |Δω_1| = |−Δω_0| = Δω), shown as the signal S_ZIF(t) at the bottom of Figure 1(a), so the main complex signal for the bit data will be

S_ZIF(t) = A exp(+jΔω_1 t) for bit "1" (1a)

and, correspondingly, S_ZIF(t) = A exp(−jΔω_0 t) for bit "0". By focusing only on the data signals of S_ZIF(t) in the time domain, with all the images and interference removed for simplicity as illustrated on the left of Figure 1(b) (corresponding to the "1" (red) and "0" (blue) bit spectrum), the data bit can be directly extracted by phase comparison between the I and Q signal parts, because of the different phase shift during different data bits [23]-[26]. Alternatively, this S_ZIF(t) can be passed on to two complex filters with their center frequencies at +Δω_1 and −Δω_0, where the data bits can be recovered by amplitude comparison, since a complex filter only passes a complex signal on one side of 0 Hz [27], [28]. In this work both bit extraction techniques are employed and compared under deliberate in-band interference injection.
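The zero-IF signal model in (1a) and the two bit-extraction routes can be illustrated numerically. The following sketch assumes illustrative parameter values (sample rate, bit rate and a ±10 kHz deviation, none of which are the hardware values used later), uses a crude moving-average lowpass in place of the actual complex/polyphase filters, and replaces the limiter-plus-D-flip-flop phase detector with a simple rotation-sign test; it is a conceptual sketch, not the implemented demodulator.

```python
import numpy as np

rng = np.random.default_rng(3)

fs, rb = 1_000_000, 20_000            # sample rate (Hz) and bit rate (bit/s), illustrative
dev    = 10_000                       # assumed FSK deviation (Hz): +dev for "1", -dev for "0"
sps    = fs // rb                     # samples per bit
bits   = rng.integers(0, 2, 200)

# Zero-IF complex baseband: S_ZIF(t) = exp(+j*2*pi*dev*t) for "1", exp(-j*2*pi*dev*t) for "0"
phase_step = 2j * np.pi * dev / fs * (2 * np.repeat(bits, sps) - 1)
s = np.exp(np.cumsum(phase_step))
s += (rng.normal(size=s.size) + 1j * rng.normal(size=s.size)) * 0.5   # in-band noise/interference

def lowpass(x, n=64):                 # crude moving-average lowpass stand-in
    return np.convolve(x, np.ones(n) / n, mode="same")

t = np.arange(s.size) / fs

# (i) Amplitude comparison after "complex" (one-sided) filtering:
# shift +dev and -dev to DC, lowpass, then compare envelope energies per bit.
e_pos = np.abs(lowpass(s * np.exp(-2j * np.pi * dev * t)))
e_neg = np.abs(lowpass(s * np.exp(+2j * np.pi * dev * t)))
bits_amp = (e_pos.reshape(-1, sps).mean(1) > e_neg.reshape(-1, sps).mean(1)).astype(int)

# (ii) Phase detection: sign of the sample-to-sample rotation of the I/Q vector
# (a simplified stand-in for the limiter + D flip-flop detector).
rot = np.imag(s[1:] * np.conj(s[:-1]))
rot = np.append(rot, rot[-1])
bits_phs = (rot.reshape(-1, sps).mean(1) > 0).astype(int)

print("bit errors, amplitude comparison:", int(np.sum(bits_amp != bits)))
print("bit errors, phase detection     :", int(np.sum(bits_phs != bits)))
```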
The 27 MHz FSK wireless radio transmission architecture is illustrated in Figure 2, where Figures 2(a) and 2(b) show the transmitter and the receiver, respectively.

Transmitter
The 1-bit pseudo-random binary sequence (PRBS) signal modulates a 400-kHz carrier signal with a voltage-controlled oscillator before being up-converted by 27 MHz with a Gilbert mixer. The RF signal power (centered at 27.4 MHz) is boosted using a power amplifier to drive an antenna for RF electromagnetic radiation. Wideband Gaussian noise can be deliberately added to the FSK signal as interference for system evaluation.

Receiver
An incoming RF signal from an antenna is voltage amplified by a low-noise amplifier (LNA) before feeding into a down-conversion mixer to perform a direct down conversion with a complex local oscillator (LO) signal (27.4 MHz) and obtain the complex signal S_ZIF(t) as in Figure 1(a). This S_ZIF(t) is then channeled into two different paths: i) being filtered by two complex filters (+j/−j complex filters) whose center frequencies sit at +Δω/−Δω rad/s on opposite sides of 0 Hz prior to detecting the signal amplitudes and recovering data bits by amplitude comparison [also known as a frequency-to-energy conversion technique], and ii) being filtered by real lowpass filters before performing bit recovery by phase detection using a D flip-flop. The phase comparison technique looks simpler than its amplitude comparison counterpart owing to its lower circuit complexity. However, with a high level of in-band interference in the system, the phase detection technique can be much more severely disturbed, rendering incorrect data recovery. The reason is that the amplitude comparison technique takes signals from two complex filters, where the impact of any interference appearing at both complex filters can be greatly reduced by the comparison process. In the case of phase detection, however, the signal has to be amplified and limited before entering a phase detector such as a D flip-flop; this limiting step can be highly erroneous when interference is significant, especially with in-band random interference, where no simple kind of analog filter can be employed to remove it.

Bit-error-rate analysis and comparison for the two detection methods
A simplified BER analysis comparing the two techniques is carried out in the subsequent section to verify the aforementioned assumption. In this simple analysis it is assumed that the in-band interference gets through the complex filters and the lowpass filters in both types of bit detection method. Because of the number of circuit building blocks inside the presented receiver/demodulator architectures, the following BER analysis only serves for a comparison between the two bit-recovery techniques; it does not precisely represent the actual BER of the rather complicated demodulator structures or the actual wireless channel.

Phase detection with a D flip-flop (DFF) after lowpass filtering
To simplify the BER analysis, as illustrated in Figure 3, it is assumed that the Gaussian interference disturbs only the signal received from the I path at the DFF's data terminal, S_ID, as in Figures 3(a) and 3(b), while the signal from the Q path entering the DFF's clock node, S_QCLK, is clean; i.e., the source of bit error comes from S_ID only. Thus, this optimistic BER analysis of the phase detection technique reduces to detecting error bits from the signal S_ID, which is a fairly standard BER calculation [29].
For a single supply system the logic signal switches between 0 and V_A; the probability density function (pdf) of the Gaussian-interfered S_ID is as depicted in Figure 4, and the BER of the phase detection technique, BER_PD, can be expressed as [29]

BER_PD = Q(V_Th/σ_n),

where σ_n is the interference's rms voltage and V_Th (= V_A/2) is the threshold voltage level for the logic decision. Q(x) is widely known as a Q function and its value can only be found by approximation [30].

Amplitude comparison after complex filtering
For simplification, it is again assumed that uncorrelated Gaussian interferences are present at both inputs of the comparator, S_CP and S_CN, after the complex filtering and amplitude detection process, as in Figure 5. Figure 5(a) shows how the interferences enter the comparator, while Figure 5(b) displays the disturbed time-domain signals. The first two graphs in Figure 6 show the probability density functions (pdf), P_SCP(x) and P_SCN(x), of the signals S_CP and S_CN at the comparator's inputs. For a single supply system, it is assumed that the pre-amplitude-detected signal swings between V_A and 0, rendering the ideal amplitude-detected/rectified voltage levels at αV_A and 0, where α is a rectification factor with 0 < α ≤ 1. The comparator mathematically performs a subtraction between S_CP and S_CN before limiting the difference; if the difference (S_CP − S_CN) is greater (or lower) than zero, bit "1" (or "0") is detected. Therefore, the BER can be computed from the pdf of S_CP − S_CN, i.e., P(S_CP − S_CN), which can be seen as the convolution of P_SCP(x) and P_−SCN(x) (the pdf of the −S_CN signal). Using the proof developed in [31], [32], with P_SCP(x) and P_SCN(x) taken as Gaussian pdfs centered at the ideal rectified levels, P(S_CP − S_CN) can be expressed in closed form when bit "1" is sent, and similarly when bit "0" is sent; these pdfs, P(S_CP−S_CN)@"1"(x) and P(S_CP−S_CN)@"0"(x), are also shown in the last graph of Figure 6. The BER is the total probability of erroneous bit detection. For simplicity, taking σ_CP = σ_CN = σ_n, the BER of the amplitude detection technique, BER_AD, reduces accordingly. Specifically, if the amplitude detection process can manage 1/2 of V_A, i.e. α = 1/2, then BER_AD = 0.5 Q(0.5 V_A/σ_n), which is still an improvement by a factor of two as compared to the phase detection technique. Note that under a single supply system with square-wave signaling the ratio (0.5 V_A/σ_n) is technically a signal-to-noise ratio (SNR), but if the signal under consideration is sinusoidal, (0.5 V_A/σ_n) is instead equal to 2SNR. A comparison plot of BER as a function of SNR between BER_PD and BER_AD for a square-wave signaling scenario with α = 1 is illustrated in Figure 7. It is important to note that the SNR under consideration in these analyses is at the inputs of the bit extraction circuitries (a comparator or a phase detector) and not at the actual input of the demodulator. Moreover, the presented BER analysis does not take into account correlation of the noise/interference signals at the comparator's and the phase detector's inputs. However, these simple BER graphs can serve for a comparison between the two bit extraction techniques. If the interferences at the comparator's inputs are correlated (they actually are, to a certain extent), the BER graphs would be better than those in Figure 7.

A power amplifier (PA)
For simplicity, a class-A PA as in Figure 8(a) is employed [15]. The main bipolar junction transistor (BJT) is biased with a current mirror to allow a large Vce voltage swing, at the cost of PA power efficiency (PE) being wasted in the current mirror.
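The comparison between BER_PD and BER_AD above can be evaluated numerically with the Gaussian Q-function. The sketch below fixes α = 1/2, the case stated explicitly above; the more general dependence of BER_AD on α noted in the code comment is an assumption consistent with that case, not a formula taken from this analysis.

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    """Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

VA, alpha = 1.0, 0.5            # logic swing and rectification factor (alpha = 1/2 case from the text)

for sigma_n in [0.5, 0.35, 0.25, 0.2, 0.15]:     # interference rms voltage, illustrative values
    ratio = 0.5 * VA / sigma_n                   # the (0.5*VA/sigma_n) ratio discussed above
    ber_pd = qfunc((VA / 2) / sigma_n)           # phase detection: Q(V_Th/sigma_n), V_Th = VA/2
    ber_ad = 0.5 * qfunc(2 * alpha * ratio)      # assumed generalization; equals 0.5*Q(0.5*VA/sigma_n) at alpha = 1/2
    print(f"0.5*VA/sigma_n = {ratio:4.2f}   BER_PD = {ber_pd:.2e}   BER_AD = {ber_ad:.2e}")
```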
In this work, a discrete KSP10 is used for the BJT. The antenna impedance is transformed to 25 Ω (instead of 50 Ω) for the PA's output load so that more power can be delivered to the antenna given the limited voltage swing (ideally 2VCC peak-to-peak). Assuming a sinusoidal voltage swing at the load, the ideal maximum power delivered to the load RL is V²CC/(2RL). The PA's efficiency and maximum output power are plotted against input frequency in Figure 8.

Low-noise amplifier (LNA)
The class-A inductively degenerated low-noise amplifier of Figure 9(a) is employed with a KSP10 [15], [16]. The LNA's s-parameters s11 and s21 with respect to a 50 Ω reference system are shown in Figures 9(b) and 9(c), respectively. An s11 of −10 dB extends widely from 15 to 40 MHz, well covering the operation of this wireless transmission system. The voltage gain has also been measured, as depicted in Figure 9.

A polyphase filter and an envelope detector
A 3rd-order RC polyphase filter, as shown in Figure 10 [33], [34], has been used for complex filtering, followed by a simple 2nd-order RC lowpass filter in the receiver as part of the amplitude comparison technique for data bit extraction. Note also that a differential 5th-order RC passive lowpass filter has been used for the phase detection method. The BJT-based amplitude detector circuit in Figure 11 (developed from [35], [36]) has been used for amplitude detection with a single supply of 5 V. Discrete transistors BC547 and BC558 have been employed in this work.
Figure 11. A BJT-based amplitude detector circuit

EXPERIMENTATION, RESULTS AND DISCUSSION
The complete system has been tested in both wire-line and wireless setups. The two aforementioned bit recovery techniques have been extensively compared. Measured results are described here.

Wire-line system test
Without the PA and the LNA involved, the output of the Tx's up-conversion mixer has been directly connected to the input of the Rx's down-conversion mixer. The time-domain results are shown in Figures 12(a) and (b) and 13(a) and (b). In Figure 12, with no interference added to the modulated signal, both bit extraction techniques work correctly, and the phase detection technique does win on the basis of system simplicity and slightly lower power consumption. To test the interference resilience of the system, Gaussian noise has been deliberately added to the BFSK signal in front of the up-conversion mixer as indicated in Figure 12(a). This interference (together with the BFSK carrier) is also translated to be well inside the RF transmission band around 27.4 MHz. The results are illustrated in Figure 13: with in-band interference at a significant level, the amplitude comparison technique with two complex filters can still operate correctly, while its phase detection counterpart fails and continuously produces erroneous recovered bits. On the right side of Figure 13(a), we can see that the phase shift between the I/Q signals has been severely disturbed, and this leads to incorrect bit recovery. It is important to note that the injected interference, with frequency around 400±50 kHz (in-band), severely degrades the bit extraction functionality of the phase detection method.

Wireless system test
The transmitter (Tx) and the receiver (Rx) are separated by some physical distance with a clear line of sight, as shown in Figure 14(a). The double-sideband spectrum at the power amplifier's input is depicted in Figure 14(b). The lower sideband signal and the 27-MHz LO leakage are clearly visible.
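As a side note to the PA discussion earlier in this section, the ideal class-A output-power figure quoted there is easy to check numerically; the supply voltage below is an assumed value, and the efficiency line states only the textbook class-A limit rather than a measured result.

```python
# Quick check of the ideal class-A output power into the transformed load.
VCC = 5.0          # supply voltage (assumed)
RL = 25.0          # transformed antenna load from the text, in ohms

p_out_max = VCC ** 2 / (2 * RL)        # ideal peak sinusoidal output power
print(f"Ideal max output power into {RL:.0f} ohm: {p_out_max * 1e3:.0f} mW")

# Textbook class-A drain efficiency at full swing is 25%; the current-mirror
# bias of the presented PA dissipates extra power, so the measured PE is lower.
print("Ideal class-A efficiency at full swing: 25%")
```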
These unwanted signals will not be strongly suppressed by the PA and LNA due to their rather low quality factors. However, this is not a serious issue owing to the direct-conversion receiver architecture, where these undesired out-of-band interferers can be easily removed by the baseband real lowpass filters or the complex filters. Figure 15 demonstrates the operation at a distance of 10 meters with and without interference; Figures 15(a) and 15(b) show the phase and amplitude detection results, respectively. Both bit recovery techniques perform correctly under a low interference level, as shown on the left side of Figures 15(a) and 15(b). On the right-hand side, the figures show how a high in-band interference level can severely corrupt the data extraction process using phase detection, while the amplitude comparison method after the two complex filters can still function correctly. The result suggests that a smart receiver could alternately select an appropriate bit recovery method according to the present interference level so that the trade-off between bit-error rate (BER) and power consumption can be well balanced. The minimum corrupted-signal-to-in-band-interference+noise ratio (cSibINR), measured at the outputs of the complex filters, is the smallest ratio between the power of the modulated signal corrupted by the interference+noise, PSig±inf_noi, and the power of the in-band interference+noise without the modulated signal, Pibinf_noi, that still allows the receiver with the complex-signal amplitude-comparison technique to perform correctly at a 20-kbps data rate while its phase detection counterpart practically fails; this cSibINR can be expressed by (18). Figure 13(b) indicates how the cSibINR of the complex-signal amplitude-comparison technique can be measured for a wire-line test. From the experiment, the phase detection technique always fails at the minimum cSibINR levels recorded for the amplitude comparison technique. This result confirms its inferiority in a highly interfered environment. Plots of the received power level measured at the LNA's input and of the minimum cSibINR against the transmission distance are shown in Figure 16. Figure 16(a) shows the LNA's input power, while Figure 16(b) shows the minimum cSibINR.

Bit-error rate measurement
The BER has been measured with a wire-line connection setup where the transmitter is directly connected to the receiver, i.e., the power amplifier, low-noise amplifier and antennae have been omitted. The received data bits have been retimed, digitized and compared with their transmitted counterparts by means of digital logic processing on a field-programmable gate array (FPGA) (Xilinx Zybo Zynq-7000 [37]). The results are depicted in Figure 17 for both methods of bit extraction at 5, 10 and 20 kbps. Note that the signal-to-noise ratio (SNR) in this graph has been measured at the input of the receiver/demodulator (the receiver mixer's input), not at the comparator's or the phase detector's inputs as considered for the BER analysis in section 2.3. It is obvious that the amplitude-detection method offers much superior performance over its phase-detection counterpart (as theoretically predicted by the calculations in Figure 7), where the required SNR is 1 dB for BER = 10⁻³ at 10 kbps (10 dB better than the phase detection counterpart). Similar to [26], the tolerance to in-band interference has also been tested with the system being subjected to single-tone and two-tone in-band interferences around the modulating frequency of 400 kHz as shown in Figure 2(a).
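Since equation (18) is not reproduced in the text, the following short sketch shows how the cSibINR defined above would typically be evaluated from the two measured powers; the wattage values are placeholders for illustration only.

```python
# Minimal sketch of the cSibINR figure of merit (assumed dB formulation).
import math

def csibinr_db(p_sig_plus_inf_noise_w, p_inband_inf_noise_w):
    """cSibINR = P(signal corrupted by interference+noise) / P(in-band interference+noise), in dB."""
    return 10.0 * math.log10(p_sig_plus_inf_noise_w / p_inband_inf_noise_w)

# Example: -40 dBm corrupted-signal power vs. -48 dBm in-band interference+noise power.
print(f"cSibINR = {csibinr_db(1e-7, 1.58e-8):.1f} dB")
```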
The sensitivity results are plotted in Figure 18, where the signal-to-interference ratio (SIR) has been measured at BER = 10⁻³ at 10 kbps. The frequency offset is a frequency deviation from the modulating frequency (400 kHz) of a single-tone interference, as shown in Figure 18(a), or of the common frequency of the two-tone interferences, as shown in Figure 18(b); the two-tone frequency difference was fixed at 100 kHz. The SIR sensitivity level in Figure 18(c) is plotted against the frequency span from the two-tone common frequency (fixed at 400 kHz). From the measured results in Figure 18, the smaller SIR level at BER = 10⁻³ strongly suggests that the amplitude detection technique outperforms its phase comparison counterpart. Table 1 summarizes the performance of the transmission system.
Figure 18. Compared sensitivity at BER = 10⁻³ at 10 kbps: (a) single-tone interference, with the offset measured from 400 kHz, (b) the two-tone common frequency as offset from 400 kHz, and (c) a fixed two-tone common frequency with a symmetrical frequency span.
Table 1 (fragment): 11 dB (phase detection); for data rate = 20 kbps, 7.9 dB (amplitude detection) and 11.8 dB (phase detection).

CONCLUSION
A 27-MHz BFSK wireless radio system has been reported. The receiver employs direct conversion with complex filtering and amplitude comparison for recovering digital data. This helps make the receiver more tolerant to in-band or out-of-band interference as compared to the well-established phase comparison technique. The system has been successfully verified with measurements using off-the-shelf discrete components. In a future study, the number of components and the power consumption can be further reduced by employing a single complex filter. This will be integrated in a standard complementary metal oxide semiconductor (CMOS) technology and reported in a separate publication.
2022-10-27T15:08:07.444Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "3d3296c5504097f3d0a28bff734dfffc2fb12a5b", "oa_license": "CCBYSA", "oa_url": "https://ijece.iaescore.com/index.php/IJECE/article/download/26484/16185", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ea2cc09a382523ec93395163129a8fd7094d4420", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
153877482
pes2o/s2orc
v3-fos-license
Exponential Discounting Bias

We address intertemporal utility maximization under a general discount function that nests the exponential discounting and the quasi-hyperbolic discounting cases as particular specifications. The suggested framework intends to capture one important anomaly typically found when addressing the way agents discount the future, namely the evidence pointing to the prevalence of decreasing impatience. The referred anomaly can be perceived as a bias relatively to what would be a benchmark exponential discounting setting, and is modeled as such. The general discounting framework is used to address a standard optimal growth model in discrete time. Transitional dynamics and stability properties of the corresponding dynamic setup are studied. An extension of the standard growth model to the case of habit persistence is also considered.

Introduction
Typically, the benchmark utility maximization dynamic model takes a constant rate of time discounting and, thus, intertemporal discounting is modeled as being exponential. This is an analytically convenient assumption and it is logically consistent with the idea that a constant interest rate is often used to compare the value of money over time, for instance at the level of the evaluation of investment projects. However, there are psychological effects that must be taken into account when addressing intertemporal preferences. Such effects may have a huge impact on how we perceive the behavior of the representative agent in the context of conventional economic models since they tend to generate a departure relatively to exponential discounting. In Xia (2011) three types of time preference anomalies that imply a deviation relatively to the standard exponential discounting framework are identified. These relate to the timing of the evaluation, the magnitude of the reward, and the sign of the reward. The sign effect was first highlighted by Kahneman and Tversky (1979) and basically states that gains are discounted more than losses. The magnitude effect is a matter that has received increasing attention in recent literature (see Noor, 2011 and Bialaszek and Ostaszewski, 2012) and relates to the evidence that there is an inverse relation between the amount of the reward and the steepness of discounting over time, i.e., agents tend to be more patient when larger rewards are under evaluation. The most debated issue, though, is the one concerning changes in the degree of impatience as time elapses. This point relates essentially to the basic evidence that there is decreasing impatience over time -- human beings tend to place much more weight on the difference between a reward to be received (or a cost to be incurred) today or tomorrow than on the difference between two consecutive dates in the far future. Thus, the rate of discount that we apply when measuring the present value of some near in time outcome is typically much larger than the discount rate applied to a distant in the future event. This is also the same as saying that the discount rate decreases in time. Such type of phenomenon is known as hyperbolic discounting and it has been widely discussed at various levels in recent years. The discussion on the subject, from an economic point of view, has started with Strotz (1956) and Pollak (1968) and received influential contributions in the 1990s, with the work, among others, of Akerlof (1991), Laibson (1997, 1998) and O'Donoghue and Rabin (1999).
These authors have raised some fundamental questions: Does the popularity of exponential discounting come from its time consistency or from analytical tractability? How can one incorporate into economic models an operational notion of decreasing impatience? If preferences are truly present-biased, how does this relate to important behavioral issues as self-control or procrastination? Are agents aware of their own intertemporal preferences, so that they adopt sophisticated plans of action, or does unawareness lead to a naive interpretation about the future? These interrogations continue today to be a rich source of debate on behavioral economics and related fields. Part of the debate is still centered on justifying why hyperbolic discounting should be considered a rational way to form intertemporal preferences, more than exponential discounting. Prelec (2004), Dimitri (2005), Drouhin (2009), Farmer and Geanakoplos (2009), and Gollier (2010) argue that hyperbolic discounting is time consistent and rational. Decreasing impatience in a stochastic environment allows for a formal proof of such claim. Other authors are more skeptical about how hyperbolic discounting is being approached in the literature. While there is a tendency to search for analytical discount functions that may allow for an elegant treatment of economic models, one should take into account arguments as the ones by Rubinstein (2003) and Rasmussen (2008), who believe that modifying functional forms does not answer the main questions posed by the apparent lack of rationality in economic behavior. As stated by Ariel Rubinstein, a deeper understanding of intertemporal human decisions requires opening the black-box of decision making more than changing slightly the structure of the model used to address human behavior. Other relevant contributions in the field of hyperbolic discounting relate to the generalization of the concept and the exploitation of the corresponding implications. In Bleichrodt, Rohde, and Wakker (2009) the commonly used discount functions are modified in order to account for other kinds of time inconsistency in the formation of preferences besides decreasing impatience. Specifically, the proposed framework accommodates the possibilities of increasing impatience and strongly decreasing impatience. Also Benhabib, Bisin, and Schotter (2010) present a general version of the discount function, that contemplates the most common specifications of exponential and hyperbolic discounting found in the literature. The powerful notion of hyperbolic discounting, and its most common specification in economics -- Laibson's quasi-hyperbolic discounting concept -- have been applied to study a wide range of relevant economic issues. Just to cite a few, we highlight the contributions of Gong, Smith, and Zou (2007), Barro (1999) and Coury and Dave (2010) on the implications of non-exponential discounting to economic growth. In this paper we generalize the quasi-hyperbolic discounting setting and apply the new framework of intertemporal preferences to a standard discrete time optimal growth problem. The setup differs from other approaches on the subject because we relate the shape of the discount function to issues of financial literacy, following the analysis on the exponential growth bias as developed by Stango and Zinman (2009) and Almenberg and Gerdes (2011).
Our argument is that in the same way people tend to underestimate future values of variables that grow at constant rates, individuals also tend to overestimate close in time values (relatively to the ones more distant in the future) when discounting them to the present. This reasoning allows us to present a discount function that is flexible enough to characterize different degrees of hyperbolic discounting and to nest the exponential discounting case as a possible limit outcome. The proposed specification of intertemporal preferences is analytically convenient to address a discrete time optimal growth model. It enables us to derive explicit stability conditions and it serves to compare different degrees of deviation from the constant discount rate benchmark. Additionally, we extend the model to include habit persistence in consumption in order to demonstrate the flexibility of the exponential discounting bias concept when used in different settings. The remainder of the paper is organized as follows. Section 2 discusses in detail the notion of exponential discounting bias, relating it with financial literacy issues. In section 3 this concept is applied to compare different possibilities in terms of hyperbolic discounting. Section 4 approaches utility maximization under the general specification for intertemporal preferences. Section 5 sets up the growth model and analyzes the underlying dynamics. In section 6, an extension is explored; namely, the model is adapted in order to account for habit persistence. Finally, section 7 concludes.

Anomalies in Financial Evaluation
Recently, Stango and Zinman (2009) and Almenberg and Gerdes (2011) have carefully analyzed the evidence that points to a tendency to underestimate the future value of a given variable that grows at a constant rate. This exponential growth bias clearly exists in practice, for instance in what concerns household financial decision making. The mentioned literature emphasizes the link between the extent of the bias and the degree of financial literacy. A poor ability to perform basic calculations and the lack of familiarity with elementary financial concepts and products will, in principle, imply a wider gap between individuals' calculations and the true future values, i.e., there is a negative correlation between financial literacy and the exponential growth bias. Well informed agents will be able to understand the basic notion of capitalization and to perceive the exponential path followed by any value that accumulates over time. However, many studies have been discovering serious flaws in the understanding, by the average citizen, of simple financial concepts and mechanisms. This was highlighted by Lusardi (2008) and Japelli (2010), among others. Financial literacy or, more precisely, the lack of it, can explain the kind of deficiency that consists in linearizing an exponential series in time. The important argument concerning the lack of ability to accurately address the value of money in time is that incorrect answers are biased. As emphasized by Almenberg and Gerdes (2011), individuals are almost twice as likely to underestimate the correct amount as to overestimate it. Thus, on the aggregate it makes sense to state that in a society where a given degree of financial illiteracy exists, the future values of a series that grows at a constant rate will be underestimated. Exponential growth bias will then be common when assessing the future value of an investment that offers a return at a given annual constant interest rate.
It is reasonable to conceive the existence of a link between the interest rate and the rate of time preference. In Farmer and Geanakoplos (2009, pages 1, 2), this link is explained in simple terms: 'A natural justification for exponential discounting comes from financial economics and the opportunity cost of foregoing an investment. A dollar at time s can be placed in the bank to collect interest at rate r, and if the interest rate is constant, it will generate exp(r(t − s)) dollars at time t. A dollar at time t is therefore equivalent to exp(−r(t − s)) dollars at time s.' To understand how financial illiteracy might contribute to deviate agents' preferences from exponential discounting, we just need to make the inverse path to the one that is present in the evaluation of the exponential growth bias, i.e., if individuals tend to underestimate future values when assessing them in the present, they will certainly overestimate current values when thinking about them as if they were taking decisions at some future time moment. In analytical terms, the idea of exponential growth bias is commonly presented as FV = PV·exp(θrt), where FV is the future value, PV the present value, r the interest rate, t is time and θ ∈ (0, 1) measures the magnitude of the bias. If one wants to address the present value given the future value, we just need to rearrange the previous expression and write it as PV = FV·exp(−θrt). The above relation implies decreasing impatience. Far in the future outcomes are much less valued than the ones occurring in the near future. Now the bias works in the opposite direction -- near in time results are overestimated. We can call this effect exponential discounting bias, and we may define it as the tendency to overestimate close in time values of a variable that grows at a constant rate. The exponential discounting bias will be bigger the larger is the extent of financial illiteracy and it constitutes an alternative explanation of why preferences in time tend to imply hyperbolic discounting: agents want to select a constant rate of time preference, namely a rate of time preference that follows the interest rate path, but their ability to undertake the proper computations is biased, in such a way that far in time values are less considered than the ones near the current period. Taking into consideration the notion of exponential discounting bias can be an analytically convenient way of approaching departures from strict exponential discounting. According to the distinction introduced by O'Donoghue and Rabin (1999) between naive and sophisticated agents, the discussed bias puts us closer to the naive evaluation of intertemporal preferences in Akerlof (1991) than to the sophisticated behavior that is implicit in Laibson's (1997, 1998) analysis. In this context, a sophisticated person will know exactly what the respective future selves' preferences will be, while naive individuals are not able to realize that as time evolves, preferences will evolve as well. As a result of the understanding that a bias on discounting cannot be perceived by the agent, since it is the outcome of an anomaly on an otherwise intended constant discounting behavior, the representative agent in the models of the following sections will display a clearly naive behavior. Therefore, she will not be concerned with the possibility of tomorrow selves choosing options that are different from the ones chosen today.
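To fix ideas on the two biases just described, the short sketch below contrasts the true exponential values with the biased ones. The functional forms FV = PV·exp(θrt) and PV = FV·exp(−θrt) are the reconstructions used above, and the numerical values of r, θ and PV are purely illustrative.

```python
# Illustration of the exponential growth bias (underestimated future values)
# and the exponential discounting bias (overestimated present values).
import math

r, theta, PV = 0.05, 0.6, 100.0        # assumed illustrative values

for t in (1, 10, 30):
    fv_true = PV * math.exp(r * t)                 # true capitalization
    fv_perceived = PV * math.exp(theta * r * t)    # growth bias: FV underestimated
    pv_perceived = fv_true * math.exp(-theta * r * t)  # discounting bias: PV overestimated
    print(f"t={t:2d}  true FV={fv_true:7.2f}  perceived FV={fv_perceived:7.2f}  "
          f"perceived PV of true FV={pv_perceived:7.2f}")
```

For long horizons the perceived present value of the true future amount exceeds the actual 100 invested today, which is the overestimation of near-in-time values relative to far-in-time values discussed above.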
Since people are not aware of their own time inconsistency, it is legitimate to consider a dynamic optimal control problem where the representative agent maximizes at a given date t = 0 her future utility, and thus to design an optimal plan where the present bias exists but the agent acts as if it did not exist. In short, the analysis in this paper finds support in two logical arguments: First -- Individuals desire to turn intertemporal preferences compatible with the opportunity cost of money. This is the benchmark time consistent behavior that the rational agent would like to adopt; Second -- Lack of a solid financial literacy eventually introduces a biased evaluation of intertemporal preferences, that makes the representative agent act as if she were an exponential discounter, when in fact she is not.

Departures from Exponential Discounting
In order to account for decreasing impatience, Loewenstein and Prelec (1992) have proposed the following hyperbolic discount function: D_H(t) = (1 + αt)^(−β/α), where α and β are two positive parameters. This discount function implies a decreasing discount rate: short-term discount rates are higher than long-term discount rates. Empirical evidence suggests that this is a much more appropriate and realistic way to approach intertemporal preferences than just considering a constant discount rate over time. While empirically more suitable, hyperbolic discounting, considered as modeled above, is much less tractable from an analytical point of view than exponential discounting. Because of this, Laibson (1997, 1998), based on a previous formalization by Phelps and Pollak (1968), has proposed an approximation to hyperbolic discounting, that he dubbed quasi-hyperbolic discounting; this is straightforward to apply to the standard dynamic optimization models of economists. The discount function takes the following form: D_QH(t) = 1 for t = s and D_QH(t) = β̂·δ̂^(t−s) for t > s, with s the time period in which the future is being evaluated; β̂ ∈ (0, 1), δ̂ ∈ (0, 1). Note that in the limit case β̂ = 1 we are back at exponential discounting. As in the hyperbolic case, the quasi-hyperbolic discount function captures the idea that discount rates decline with the passage of time. Laibson proposes, in his studies, a small exercise to compare discount rates in each of the settings. He considers exponential discounting (β̂ = 1, δ̂ = 0.97), quasi-hyperbolic discounting (β̂ = 0.6, δ̂ = 0.99), and hyperbolic discounting (α = 10⁵, β = 5 × 10³) and draws a graph where it is evident that D_QH(t) generates a time trajectory that is considerably closer to D_H(t) than the one originating in plain exponential discounting. In the previous section, it was stated that the absence of a stable impatience level over time may be interpreted as an anomaly, something similar to the tendency that individuals have to linearize a series of values that accumulate at a constant rate (and, hence, truly exhibit an exponential path). In the proposed setting, this anomaly should be considered in the reverse way, i.e., if individuals tend to linearize exponential trajectories for the future, when discounting values to the present they will exacerbate the exponential nature of the series under analysis. In this context, we will consider exponential discounting, D_E(t) = δ^(t−s), δ ∈ (0, 1), but we add the possibility of an error of evaluation that increases short-run impatience, generating a kind of hyperbolic discounting. Let ε(t) be the anomaly term, which transforms D_E(t) into a discount function with an exponential bias, i.e., D_EB(t) = δ^((1+ε(t))(t−s)).
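A quick numerical comparison of the discount functions just defined is given below. The parameter values follow the Laibson-style example quoted above (as reconstructed from the text); the exponential-bias function D_EB(t) is omitted because the explicit expression of the anomaly term ε(t) is not recoverable here, so only the exponential, quasi-hyperbolic and hyperbolic cases are evaluated.

```python
# Comparison of exponential, quasi-hyperbolic and hyperbolic discount factors.
import numpy as np

t = np.arange(0, 51)                               # periods since s (s = 0)

delta_e = 0.97                                     # exponential: D_E = delta**t
d_exp = delta_e ** t

beta_qh, delta_qh = 0.6, 0.99                      # quasi-hyperbolic (Laibson)
d_qh = np.where(t == 0, 1.0, beta_qh * delta_qh ** t)

alpha_h, beta_h = 1e5, 5e3                         # hyperbolic (Loewenstein-Prelec)
d_h = (1.0 + alpha_h * t) ** (-beta_h / alpha_h)

for k in (0, 1, 5, 15, 50):
    print(f"t={k:2d}  D_E={d_exp[k]:.3f}  D_QH={d_qh[k]:.3f}  D_H={d_h[k]:.3f}")
```

Even this small table reproduces the qualitative point made above: the quasi-hyperbolic series tracks the hyperbolic one far better than plain exponential discounting over the first periods, but drifts away from it as the horizon lengthens.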
Function ε(t) will take a form governed by two parameters, θ0 and θ1, and decaying as the distance t − s grows. The assumption of D_EB(t) as the discount function has two advantages. On one hand, it allows for an intuitive explanation of why we depart from exponential discounting. There is an error of evaluation by the agents; perhaps they want to adopt a constant discount rate but, relatively to the periods that are closer in time, they do not have the capacity to make an objective evaluation of their priorities. As time goes by, such ability evolves and, in the long run, the error in evaluation is much smaller. On the other hand, we introduce a more general and flexible approach to time discounting than the one underlying D_QH(t); as we will see below, the values of δ, θ0, and θ1 can be chosen in such a way that we obtain an approximation to D_H(t) that is undoubtedly better than the one provided by quasi-hyperbolic discounting. We consider θ0 ∈ [0, 1] and θ1 ≥ 0. Naturally, exponential discounting holds for θ0 = θ1 = 0, while quasi-hyperbolic discounting is also a particular case of the more general setting provided by D_EB(t), for an appropriate correspondence between (β̂, δ̂) and (δ, θ0, θ1). Recover Laibson's example and consider the following parameter values for the exponential bias discount function: δ = 0.97, θ0 = 0.95, and θ1 = 23. Figure 1 displays a graph that is similar to the one in the original Laibson's analysis (50 periods are considered and hyperbolic and quasi-hyperbolic discount functions are displayed; pure exponential discounting is ignored in the displayed figure). To this figure, we add the exponential bias case for the parameter values that were chosen. It is evident that the new function generates results that offer a much better fit with the hyperbolic discount function than the ones generated by the quasi-hyperbolic case. After 15 periods there is almost a perfect match between D_EB(t) and D_H(t) (although, if we introduced additional periods -- after 50 -- we would start to see a departure of one of the series relatively to the other; nevertheless, this widening gap would never be as pronounced as the one regarding quasi-hyperbolic discounting).
Intertemporal utility is given by U_s(c) = u(c_s) + Σ_{t=s+1}^{∞} D(t)·u(c_t), equation (1). Equation (1) represents the utility in the current period, t = s, from consuming today and in all future moments from t = s + 1 to an undefined future date. The term u(c_s) is current consumption utility; the instantaneous utility function obeys conventional properties of continuity, smoothness, and concavity. Future utility is taken into account for all possible time moments but discounting implies that a larger weight is put on closer in time consumption opportunities. The discount function that we will consider is the one involving the exponential bias, D(t) = D_EB(t). We can take the same sequence of utility functions, but now initiating one period later; this becomes U_{s+1}(c), equation (2), with the discount function evaluated taking s + 1 as the current period. Taking into account U_s(c) and U_{s+1}(c) as presented above, we can address intertemporal utility in a recursive form. The corresponding expression, equation (3), is straightforward to obtain from the simultaneous consideration of (1) and (2), under exponential discounting bias. Now we denote time by t instead of s, in order to reflect that the important issue is that we are considering two consecutive time periods, independently of which the first in fact is. The resulting expression is analytically useful, because one can apply to it, directly, dynamic programming techniques, in order to obtain optimal solutions.
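The additively separable structure of (1) is easy to evaluate numerically for any discount function D(t). The sketch below is only an illustration of that structure: the consumption path, horizon and the exponential discount factor used in the example are assumptions, not values taken from the paper.

```python
# Sketch of the intertemporal utility in (1) for a generic discount function D(t).
import math

def utility_stream(c_path, D, s=0, u=math.log):
    """U_s = u(c_s) + sum_{t>s} D(t) * u(c_t), truncated at the end of c_path."""
    total = u(c_path[0])
    for k, c in enumerate(c_path[1:], start=1):
        total += D(s + k) * u(c)
    return total

# Example: 1% consumption growth, plain exponential discounting with delta = 0.97.
c_path = [1.0 * 1.01 ** k for k in range(200)]
print(f"U_0 = {utility_stream(c_path, lambda t: 0.97 ** t):.3f}")
```

Replacing the lambda with a quasi-hyperbolic or exponential-bias discount function changes only the weighting of the same consumption stream, which is the comparison the paper exploits.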
Consider a simple budget constraint according to which a representative agent accumulates financial wealth (a_t) at a constant rate (r), besides receiving a constant labor income w. This constraint, equation (4), is a_{t+1} = w + (1 + r)a_t − c_t, a_0 given. The problem the representative agent will want to solve consists in maximizing utility subject to (4). It is crucial to remark, at this stage, that the intertemporal problem is solved under the implied assumption that the representative agent is naive. As discussed in section 2, we are not concerned with the tendency to procrastinate that an individual with decreasing impatience might display, because she will never realize that her intertemporal preferences are, in fact, not constant over time. However, one must also highlight that the inability to understand how the future is effectively being discounted does not constitute an obstacle to the adoption of an optimal behavior; the agent solves an optimality problem and chooses the consumption path that best serves her purpose, which is the maximization of intertemporal utility. Put in other words, besides the budget constraint, the agent also faces a literacy constraint that affects the evaluation of time discounting; given these two constraints, the agent acts rationally by solving the dynamic optimization problem she faces. Financial illiteracy is not an impediment to the adoption of an optimizing behavior, although it can change the outcome of the problem at hand. Solving the maximization problem requires defining a value function V(a_t) satisfying the corresponding Bellman equation. The associated first-order conditions, (6) and (7), together with the transversality condition, characterize the optimum. Combining the two optimality conditions, (6) and (7), one obtains an equation of motion for consumption. In order to simplify the analysis, take a logarithmic utility function, u(c) = ln c. For this functional form, a difference equation for consumption, expression (8), is computed. Expression (8) might be rewritten taking as endogenous variable the ratio ρ_t := c_{t+1}/c_t, which yields equation (9). From equation (9), we can determine the steady-state value of the ratio between two consecutive values of consumption.
Proposition 1. The typical intertemporal optimization problem of the representative agent under exponential discounting bias has two equilibrium points: ρ* = δ^(1−θ0)(1 + r) and ρ* = 0.
Proof. Solve (9) for ρ := ρ_{t+1} = ρ_t.
The found values have direct correspondence, in the exponential case, with the solutions ρ = δ(1 + r) ∨ ρ = 0. Observe that the steady-state values are larger the wider the discounting bias is, meaning that the deviation from exponential discounting promotes a faster steady-state growth of consumption. This is the obvious result of taking a discount function with a corresponding steady-state value, δ^(1−θ0), which is higher than the benchmark constant discount factor δ. Note that the two solutions have a different nature: the first one is unstable and the second one is stable. This is precisely the same stability outcome as the one achieved under exponential discounting. Since consumption is a control variable, the representative agent has the possibility of selecting the unstable solution as the long-run path of consumption (it is the solution that allows for positive growth, namely if the rate of discount is lower than the interest rate). Therefore, the long-run growth rate of consumption is γ = δ^(1−θ0)(1 + r) − 1; the larger the value of θ0, the more consumption will grow in the steady state.

The Setup
We now characterize the dynamics of a neoclassical growth model under exponential discounting bias.
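Before turning to the full growth model, the steady-state relation stated in Proposition 1 can be checked with a few lines of code. The sketch below uses the reconstructed expression ρ* = δ^(1−θ0)(1 + r), which collapses to the textbook ρ* = δ(1 + r) when θ0 = 0; the interest rate is an assumed illustrative value.

```python
# Steady-state consumption-growth ratio under the exponential discounting bias
# (assumed reconstructed form rho* = delta**(1 - theta0) * (1 + r)).
delta, r = 0.97, 0.05

for theta0 in (0.0, 0.5, 0.95):
    rho_star = delta ** (1.0 - theta0) * (1.0 + r)
    print(f"theta0 = {theta0:.2f}: rho* = {rho_star:.4f}, growth rate = {rho_star - 1:+.2%}")
```

The output illustrates the statement above: a larger θ0 pushes the effective long-run discount factor toward one and therefore raises steady-state consumption growth.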
The maximization problem is the same as in the previous section (and the same remarks on naive intertemporal preferences and on the ability to optimize even under eventual financial literacy flaws continue to be valid). However, the constraint of the problem differs. We take capital accumulation and a production function involving decreasing marginal returns. Let k_t represent the capital stock and assume the following parameters: A > 0 (technology index), a depreciation rate in (0, 1), and an output-capital elasticity α ∈ (0, 1). The resource constraint takes the form of equation (10). Again, we compute first-order conditions to encounter an optimal dynamic relation for consumption. If α = 1 (endogenous growth model with an AK production function), we end up with exactly the same dynamics as in the previous section (with r equal to A less the depreciation rate). Under decreasing marginal returns, we will be able to find constant steady-state values for both the state and the control variable, i.e., k_t and c_t. Repeating the same procedure to find the optimality conditions, we arrive at the difference equation for consumption, (11). Because one cannot address consumption dynamics independently of capital accumulation in the present setting, we end up with a system of three difference equations to be analyzed, system (12). Next, we proceed to the full characterization of the dynamics of system (12). This requires finding the steady state and looking at local dynamics.
Proposition 2. The steady state of the neoclassical optimal growth problem under exponential discounting bias corresponds to a unique equilibrium point (k*, c*).
The steady state is defined as the pair of values (k*, c*) such that k_{t+1} = k_t and c_{t+1} = c_t = c_{t−1}. Applying these conditions to system (12), it is straightforward to determine the values in the proposition. The steady-state value of the capital stock increases with the output-capital elasticity and with the value of the technology index. It falls with a larger depreciation rate. It is also straightforward to observe that a higher θ0 (stronger deviation relatively to exponential discounting) implies a larger long-run value for the capital stock; the same is true for the value of δ. As for parameter θ1, this has no influence over the steady-state values of the endogenous variables. We illustrate the results with a small numerical example. Let α = 1/3 and A = 1. Although discounting is important in terms of the dynamics of the growth model, we conclude that it has a limited impact on the steady state: parameter θ1 does not have any influence on the long-run equilibrium, while a change in θ0 disturbs the steady state slightly by making the discount factor change in the same direction. A larger discount factor is synonymous with increased patience, which benefits the economy in terms of long-run accumulated capital and consumption levels.

Local Dynamics
In order to address stability properties, one needs to linearize the system in the vicinity of the steady-state point. Computation leads to the linearized system and to the following result.
Proposition 3. The system is saddle-path stable. There exists one stable dimension in the three-dimensional space of the model.
Proof. The existence of one stable dimension implies that one of the eigenvalues of the Jacobian matrix locates inside the unit circle, while the other two fall outside the unit circle. Let the eigenvalues be λ1, λ2, λ3.
We want to prove that |λ1| < 1, |λ2| > 1 and |λ3| > 1. We start by presenting the trace, Tr, the determinant, Det, and the sum of principal minors, M, of the Jacobian matrix. It is straightforward to observe that Tr > 3, Det > 1 and M > 3. The constraint on the determinant implies that the eigenvalues are all positive or that λ1 > 0 and λ2, λ3 < 0. In this second scenario, the conditions involving the trace and the sum of principal minors imply λ1 > 3 − (λ2 + λ3) and λ1 < (3 − λ2λ3)/(λ2 + λ3); these inequalities cannot be simultaneously satisfied given the constraints on the values of the eigenvalues. Thus, the only feasible possibility is the one under which the eigenvalues are all positive: λ1, λ2, λ3 > 0. If all the eigenvalues are larger than zero, then the constraints involving the trace and the determinant allow us to perceive that full stability (all eigenvalues below one) is not a possible outcome. At least one eigenvalue must be larger than 1. Next, we resort to Brooks (2004) to identify how many eigenvalues effectively fall inside the unit circle. According to the mentioned author, an evaluation of the characteristic polynomial allows us to state that if condition −(1 + M) < Tr + Det < 1 + M is met, there exists one real eigenvalue λ1 of magnitude less than 1 and either: - a pair of complex conjugate eigenvalues λ2, λ3 = a ± ib, with |a ± ib| < 1; - two more real eigenvalues of magnitude less than 1; or - a pair of real eigenvalues of magnitude greater than 1 and having the same sign. Since we have remarked that at least one of the eigenvalues is larger than 1, the only possibility that can hold from the three above is the last one. Thus, if the displayed double inequality is satisfied, we confirm that 0 < λ1 < 1 and λ2, λ3 > 1. It is straightforward to verify the validity of the condition since it is equivalent to −(Tr + Det + j) < Tr + Det < Tr + Det + j, with j a positive combination of the model's parameters. Therefore, we confirm the existence of a single stable dimension in the three-dimensional space of the assumed system.
The above result can be illustrated through a numerical example. Recover the benchmark values for the exponential discounting bias case, i.e., δ = 0.97, θ0 = 0.95, θ1 = 23. With saddle-path stability, we have a result that is qualitatively similar to the one of the original Ramsey model with a constant discount rate. There is convergence towards the unique steady-state point, along a one-dimensional stable path; this trajectory is followed because the representative agent has the possibility of adapting her initial consumption level in order to place the system on the stable path, since consumption is a control variable. The expression of the stable trajectory can be presented in general terms, as stated in Proposition 4.
Proof. The saddle-path stable trajectory can be obtained by computing the eigenvector p = (p1, p2, p3) associated with the eigenvalue inside the unit circle. The slope of the contemporaneous relation between consumption and capital is given by the ratio p2/p1, i.e., c_t − c* = (p2/p1)(k_t − k*), which corresponds to the expression in the proposition.
As in the original Ramsey model, the convergence relation between capital and consumption is of positive sign; thus, values of both variables are likely to simultaneously increase towards their long-term values. Note that this is a generic stable path expression that can be applied to specific forms of the discount function, namely the quasi-hyperbolic case and also the pure exponential discounting case.
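The eigenvalue count and the saddle-path slope described in the proof above are straightforward to compute numerically. The sketch below applies the method to a textbook two-dimensional discrete-time Ramsey linearization (log utility, α = 1/3, A = 1, depreciation 0.1, discount factor 0.97); it is not the paper's three-dimensional Jacobian, which is not reproduced in the text, but it shows the same procedure: count eigenvalues inside the unit circle and read the stable-arm slope off the corresponding eigenvector.

```python
# Saddle-path check: eigenvalues inside the unit circle and stable-arm slope p2/p1.
import numpy as np

# Jacobian of a standard discrete-time Ramsey model around its steady state
# (assumed illustrative calibration, not the paper's system).
J = np.array([[1.0309, -1.0000],
              [-0.0256, 1.0248]])

eigvals, eigvecs = np.linalg.eig(J)
inside = np.abs(eigvals) < 1.0
print("eigenvalues:", np.round(eigvals, 3), "| inside unit circle:", int(inside.sum()))

# Stable trajectory c_t - c* = (p2/p1) * (k_t - k*), from the stable eigenvector.
p = np.real(eigvecs[:, int(np.argmax(inside))])
print("stable-arm slope p2/p1 =", round(p[1] / p[0], 3))
```

With this calibration the routine reports one eigenvalue inside the unit circle (about 0.87) and a positive stable-arm slope (about 0.16), the same qualitative picture as in the proposition: capital and consumption rise together along the convergence path.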
Observe, as well, that another stable trajectory emerges from the analysis: one can also relate consumption at t − 1 to the capital stock at t. This convergence relation is also of positive sign. Let us return to the numerical example: the stable trajectory takes the form c_t − c* = s(k_t − k*), with a slope s of the order of 10⁻². In the QHD case, the saddle path is steeper than in the EDB case. If we take EDB as a closer approximation to pure hyperbolic discounting, one possible error in using QHD consists in achieving a larger change in consumption as the capital stock evolves than the one that should, in fact, be obtained. Next, we consider the constant discount rate case. This is the case for which θ0 and θ1 are zero. As θ1 approaches zero, the eigenvalue lower than 1 approaches 0.91799. Thus, the stable trajectory is c_t − c* = 0.11294(k_t − k*). This case departs even more from the hyperbolic discounting case and thus the relation between k and c is even steeper. The above results point to the conclusion that the further we are from the exponential discounting case, the less consumption will vary in the convergence towards the steady state.
Habit persistence is modeled through the consideration of the utility function in (14). According to (14), when b = 0, we have the conventional version of the model without habit persistence; a positive b indicates that utility is directly dependent on how much more the individual consumes today, relative to consumption in the previous period. As is obvious, the larger the value of b, the stronger is the habit persistence effect. Constraint c_t > b·c_{t−1} must hold in order to guarantee a feasible solution. Consider the problem in section 4, relating utility maximization under the exponential discounting bias, subject to resource constraint (4); the problem now takes the corresponding form with the habit-persistence utility. Following the same procedure for the computation of first-order conditions as before, one arrives at the dynamic equation of consumption, which is equivalent to equation (17).
Proposition 5. The optimal control problem of the representative household under exponential discounting bias and habit persistence has three steady-state points.
Proof. By taking ρ := ρ_{t+1} = ρ_t, one transforms difference equation (17) into a quadratic equation with coefficients ℓ1 := 1/δ^(1−θ0) and ℓ2, ℓ3 given by combinations of b, r, δ and θ0. The solutions of the equation are ρ = [ℓ2 ± √(ℓ2² − 4ℓ1ℓ3)]/(2ℓ1) ∨ ρ = b, which correspond to the ones in the proposition.
Comparing with the problem without habit persistence, we have now an additional solution, ρ = b, but the other two remain exactly the same. Again, the representative agent chooses the path that opens the door to a possible positive steady-state growth rate of consumption, i.e., γ = δ^(1−θ0)(1 + r) − 1. If we include the habit persistence feature in the growth model with capital accumulation, the result is that, once more, this extension does not interfere with the steady-state outcome and with the stability properties of the system. The steady state continues to correspond, as before, to a saddle-path stable equilibrium. Under habit persistence, system (12), which characterized growth dynamics, now takes the form of a four-dimensional set of relations, system (18). In the steady state, c* := c_{t+1} = c_t = z_t = v_t and k* := k_{t+1} = k_t. The evaluation of (18) under the previous conditions leads to exactly the same outcome as the one in Proposition 2. Habit persistence does not change the unique equilibrium point towards which the economy converges in the long run. Stability can be addressed as in the case b = 0.
We present the linearized system (19) in the steady-state vicinity, but we do not pursue a generic discussion on the signs of the eigenvalues. Instead, we just characterize stability resorting to a small example, where j is the same combination of parameters as in section 5 and ĵ is its counterpart under habit persistence. For these values, independently of the degree of habit persistence (i.e., of the value of b), one finds, for the Jacobian matrix in (19), a pair of eigenvalues inside the unit circle. Therefore, habit persistence does not change the stability result previously found: saddle-path stability holds, which implies that the representative agent will be able to control the consumption path in order to place the system on the stable arm, through which the economy converges towards the steady state.

Conclusion
People do not evaluate future outcomes as if they were computers or calculators. Measuring the future value of some current event or the present value of some future event is many times an intuitive process in which individuals engage. In the same way there is evidence of an exponential growth bias, according to which individual agents tend to linearize the sequence of accumulated future outcomes, we can conceive a kind of exponential discounting bias, according to which we may explain the evidence that points to decreasing impatience and that is analytically translated in the concept of hyperbolic discounting. The notion of exponential discounting bias is more general than the one commonly used by economists to characterize observed intertemporal preferences, i.e., the notion of quasi-hyperbolic discounting. This allows for a flexible analysis, where we can shape the trajectory of the discount factor in the way we find more reasonable in order to be as close as possible to what evidence reveals. Furthermore, the new specification has appealing features from an analytical tractability point of view: because the bias originates in a misperception about how to evaluate the future that does not introduce any kind of sophistication in individual behavior, i.e., any kind of ability to understand that the perception of the future will change as time evolves, the optimization model can be approached similarly to what is done in the exponential discounting case. When assessing the dynamics of an intertemporal representative consumer growth model in discrete time, the exponential discounting bias assumption has allowed us to construct a three-dimensional dynamic system, from which it is straightforward to analyze steady-state properties and transitional dynamics. The analysis makes it possible to proceed with a thorough characterization of how different intertemporal preferences may shape the optimal relation between capital accumulation and consumption. The exponential discounting bias concept is adaptable to other features of the benchmark utility analysis. Specifically, in this paper, one has explored the implications of introducing habit persistence into the utility function; the conclusion is that steady-state and stability results remain basically the same when the new assumption is taken into consideration.
2022-05-29T06:21:16.935Z
2012-01-01T00:00:00.000
{ "year": 2019, "sha1": "e4d0a6f61e3ee497c1c34909ff6c60259bde017a", "oa_license": "CCBY", "oa_url": "https://repositorio.iscte-iul.pt/bitstream/10071/8022/5/12-05.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "149f87926a540aa1b32dee4e4c98bffecab60aa4", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
74603649
pes2o/s2orc
v3-fos-license
Nasal Septal Angiofibroma in Pregnancy
Extra nasopharyngeal angiofibroma (ENA) is a term used for fibrous nodules that are located outside the nasopharynx. Location of angiofibromas outside the nasopharynx is rare. In addition, the septum is an extremely rare area of involvement. While 11 nasal septum-originated cases have been reported to date in the literature, no septal angiofibroma during pregnancy has been reported yet. We are presenting a nasal septum-originated angiofibroma case in a 26-year-old pregnant woman, together with literature data.

INTRODUCTION
Juvenile nasopharyngeal angiofibroma (JNA) is a neoplasia that is histologically benign, locally aggressive, non-encapsulated, extremely vascularized and commonly located in the nasopharynx (1). Although it is the most common benign tumor observed in the nasopharynx, its incidence is 0.5% among all head and neck tumors. It usually originates at the trifurcation of the palatine bone, the superior margin of the sphenopalatine foramen, which is formed by the horizontal ala of the vomer, and the root of the sphenoid pterygoid process (2). JNA may cause fatal complications, such as intracranial invasion and bleeding. Extranasopharyngeal angiofibroma (ENA) is a term used for fibrous nodules that are located outside the nasopharynx. Clinical manifestations of extranasopharyngeal angiofibroma are very different from the manifestations of nasopharyngeal angiofibroma. ENA is a very rare entity and septum involvement is also extremely rare (1). We are presenting a case of a 26-year-old pregnant woman, who was admitted to our outpatient hospital with a nasal septum-originated angiofibroma, together with the literature data.

CASE
The 26-year-old female patient was admitted to our outpatient hospital in the fifth month of her pregnancy, with the complaint of recurrent epistaxis for 1 month. On anterior rhinoscopy, a small bleeding ulcerated lesion, originating from the right nasal septum, was detected. Results from punch biopsy of the mass were reported as "capillary hemangioma". No major bleeding was observed after the biopsy. Treatment was planned for after the delivery, but the patient was admitted to our clinic three months later with recurrent epistaxis and severe nasal congestion on the right side. On the anterior rhinoscopy and endoscopy performed, a hemorrhagic mass 2-2.5 cm in diameter, which filled almost all of the right nasal passage and protruded out of the nostril, was detected. Bleeding was controlled with an anterior pack. As delivery would occur in two weeks, the patient was taken to follow-up. Approximately one month after the birth, examination of the patient revealed a mass in the right nasal cavity, which originated from the anterior nasal septum and had diminished to 0.8-1 cm. No protrusion of the mass into the nasopharynx or paranasal sinuses was present. Routine laboratory assessments were unremarkable except for anemia. The mass was removed en bloc, with mucosa and periosteum, under local anesthesia. Intra-operative bleeding was minimal. The reported histopathology result stated angiofibroma (Figure 1). No residual or recurrent sign was detected in post-operative examinations.

Numerous theories on the origins and development of angiofibromas have been reported (developmental, hormonal and genetic disorders) (1,6). Hiraide and Matsubara suggested that these tumors originate from the periosteum of the perpendicular plate, which is located in the ethmoid bone where the fascia basalis is located (6). Akbas et al. suggested that these tumors originate from ectopic tissue, as the anterior nasal septum does not contain fascia basalis (1). We suggest that hormonal factors may have a key role in our case, the reason being that the tumor developed during pregnancy and it regressed after birth. ENA is clinically distinctive from JNA. Reasons for admission to hospital are generally epistaxis and slowly developing nasal congestion on one side (1,3,4). In contrast to JNA, ENA is seen in elderly women, its symptoms are fast progressive, and it has less hypervascularity (3,4). Computed tomography and magnetic resonance imaging are useful methods to determine the localization and spread of ENA. ENA has less vascularisation, hence contrast retention is moderate or absent compared to JNA (1,4,15). For its differential diagnosis, lobular capillary hemangioma (LCH or pyogenic granuloma), angiomatous polyps, neurofibromas and hemangiopericytoma should be considered (16,17). LCH must be suspected in pregnant women. In the review study of el-Sayed, 7 of 12 LCH cases originated from the septum and one of these lesions was initially misdiagnosed as angiofibroma; according to the re-assessment results, the diagnosis was corrected to LCH (17). In our case, LCH was diagnosed with the first biopsy and, after the total excision, the histopathology of the tumor was reported as angiofibroma (Figure 1). Tumor excision is the appropriate treatment option for angiofibromas. The role of preoperative embolization in ENA treatment is not clear. Somdas et al. reported positive results of preoperative embolization prior to excision of a septal angiofibroma (11). Castillo et al. reported an autoamputation of a septum-originated angiofibroma for which an operation had been planned (12). As a result, ENA must be considered in the differential diagnosis of vascular tumors and it must be kept in mind that the septum has a potential for these kinds of tumor localization. It should also be kept in mind that the clinical presentation of ENA is distinctive from JNA. The best treatment option is excision.
2017-10-18T15:01:01.156Z
2012-10-10T00:00:00.000
{ "year": 2012, "sha1": "05c63b1cd22f4c7af3ad321fa180681a7129986f", "oa_license": "CCBY", "oa_url": "https://www.ejgm.co.uk/download/nasal-septal-angiofibroma-in-pregnancy-7001.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "05c63b1cd22f4c7af3ad321fa180681a7129986f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246289429
pes2o/s2orc
v3-fos-license
Effect of the Seasonal Climatic Variations on the Accumulation of Fruit Volatiles in Four Grape Varieties Under the Double Cropping System The double cropping system has been widely applied in many subtropical viticultural regions. In the 2-year study of 2014–2015, four grape varieties were selected to analyze their fruit volatile compounds in four consecutive seasons in the Guangxi region of South China, which had a typical subtropical humid monsoon climate. Results showed that berries of winter seasons had higher concentrations of terpenes, norisoprenoids, and C6/C9 compounds in “Riesling,” “Victoria,” and “Muscat Hamburg” grapes in both of the two vintages. However, in the “Cabernet Sauvignon” grapes, only the berries of the 2014 winter season had higher terpene concentrations, but lower norisoprenoid concentrations than those of the corresponding summer season. The Pearson correlation analysis showed the high temperature was the main climate factor that affected volatile compounds between the summer and winter seasons. Hexanal, γ-terpinene, terpinen-4-ol, cis-furan linalool oxide, and trans-pyran linalool oxide were all negatively correlated with the high-temperature hours in all of the four varieties. Transcriptome analysis showed that the upregulated VviDXSs, VviPSYs, and VviCCDs expressions might contribute to the accumulations of terpenes or norisoprenoids in the winter berries of these varieties. Our results provided insights into how climate parameters affected grape volatiles under the double cropping system, which might improve the understanding of the grape berries in response to the climate changes accompanied by extreme weather conditions in the future. INTRODUCTION The grape double cropping system has been applied widely in many subtropical regions (Favero et al., 2011;Chou and Li, 2014). The traditional single cropping system seems not applicable in these regions because of the excess heat resources and the heavy rainfall in the summer season. The excessive rainfall and temperature during the grape ripening period can easily cause insufficient fruit ripeness and fungal infections. Moreover, relatively high temperature in winter does not meet the low-temperature requirements of the grapes for their normal dormancy, which results in uneven bud bursts in the spring (Favero et al., 2011). However, if the dormant buds were forced out of dormancy early during the current season, the double cropping system could be achieved (Chen et al., 2017). Even in winter, there was still adequate temperature and sunlight for the berry ripening in the subtropical viticulture regions, making the double cropping system more commercially adopted. There were two advantages of applying the double cropping system in these regions: (1) in the summer season, the grape berries could ripen earlier than the normal single cropping system, which could avoid the intense rainfall and heatwave as much as possible; and (2) in the winter season, cool climate and less rainfall usually led to better grape quality (Xu et al., 2011;Chen et al., 2017). In the double cropping system, bud break was usually enforced between late January and mid-February in the northern hemisphere, resulting in the first bloom in early April and the first crop in June or July. Vines were then pruned and forced again around mid-August, resulting in the second bloom of the side shoots in mid-September and the second crop in mid-January of the following year (Chou and Li, 2014). 
Even in the one-crop-ayear culture system, berries composition could vary significantly in different vintages (Downey et al., 2006). In the double cropping system, the climate variations between the summer season and winter season were greater than the single cropping system, which led to great variations in grape qualities. The winter berries were usually considered more favorable for wine production than the summer berries. Junior et al. (2017) showed that the higher values of yield, cluster weight, and titratable acidity (TA) were observed during the summer growing season, whereas the higher values of soluble solids content and pH were observed during winter, which suggested that the grapes harvested during the winter show physicochemical characteristics more suitable than those observed during the summer crops for winemaking purposes in Brazil. However, the winter berries usually had lower cluster weights than the summer berries, thus leading to a lower yield in the winter season (Mitra et al., 2018). Some previous researches reported that the fruitfulness of the second crop of some cultivars, such as "Summer Black, " was much worse in some subtropical areas (Guo et al., 2018). Some plant growth inhibitors, such as chlormequat chloride (CCC), were usually used to promote inflorescence induction to enhance fruitfulness. For grape secondary metabolites, the phenolic compositions were the focus of many researchers in dissecting the variations between the summer and the winter berries in the double cropping system (Xu et al., 2011;Chen et al., 2017;Zhu et al., 2017;Cheng et al., 2019). Similar results were found by previous studies that phenolic compounds, including anthocyanins, flavonols, and flavan-3-ols, were significantly higher in the winter season berries than in the summer season berries. Chen et al. (2017) showed that winter season berries greatly triggered the expression of the upstream genes in the flavonoid pathway in a coordinated expression pattern. However, other secondary metabolites were little studied in the double-cropping system, such as volatile compounds. Volatile compounds are critical secondary metabolites in grapes, which play an essential role in their sensory evaluations. Aromas derived from grapes mainly include norisoprenoids, terpenes, C6/C9 compounds, methoxypyrazines, etc. (Wang et al., 2020). Terpenes and norisoprenoids have low sensory thresholds and pleasant flavors (Fenoll et al., 2009). Grapes of the Muscat family usually have abundant terpenes, which contribute to their intense varietal flavors. Commonly identified terpenes in grapes include a rose oxide, geraniol, nerol, linalool, terpineol, and citronellol, which contribute to the typical rose and floral aroma (Fenoll et al., 2009). 1,1,6-Trimethyl-1,2dihydronaphthalene (TDN), β-damascenone, and β-ionone are common C 13 -norisoprenoids that contribute to fruity, violet, and petrol aromas to grapes and wines (Black et al., 2015). The abundant C6/C9 compounds in grapes contribute to a typical "green leaf " aroma, so they are also called green leaf volatiles (GLVs) (Kalua and Boss, 2010). The metabolic pathways of these volatile compounds are complicated, and many of them are still not quite clear until now. 
Mevalonic acid (MVA) and 2-methyl-D-erythritol-4-phosphate phosphate (MEP) pathway, carotenoid metabolism, and oxylipin pathway were the most investigated pathways, which could synthesize terpenes, norisoprenoids, and C6/C9 compounds, respectively (Kalua and Boss, 2009;Miziorko, 2010;Lashbrooke et al., 2013). The grape aromas were not only affected by the varieties and their development periods but also affected by the climate factors, such as temperature and light. In general, cluster exposure was beneficial for the accumulation of terpenes, whereas shading would reduce terpene concentrations (Bureau et al., 2000;Zhang et al., 2017). Grapes in cool-climate regions usually had higher C6 aldehyde concentrations, whereas warm-region grapes usually had higher terpene concentrations (Wen et al., 2015;Xu et al., 2015b). Rainfall and irrigation also affected the aroma accumulation in grapes. Regulated deficit irrigation during the berry development would promote the accumulation of terpenes (Savoi et al., 2016). In a previous study, we investigated the variations of ripening progression and flavonoid metabolism in Cabernet Sauvignon (CS) and Riesling (R) grapes under the double cropping system (Chen et al., 2017). In the present study, the aroma characteristics in grapes under the double cropping system were furthermore investigated. Moreover, the two wine grape varieties, another two table grape varieties, Muscat Hamburg (MH) and Victoria (V), that occupied a good market in South China were also investigated. There were significant climate variations between the summer and winter seasons, and the corresponding aroma variations were also found in all of the four varieties under the double cropping system. This study helped us to understand better how climate parameters affected grape volatiles under the double cropping system, which might improve the understanding of the grape berries in response to the climate changes accompanied by extreme weather conditions in the future. Furthermore, the feasibility of applying a double-cropping system in viticulture in South China could be evaluated. Experiment Site and Double Cropping System The 2-year (2014)(2015) study was performed at Guangxi Academy of Agricultural Sciences located in South China (22 • 36 N-108 • 14 E, elevation 104 m). The climate belonged to a subtropical humid monsoon climate with abundant sunshine and heat resources. Vines were trained to a Y-shaped training system with 2 × 4/5 shoots per meter and 1.0 m cordon above ground. Rain shelters were applied to all vines to prevent overrainfall damage. Four varieties were investigated in this study: CS, R, MH, and V. CS and R grapevines were in the same vineyard, whereas MH and V grapes were in another. The distance between the two vineyards was within 1 km. The detailed information on the variety and phenological stages are shown in Supplementary Tables 1, 2. Clusters were weighted at harvest, and estimated yield was obtained by multiplying the average cluster weight by the average cluster numbers per meter. The double cropping system in the experiment site was described by Chen et al. (2017). Briefly, vines were pruned two times, and the grapes were harvested two times per year. In mid-February, 2.5-3.0% hydrogen cyanamide was used to accelerate the bud burst. Summer grapes were harvested around late July and early August, which was called the summer cropping cycle. Then vines were pruned and followed the same procedure in August to start the second season. 
Winter grapes were harvested in January, which was called the winter cropping cycle. Berry Sampling and Meteorological Data Collection Berries of all varieties were sampled four times in each growing season. Sampling time points were as follows: (1) pea-size (E-L 31), (2) onset of veraison (E-L 35), (3) veraison complement (E-L 36), and (4) harvest (E-L 38). There were three biological replicates for each variety. For each replicate, 300 berries were randomly sampled from about 50 vines, which were distributed in three adjacent rows. One hundred berries of which were used in the determination of the physicochemical parameters. The remaining berries were immediately frozen in liquid nitrogen and stored at −80 • C for the subsequent metabolite and transcriptome analysis. Meteorological data was acquired from the local climate monitoring station within 1 km away from the experiment site. Photosynthetically active radiation and temperature were recorded per hour. Accumulated rainfall was recorded per day. Growing degree days (base 10 • C) was calculated from bloom to harvest according to Bindi et al. (1997). Analysis of Grape Physicochemical Parameters For each replicate, 100 berries were weighted and then manually squeezed. The must was centrifuged (8000 × g) to get clear juice. The total soluble solids (TSS) of the juice were determined with a digital pocket handheld refractometer (PAL-1, ATAGO CO., LTD., Tokyo, Japan). The juice pH was determined by a pH meter (Sartorius PB-10, Gottingen, Germany). TA was measured and expressed as tartaric acid equivalents (g/L) according to the National Standard of People's Republic of China (GB/T15038-2006). Extraction of Grapes Volatile Compounds The extraction of grapes volatile compounds was according to Lan et al. (2016). For each replicate, about 50 g of berries were de-seeded and mashed under liquid nitrogen. Then, the frozen samples with the addition of 1 g polyvinylpolypyrrolidone and 0.5 g D-gluconic acid lactone were ground into powder. The frozen powder was melted under 4 • C for about 6 h and then centrifuged at 8000 × g to get the clear juice. For free volatile compounds, 5 ml grape juice was added in a 20-ml vial with 1 g NaCl and 10 µl 4-methyl-2-pentanol (internal standard). For bound volatile compounds, 2 ml of the clear grape juice sample was added to Cleanert R PEP-SPE resins (150 mg/6 mL, Bonna-Agela Technologies, Tianjin, China), which had been activated with 10 ml of methanol and 10 ml of water. Then, the resins were washed with 2 ml of water and 5 ml of dichloromethane to remove water-soluble compounds and free volatiles, respectively. The resins were eluted with 20 ml methanol afterward. The methanol extract was concentrated to dryness by a rotary evaporator under vacuum at 30 • C and was redissolved in 10 ml of citrate/phosphate buffer solution (0.2 M, pH = 5). The enzymatic hydrolysis of glycosidic precursors was conducted at 40 • C for 16 h by adding 100 µl AR 2000 (Rapidase, 100 g/L, DSM Food Specialties, France). The 5 ml of the above sample was added in a 20-ml vial with 1 g NaCl and 10 µl 4-methyl-2-pentanol (internal standard). Both free and bound samples were placed in a CTC-Combi PAL autosampler (CTC Analytics, Zwingen, Switzerland) equipped with a 2-cm DVB/CAR/PDMS 50/30 µm SPME fiber (Supelco Inc., Bellefonte, PA., United States) and agitated at 500 rpm for 30 min at 40 • C. 
The SPME fiber was then inserted into the headspace to absorb aroma compounds at 40 • C for 30 min and was instantly desorbed into the gas chromatography (GC) injector for 8 min to thermally desorb aroma compounds, and the injection temperature was set at 250 • C. Gas Chromatography-Mass Spectrometer Analysis of Volatile Compounds in Grapes Both free-volatile and bound-form aroma compounds were extracted by headspace solid-phase microextraction (HS-SPME). Agilent 6890 GC coupled with Agilent 5973C mass spectrometer (MS) was used for the aroma determination. GC was equipped with an HP-INNOWAX capillary column (60 m × 0.25 mm, 0.25 µm, J&W Scientific, Folsom, CA, United States) to separate volatile compounds. The carrier gas was high purity helium with a flow rate of 1 ml/min. The oven program was set as follows: 50 • C for 1 min, increased to 220 • C at a rate of 3 • C/min, and held at 220 • C for 5 min. Identification and quantification of volatile compounds followed our research group method (Wang et al., 2019). Concentrations of volatile compounds were expressed as µg/L grape juice. RNA Extraction and Transcriptome Sequencing The berries of three development stages (E-L 35, 36, and 38) in 2014 were selected to determine the transcriptome sequencing. For each replicate, de-seeded 50 berries were ground into powder under liquid nitrogen protection. The following procedures had been described by Chen et al. (2017). Briefly, the sample total RNA was extracted by using a Spectrum Plant Total RNA Kit (Sigma-Aldrich, Carlsbad, CA, United States). Transcriptome analysis was conducted on the Illumina HiSeq 2000 platform (Illumina, Inc., San Diego, CA, United States) with 50-bp single reads and aligned against the reference grapevine genome 12 × V2, allowing no more than two mismatches. Gene expression abundance was calculated using the fragments per kilobase per million reads (FPKM) method to eliminate the influence of variation in gene length and total reads numbers on gene expression calculation (Sun et al., 2015). The R package "DESeq2" was used to identify differentially expressed genes (DEGs), and the criteria were set as false discovery rate ≤ 0.05 and fold change ≥ 2. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis of DEGs was used to select candidate genes responsible for the differences in aroma compounds between seasons. The data have been deposited in the NCBI Gene Expression Omnibus (GEO) database and are accessible through GEO accession GSE103226 (CS and R grapes) and GSE168785 (V and MH grapes). Total reads and total mapped reads per sample are shown in Supplementary Table 3. Statistical Analysis The SPSS version 22.0 (SPSS Inc., United States) was used for all significance analysis at p < 0.05 (Duncan's multiple range test or t-test). The Pearson correlation analysis was performed in MataboAnalyst 4.0 1 . The figures were prepared by using GraphPad Prism 8.0.2 (GraphPad Software, San Diego, CA, United States), SIMCA 14.1 (Umetrics, Umea, Sweden), and R-3.6.1. Heatmap was prepared by using the "pheatmap" package in R. Principal component analysis (PCA) was performed in SIMCA 14.1 (Umetrics, Umea, Sweden). Meteorological Data Meteorological data in the year 2014 and 2015 were shown and discussed by Chen et al. (2017). The climate conditions of growing seasons in all varieties were further analyzed in this study ( Table 1). CS and R grapes had a similar phenological stage in all growing seasons in 2014 and 2015. 
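Climate summaries of the kind reported in Table 1, such as growing degree days (base 10 °C, bloom to harvest) and high-temperature hours, can be derived directly from hourly station records. The sketch below is a minimal illustration of that bookkeeping, assuming a simple hourly temperature table and a 35 °C cutoff for "high-temperature hours"; the study itself followed Bindi et al. (1997), and its exact thresholds and station export format may differ.

```python
import math
import pandas as pd

# Hypothetical hourly station records (column names assumed for illustration).
hours = pd.date_range("2014-04-01", "2014-04-05 23:00", freq="H")
hourly = pd.DataFrame({
    "datetime": hours,
    "temp_c": [26 + 7 * math.sin(2 * math.pi * (t.hour - 8) / 24) for t in hours],  # toy diurnal cycle
})

BLOOM, HARVEST = "2014-04-01", "2014-04-05"  # phenology dates would come from field records
BASE_T, HIGH_T = 10.0, 35.0                  # base 10 °C for GDD; 35 °C cutoff assumed for "high-temperature hours"

mask = (hourly["datetime"].dt.date >= pd.Timestamp(BLOOM).date()) & \
       (hourly["datetime"].dt.date <= pd.Timestamp(HARVEST).date())
season = hourly[mask]

# Growing degree days: sum of (daily mean temperature - base), with negative days counted as zero.
daily_mean = season.set_index("datetime")["temp_c"].resample("D").mean()
gdd = (daily_mean - BASE_T).clip(lower=0).sum()

# High-temperature hours: number of hourly readings above the cutoff.
high_hours = int((season["temp_c"] > HIGH_T).sum())

print(f"GDD (base {BASE_T} °C): {gdd:.1f}")
print(f"Hours above {HIGH_T} °C: {high_hours}")
```

With a real station export and the recorded bloom and harvest dates from the supplementary tables, the same loop would yield per-season values comparable to those discussed here.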
The MH and V grapes had a similar phenological stage in the winter seasons of 2014 and 2015. In the summer season of 2015, MH grapes were harvested 23 days later than V grapes. The summer season had a higher mean daily temperature and more high-temperature hours than the winter season in all varieties, but the cumulative PAR and rainfall were not consistent between the years 2014 and 2015. For CS and R grapes, the winter season had less cumulative PAR, but cumulative sunshine hours and rainfall similar to the summer season, in 2014. In 2015, the summer season of CS and R grapes had more cumulative PAR, sunshine hours, and rainfall than the winter season. In 2014, the winter season of MH and V grapes had more cumulative PAR, sunshine hours, and rainfall than the summer season, while in 2015, the winter season of MH and V grapes had less cumulative PAR and fewer sunshine hours. Furthermore, the summer season of MH had more rainfall than the winter season. The weather conditions in 2010-2019 were analyzed to present the regular climate characteristics of Guangxi (Supplementary Figure 1). June, July, and August were the hottest months of the year and also had nearly the most abundant rainfall, typical of a subtropical humid monsoon climate. However, the rainfall and sunshine hours also had wide ranges in many months, which resulted in high intra- and interseasonal variability.
Grape Physicochemical Parameters
The physicochemical parameters of CS and R grapes were shown and discussed by Chen et al. (2017). In brief, the berries of the summer cropping showed a higher TSS at E-L 31 but showed the opposite result at E-L 38 in CS and R grapes. Berry weight increased along with berry development, but the berry weight in the winter cropping was significantly lower than that of the summer cropping at E-L 35, 36, and 38 in CS and R grapes. The TSS, TA, pH, and 100-berry weight of MH and V grapes at different sampling times are shown in Table 2. In 2014, the winter season berries had higher TSS than the summer season berries during the development stages. V grapes reached only 12.8 °Brix at harvest in the summer season of 2014, which was almost 8 °Brix lower than their corresponding winter season berries. In 2015, there was no significant difference in TSS of MH berries between the summer and winter seasons during the development stages. For V grapes, the summer season grapes ripened faster than the winter season ones, although the winter season berries still had higher TSS at harvest. MH and V berries had lower TA and higher pH in the summer season than in the winter season. Similar to CS and R grapes, berry weight in the winter cropping was also lower than that in the summer season in MH and V grapes. Reduced berry weight in the winter cropping led to a lower yield than the summer cropping (Supplementary Table 2).
Grape Volatile Compounds
In total, 173 free-form and 137 bound-form volatile compounds were identified in the four grape varieties, and these compounds were sorted into seven groups: C6/C9 compounds, terpenes, norisoprenoids, alcohols, carbonyl compounds, esters, and others (Supplementary Tables 4, 5). PCA was used to identify the aroma profile variations among the mature grapes (E-L 38) under the double cropping system, as shown in Figure 1. In CS grapes (Figure 1A), two principal components explained 69.2% of the total variance. PC1 (R2X[1]) discriminated the berries of 2014 from those of 2015 and accounted for 43.1% of the total variance.
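For readers who want to reproduce this kind of ordination outside SIMCA, a minimal PCA sketch on a hypothetical volatile table is shown below. scikit-learn is assumed here; the study used SIMCA 14.1, and preprocessing choices such as scaling may differ from this illustration.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical matrix: rows = samples (variety x season x replicate), columns = volatile compounds (µg/L).
samples = [f"CS_{season}{year}_rep{r}" for year in (2014, 2015)
           for season in ("summer", "winter") for r in (1, 2, 3)]
volatiles = [f"compound_{i}" for i in range(1, 21)]
X = pd.DataFrame(rng.lognormal(mean=2.0, sigma=0.5, size=(len(samples), len(volatiles))),
                 index=samples, columns=volatiles)

# Unit-variance scaling before PCA, a common choice for concentration data.
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("Explained variance (PC1, PC2):", np.round(pca.explained_variance_ratio_, 3))
loadings = pd.DataFrame(pca.components_.T, index=volatiles, columns=["PC1", "PC2"])
print(loadings.head())
```

The explained-variance ratios play the same role as the R2X values quoted above, and the loadings table underlies loading plots of the kind discussed in this section.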
The loading plot showed that the CS berries of 2014 had more abundant aroma compounds than those of 2015. PC2 (R2X[2]) discriminated the winter berries from the summer berries and accounted for 26.1% of the total variance. The loading plot showed that the winter berries had abundant terpenes, and the summer berries had abundant norisoprenoids. In R grapes (Figure 1B), two principal components explained 71.7% of the total variance. Similar to the CS grapes, the two principal components could discriminate berries of the four seasons from each other. The loading plot showed that the winter berries had more abundant aroma compounds than the summer berries, especially terpenes. In V grapes (Figure 1C), two principal components explained 68.6% of the total variation. PC1 (R2X[1]) separated the berries of different seasons, accounted for 50.7% of the total variation. Vintage variation only occupied 17.9% of the total variation. The winter season berries had an abundant aroma profile than the summer season berries, and most terpenes and norisoprenoid concentrations were higher in the winter berries than those in the summer berries. In MH grapes (Figure 1D), two principal components explained 76.8% of the total variation. Unlike the previous three varieties, the summer season berries of 2014 and 2015 could not be clearly discriminated by the PCA model. PC1 (R2X[1]) accounted for 53.3% of the total variation that could discriminate the winter season berries of 2015 from the berries of the two summer seasons. PC2 (R2X[2]) accounted for 23.5% of the total variation that could discriminate the winter season berries of 2014 from other seasons. The winter season berries of 2015 had the most abundant aroma compounds, especially terpenes, norisoprenoids, and C6/C9 compounds. Total Concentrations of C6/C9 Compounds, Terpenes, and Norisoprenoids To figure out how the grape-derived aromas changed during the development stages, the total concentrations of C6/C9 compounds, terpenes, and norisoprenoids were calculated, as shown in Figure 2. The C6/C9 compounds were the most abundant aroma compounds in all the grapes. The accumulation trends of C6/C9 compounds were not consistent in the four growing seasons. In most seasons, C6/C9 compounds peaked at E-L 36 and then declined until the harvest, which was in agreement with the previous study (Wang et al., 2019). However, there were some seasons when the grapes of E-L 38 had the highest C6/C9 concentration, such as the winter season of V and MH grapes. In CS grapes, the berries of 2014 had higher C6/C9 compound concentrations than those of 2015. The significant difference between the summer and winter seasons in C6/C9 compound concentrations only occurred in the former development stages in 2014. However, in the other three varieties, the winter season berries had higher C6/C9 compound concentrations than those of the summer season within the same vintage in most development stages, especially at harvest. For terpenes, MH grapes had the highest concentration among the four varieties, with at least 2000 µg/L at harvest. The other three varieties only had 50-400 µg/L terpenes at harvest, indicating that the grapes of the Muscat family usually had abundant terpenes (Fenoll et al., 2009). CS and R grapes had similar trends in terpenes accumulation. They had the highest total terpene concentrations at E-L 31, then declined until harvest. In MH grapes, the highest terpene concentration occurred at E-L 38. 
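The class totals plotted in Figure 2 amount to summing all compounds assigned to each group for a given sample. A sketch of that bookkeeping on a hypothetical tidy table is shown below; the compound names, group assignments, and concentrations are illustrative only.

```python
import pandas as pd

# Hypothetical tidy table of quantified volatiles (concentrations in µg/L).
df = pd.DataFrame({
    "variety":  ["MH", "MH", "MH", "MH", "V", "V"],
    "stage":    ["E-L 38"] * 6,
    "season":   ["winter"] * 4 + ["summer"] * 2,
    "compound": ["linalool", "geraniol", "hexanal", "beta-ionone", "hexanal", "linalool"],
    "group":    ["terpenes", "terpenes", "C6/C9", "norisoprenoids", "C6/C9", "terpenes"],
    "conc_ug_L": [850.0, 420.0, 310.0, 2.1, 260.0, 12.5],
})

# Total concentration per compound class for each variety/season/stage combination.
totals = (df.groupby(["variety", "season", "stage", "group"], as_index=False)["conc_ug_L"]
            .sum()
            .rename(columns={"conc_ug_L": "total_ug_L"}))
print(totals)
```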
In V grapes, a significant increase in terpenes occurred only in the 2014 winter season as the grapes developed. However, in the other seasons, the terpene concentration at harvest was slightly higher than, or differed little from, that at E-L 31 in V grapes. For norisoprenoids, there were no consistent results across all varieties. In CS grapes, the summer season berries of 2014 had a higher norisoprenoid concentration than those of the winter season in the same year, whereas no significant difference was observed in the 2015 vintage. In R grapes, the significant differences between the summer and winter season berries occurred at E-L 35 and E-L 36 in 2014 and at E-L 38 in 2015. MH and V grapes had the same results in the two vintages, and the winter berries had higher norisoprenoid concentrations than those of the summer season.
Figure caption: Heatmaps show the log2 fold changes between seasons (winter season/summer season). Red blocks indicate higher aroma concentrations in the winter season berries; blue blocks indicate lower aroma concentrations in the winter season berries. *Significant differences between the summer and winter season (p < 0.05, t-test). IBMP, 2-isobutyl-3-methoxypyrazine; TDN, 1,1,6-trimethyl-1,2-dihydronaphthalene; TPB, (E)-1-(2,3,6-trimethylphenyl)buta-1,3-diene.
Variations of Volatile Compounds Between Growing Seasons
To figure out how the growing seasons affected the concentrations of individual volatile compounds, the key compounds were selected by using the t-test, namely those showing significant differences at one or more sampling points between seasons. The selected free volatile compounds are shown in Figure 3. In CS grapes, many terpenes had higher concentrations in the winter season at one or more stages, especially in 2014. However, some norisoprenoids, such as (E)-β-ionone, 6-methyl-5-hepten-2-one, geranylacetone, (Z)-β-damascenone, and (E)-β-damascenone, had lower concentrations in the winter season. β-Damascenone occupied the highest proportion of the norisoprenoids in CS grapes (Supplementary Table 4). (E)-2-Hexenal and hexanal were the main C6/C9 compounds with the highest concentrations (Supplementary Table 4). The winter berries had a higher hexanal concentration but a lower (E)-2-hexenal concentration than the summer berries at the harvest date. In R grapes, most terpenes had higher concentrations in the winter season, such as γ-terpinene, α-terpinene, β-myrcene, terpinolene, geraniol, etc. In 2014, many terpenes had higher concentrations in the winter season berries only before harvest, and these differences disappeared at harvest, as seen for bornylene, α-terpineol, terpinen-4-ol, α-calacorene, and D-limonene. Although most norisoprenoids had higher concentrations at several stages, (E)-β-ionone was the only norisoprenoid with a higher concentration in the summer berries in both vintages at harvest. TDN is well known to contribute "petrol" aromas to "Riesling" wines (Sacks et al., 2012), and it had a higher concentration in the winter berries at harvest in 2015. Different from CS grapes, the R winter season berries had high concentrations of both (E)-2-hexenal and hexanal. In V grapes, most terpenes were also more abundant in the winter season berries, such as D-limonene, β-ocimene, terpinolene, terpinen-4-ol, linalool, α-terpineol, cis-furan linalool oxide, etc. For norisoprenoids, the winter season berries had a higher (E)-β-ionone concentration at all sampling times in 2014 and 2015.
(Z)-β-damascenone and (E)-β-damascenone only showed higher concentrations in the winter season berries at harvest in 2014 and 2015. Similar to R grape, its (E)-2-hexenal and hexanal also had higher concentrations in the winter season berries at harvest. In MH grapes, almost all of the selected compounds had higher concentrations in the winter season berries at E-L 38. Many of them did not show any difference in the early development, even were higher in the summer season berries at E-L 35 and E-L 36. Except for terpenes, norisoprenoids, and C6/C9 compounds, some benzene derivatives were also had higher concentrations in the winter season berries in all varieties, such as benzaldehyde, benzeneacetaldehyde, benzyl alcohol, and β-phenylethyl alcohol. These compounds contribute roasted, honey, almond, fruity, and floral flavors to the grapes or their wines (Cai et al., 2014). High temperatures could enhance their biotransformation and degradation rate, whereas lower temperatures would increase their concentrations (Scafidi et al., 2013). The selected bound volatile compounds are shown in Supplementary Figure 2. Compared to the free volatile compounds, there were fewer selected bound-form compounds with significant differences between seasons. In CS grapes, most of the selected compounds had no consistent trends in 2014 and 2015. Only 2,3-butanedione, cis-furan linalool oxide, and transfuran linalool oxide showed higher concentrations in the winter season berries in serval stages over 2 years. In R and V grapes, most of the terpenes had higher concentrations in the winter season berries, which was in agreement with the free volatile results. However, in MH grapes, most of the selected compounds showed the opposite trends in 2014 and 2015. Relationship Between Volatile Compounds and Climate Factors As mentioned above, the summer seasons had more hightemperature hours than the winter seasons, but the accumulated PAR and rainfall were not consistent in the two vintages. Thus, the Pearson correlation analysis was used to figure out how climate factors affect the berries' volatile compounds at harvest. The highly correlated compounds (| r 2 | > 0.6) in at least three varieties were selected, as shown in Table 3. Seventeen volatiles showed high correlations to the high-temperature hours, and most of them were negatively correlated. For C6/C9 compounds, hexanal was negatively correlated with the high temperatures in all of the four varieties. For terpenes, γ-terpinene, terpinen-4ol, cis-furan linalool oxide (G), and trans-pyran linalool oxide (G) were all negatively correlated with high temperatures in all of the four varieties. Other terpenes, such as geraniol, were negatively correlated with high temperatures in R, MH, and V grapes, and geranial (G), nerol (G), geraniol (G), and p-cymene were negatively correlated with high temperatures in R and MH grapes. Different from the pronounced effects of high-temperature hours on aromas, fewer compounds highly correlated with PAR and rainfall were selected. Free-and bound-form p-menthan-8-ol concentrations were all positively correlated with accumulated whole season PAR. trans-Furan linalool oxide (G) was negatively correlated with accumulated whole season PAR in CS, R, and MH grapes. The rainfall correlated compounds had no consistent trends in four varieties. 
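The correlation screen described in this subsection amounts to correlating each harvest-stage volatile with a climate variable and keeping compounds that pass the reported |r2| > 0.6 cutoff in several varieties. A minimal sketch on hypothetical data is shown below; the study itself ran the analysis in MetaboAnalyst, and the simulated values here are for illustration only.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical harvest-stage data: one row per variety x season x year combination.
n = 16
climate = pd.Series(rng.uniform(0, 400, n), name="high_temp_hours")
volatiles = pd.DataFrame({
    "hexanal": 300 - 0.5 * climate + rng.normal(0, 20, n),          # constructed to depend negatively on temperature
    "gamma_terpinene": 40 - 0.05 * climate + rng.normal(0, 3, n),
    "ethyl_acetate": rng.uniform(5, 50, n),                          # unrelated to temperature
})

R2_CUTOFF = 0.6  # the selection criterion reported in the text

selected = {}
for compound in volatiles.columns:
    r, p = pearsonr(climate, volatiles[compound])
    if r * r > R2_CUTOFF:
        selected[compound] = round(r, 2)

print("Compounds passing the correlation screen:", selected)
```

Applied per variety, the same loop reproduces the kind of shortlist summarized in Table 3, with the sign of r indicating whether a compound increases or decreases with the climate variable.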
2-Methyl-D-Erythritol-4-Phosphate Phosphate and Mevalonic Acid Pathway Terpenes were derived from two common precursors: isopentenyl pyrophosphate (IPP) and its isomer dimethylallyl diphosphate (DMAPP), which were synthesized from two independent pathways: the plastidial MEP and the cytoplasmic MVA pathways, respectively (Wen et al., 2015). Carotenoids, the precursors of norisoprenoids, were also synthesized from the MEP pathway (Meng et al., 2020). The log 2 fold change was used to present the variations between the summer and winter season berries through MVA and MEP pathways (Figure 4). In the MEP pathway, five VviDXS genes (VIT_200s0218g00110, VIT_204s0008g04970, VIT_205s0020 g02130, VIT_211s0052g01730, and VIT_211s0052g01780) were expressed differently in the berries of the winter and summer seasons in at least two varieties. Only the expression of VviDXS2 (VIT_200s0218g00110) was downregulated in MH grapes, and other genes were all upregulated in several varieties in the winter season berries. VviDXS3 (VIT_204s0008g04970) was the common upregulated expression gene in the winter season berries of all four varieties. VviIPI (VIT_204s0023g00600 and VIT_211s0206g00020) was responsible for the transformation between IPP and DMAPP, which had upregulated expression in R and V winter season berries. The mRNA levels of the GPPS small subunit might play a key role in regulating the formation of GPPS and thus affecting the monoterpene biosynthesis (Tholl et al., 2004). In the present study, VviGPPS small subunit (VIT_219s0090g00530) had higher expressions in MH grapes than other varieties and had low expression levels in CS and V grapes (Supplementary Table 6), which might be correlated with the monoterpene concentration variation among these varieties. In the MVA pathway, there were also many genes significantly affected by the growing seasons. The expression of one VviAACT gene (VIT_218s0089g00590) was downregulated in the winter season berries of all four varieties. HMGR was the key enzyme of the MVA pathway. There were three VviHMGR genes (VIT_203s0038g04100, VIT_204s0044g01740, and VIT_218s0122g00610) differently expressed in the winter and summer season berries. Among them, VIT_204s0044g01740 was upregulated in R, V, and MH grapes. Terpene synthases (TPSs) were the final enzymes of the terpene biosynthetic pathway. TPS-a, TPS-b, and TPS-g were the main VviTPS genes with high expressions (Wen et al., 2015). In CS grapes, most selected VviTPS genes were downregulated in the winter season berries. Glycosyltransferases (GTs) could converse free terpenes into their corresponding glycoside bound forms, and three genes had been proved to have such character: VviGT7, VviGT14, and VviGT15 (Bönisch et al., 2014). The downregulated expression of VviGT7 and VviGT14 were shown in all of the four varieties in the winter season berries. Carotenoid Metabolism Pathway As mentioned above, the MEP pathway also synthesized carotenoids, which were the precursors of norisoprenoids. The following pathway after synthesizing geranylgeranyl Table 7. The condensation of two GGPPs by phytoene synthase (PSY) formed phytoene, the first carotenoid (Cazzonelli and Pogson, 2010). There were three identified VviPSYs genes (VIT_212s0028g00960, VIT_206s0004g00820, and VIT_204s0079g00680), which expressed differently between the berries of the summer and the winter season. 
VviPSY3 (VIT_206s0004g00820) and VviPSY2 (VIT_212s0028g00960) had higher expressions in the winter season berries and showed significant differences in several stages. Carotenoid cleavage dioxygenases (CCDs) were the key enzymes that catalyzed the generation of norisoprenoids (apocarotenoids) by cleaving the conjugate double bond of carotenoids (Meng et al., 2020). There were four VviCCDs selected in the present study, and most of them had higher expressions in the winter season berries, especially VviCCD4a. In CS grapes, VviCCD4a (VIT_202s0087g00910) had higher expressions at E-L 35 and E-L 38, whereas it was downregulated at the E-L 36 stage. Oxylipin Pathway The C6/C9 compounds, or GLVs, were short-chain alcohols, aldehydes, and esters formed through the oxylipin pathway (Hassan et al., 2015). The main enzymes in the oxylipin pathway included lipoxygenase (LOX), hydroperoxide lyase (HPL), and alcohol dehydrogenase (ADH). The selected genes were expressed differently in the oxylipin pathway between the summer and winter season berries are shown in Supplementary Figure 4. Compared with LOXs, VviLOXA (VIT_206s0004g01510) had high expression levels in the whole development stages of all the four varieties, which might play a key role in the LOX family (Supplementary Table 8; Podolyan et al., 2010;Xu et al., 2015a). The expression of VviLOXA was upregulated in the winter season berries of R, V, and MH grapes, whereas it was downregulated in the CS winter season berries. VviHPL1 (VIT_212s0059g01060) had high expression levels in the development stages. It was reported that VviHPL1 was also related to the accumulation of C6 compounds (Xu et al., 2015a). However, there were no consistent trends in all of the four varieties in the present study. ADH was responsible for the conversion of aldehydes to alcohols. About half of VviADH expressions were downregulated in the winter season berries in all of the four varieties, and others were upregulated. For C6 alcohols, only the MH winter season berries had higher (E)-2-hexen-1-ol and (Z)-3-hexen-1-ol concentrations than the summer season berries in 2014 and 2015. The winter season berries of CS and R had higher (E)-2-hexen-1-ol concentrations than the summer season berries in 2015 and showed an opposite trend in 2014. The V winter season berries had a lower (E)-2hexen-1-ol concentration than the summer season berries in both of the two vintages. Relationship Between Volatile Compounds and Transcriptome Gene Expression To figure out the relationship between volatile compounds and transcriptome gene expression, we selected the transcriptome genes involved in C6/C9, terpenes, and norisoprenoids synthesis pathway to calculate their correlation with the concentration of each corresponding category during berry development (E-L 35, E-L 36, and E-L 38). The highly correlated compounds (the Pearson correlation analysis, | r 2 | > 0.6) in at least two varieties were selected, as shown in Supplementary Table 9. Only three genes involved in the oxylipin pathway showed high correlations to total C6/C9 compound concentration, and two of them were negatively correlated. The genes related to terpenes occupied the highest proportion in all selected genes, and 51 genes showed high correlations with total terpene concentration. 
Among them, five genes (VIT_211s0052g01730, VIT_203s0038g03050, VIT_202s0025g04864, VIT_202s0025g04880, and VIT_205 s0051g00670) were positively correlated with the total terpene concentration and six genes (VIT_215s0046g03550, VIT_215s0046g03590, VIT_215s0046g03600, VIT_215s0046 g03650, VIT_206s0004g02740, and VIT_214s0083g00770) were negatively correlated with high total terpene concentration in at least three varieties. The VviDXS3 (VIT_211s0052g01730), which was upregulated in the winter berries in all varieties (Figure 4), was positively correlated with the berry terpene concentration in CS, V, and MH grapes. In the carotenoid metabolism pathway, only five genes were selected to have a high correlation with berry norisoprenoid concentration. Among them, two VviCCDs (VIT_213s0064g00810 and VIT_213s0064g00840) were positively correlated with the norisoprenoid concentration in R and V grapes, which was in agreement with the previous analysis. However, the two VviCCDs (VIT_213s0064g00810 and VIT_213s0064g00840) were upregulated in the winter berries in R and V grapes. Effect of the Growing Season on Berries Physicochemical Parameter The berries in the summer season usually had lower TSS than in the winter season under the double cropping system (Xu et al., 2011;Zhu et al., 2017), which was also confirmed in the present study. Severer high-temperature pressure and fewer sunshine hours in the 2014 summer season might inhibit TSS accumulation in the grapes. For MH and V grapes, there were 351 high-temperature hours in the 2014 summer season but only 89 h in the 2014 winter season. Although elevated temperature usually accelerated the sugar accumulation in the grape berries, scorching conditions would exceed the optimum photosynthetic temperature (Gutiérrez-Gamboa and Moreno, 2019). When the temperature exceeded 35 • C, it would cause damage to the photosynthetic apparatus of the grapevines (Gutiérrez-Gamboa et al., 2021). However, in 2015, there was no significant difference in TSS in the MH berries between the summer and winter seasons during the developmental stages. This might be due to the sunshine hours in the winter season of MH grape, which were only 57% of the summer season and led to less carbon assimilation of vines. Fewer sunshine hours during the grape development in the 2015 winter season might slow down the TSS accumulation in the V grapes, which led to a slower ripening rate from the stages of E-L 31 to E-L 35 in the winter season berries. Effect of the Growing Season on Berries Volatile Compounds The winter season berries had higher terpene concentrations than those of the summer seasons in all of the four varieties. In general, most studies on the aromas and aroma precursors of fruity and floral nuances not only highlighted the benefit of the higher temperatures during berry ripening but also their negative effects on the fruit metabolism whenever they were excessively high (Pons et al., 2017). Grapes in warm regions were reported to have higher terpene concentrations than in hot regions (Lecourieux et al., 2017). The Pearson correlation analysis showed that γ-terpinene, terpinen-4-ol, cisfuran linalool oxide (G), and trans-pyran linalool oxide (G) were all negatively correlated with high temperatures in all of the four varieties in the present study. The elevated temperature (>35 • C) would inhibit the accumulation of terpenes (Scafidi et al., 2013). 
Furthermore, terpene concentrations might be negatively correlated with the average daily maximum temperature during the ripening because of volatilization (Gutiérrez-Gamboa et al., 2021). In our study, the expression of VviDXSs was commonly upregulated in the winter season berries of all four varieties. Lecourieux et al. (2017) showed that the strong repression of the genes encoding the 1-deoxy-D-xylulose-5phosphate synthase (VIT_05s0020g02130, VIT_09s0002g02050, VIT_11s0052g01730, and VIT_11s0052g01780) suggested that the volatile terpenoid biosynthesis might be decreased by high temperature. Similarly, Rienth et al. (2014) reported that high temperatures impaired the expression of 1-deoxy-D-xylulose-5-phosphate synthase transcripts (VIT_11s0052g01730 and VIT_11s0052g01780), which were required for the isopentenyl pyrophosphate (IPP) synthesis, the universal precursor for the biosynthesis of terpenes. The regression of high temperatures on the VviDXSs expression might be the reason for the lower terpene concentration in the summer season berries in the present study. There were also some gene expressions, such as VviGTs, that were downregulated in the winter season berries. The contents of many bound terpene substances are lower in winter than in the summer season (Supplementary Table 5 and Supplementary Figure 2). The GTs were responsible for the synthesis of bound terpene substances as a GT, so to a certain extent, it could be speculated that the downregulation of VvGTs expression caused a decrease of bound terpene substances in winter berries. Free-and bound-form p-menthan-8-ol concentrations were all positively correlated with the accumulation of PAR in the whole season. In general, previous studies reported that the increased light exposure was beneficial for the terpene accumulation, and the shading treatment led to lower monoterpenes levels in bunches (Bureau et al., 2000). In hot climates, the beneficial effect of increased synthesis of terpenes induced by light might be surpassed by the negative effect of the elevated berry temperature (Friedel et al., 2016). The rainfall correlated compounds had no consistent trends in all of the four varieties, and its effect might also be covered up by the high temperature effect. The MH and V grapes had similar results in the two vintages, and the winter season berries had a higher norisoprenoid concentration than those of the summer season. Similar to the terpenes accumulation, high temperatures also inhibited the biosynthesis of norisoprenoids (Wang et al., 2020). Lecourieux et al. (2017) found that heat treatment would repress the expressions of the genes encoding the key enzymes in the carotenoid metabolism, which formed norisoprenoids. So in R, V, and MH grapes, the winter season berries had higher norisoprenoid concentrations than those of the summer seasons. Regarding the different results in CS grapes, different varieties had varied temperature, sunlight, or water requirements, leading to varying responses to the climate changes (Schultz, 2003;Parker et al., 2020). The CS grapes were reported as a late-ripening variety (Parker et al., 2013), which might require more temperature than the other three varieties. The winter seasons might not meet the temperature requirement for the norisoprenoids accumulation in CS grapes, which led to less norisoprenoid concentrations than in the summer seasons. Moreover, the variation between different vineyards might also contribute to the differences between CS and the other two table varieties. 
The east-west row orientation was believed to have the lowest sunlight interception in canopies among all vineyard orientations (Lu et al., 2021), which was also unfavorable for the norisoprenoid accumulation in CS grapes. In the present study, four VviCCDs were expressed differently in different growing seasons, and most of them had higher expressions in the winter season berries. Scherzinger and Al-Babili (2008) found that both cold (20 • C) and heat stress (38 • C) could increase the expression of the CCD genes. However, Meng et al. (2020) found the high temperature (37 • C) repressed the activity of the VvCCD4b promoter. In the present study, the upregulated expression of most VviCCDs in the winter season berries might show that high temperatures were unfavorable for their expressions. However, in CS grapes, the winter season berries had lower norisoprenoid concentration than those in the summer season. Different expressions of VviCCD1s (VIT_213s0064g00840 and VIT_213s0064g00810) in CS grapes and other varieties might be the reason. The accumulation trends of C6/C9 compounds were not consistent in the four growing seasons in the present study. In most seasons, C6/C9 compounds peaked at E-L 36 and then declined until the harvest. However, there were some seasons when the berries at E-L 38 had the highest C6/C9 concentrations, such as the winter seasons of V and MH grapes. Kalua and Boss (2010) found that CS grapes had the highest C6/C9 compound concentration at pre-harvest, whereas R grapes had the highest C6/C9 compound concentration at harvest. As the C6 compounds are derived from varietal precursors, they could hypothetically contribute to judging wine origin and affiliation (Oliveira et al., 2006). In the Pearson correlation analysis in this study, hexanal was negatively correlated with high temperatures in all of the four varieties. As reported, hexanal was derived from linoleic acid hydroperoxide through the LOX pathway (Oliveira et al., 2006). Podolyan et al. (2010) also reported that two recombinant LOXs had the maximum enzymatic activity at 25 • C and lost about 40% of their maximal activity when the temperature exceeded 35 • C. As for LOXs, the expression of VviLOXA was upregulated in the winter berries of R, V, and MH grapes, whereas it was downregulated in CS winter berries. R, V, and MH winter berries had higher C6/C9 concentrations than the summer berries, whereas there was no significant difference in CS grapes at harvest. The variations of VviLOXA expression in CS and other varieties might be the reason. CONCLUSION This research used metabolomics and transcriptomics to reveal the aroma variations in different grape varieties under the double cropping system. The winter berries had higher TSS content and TA than the summer berries. The lower berry weight in the winter season caused a decreased yield compared to those of the summer season. The winter berries had higher concentrations in many aroma categories than the summer berries, especially terpenes. Climate factor variations were the main reason for the quality variations in the summer and winter season berries. Among all of the climate factors, the temperature might be the dominant one, and its influence could cover up the effects of other factors. Different from other varieties, the CS winter berries had lower norisoprenoid concentrations than the summer berries, indicating that the responses to climate changes might be variety-dependent. 
The higher concentrations of terpenes and norisoprenoids in the winter berries of most varieties could be associated with the regulated expression of VviDXSs, VviPSYs, and VviCCDs at the transcription level. The contrasting climates of the summer and winter seasons provided a better understanding of how climate changes influence the grapes' secondary metabolites. Because the seasonal variation within a single vintage under the double cropping system usually exceeds the vintage effects seen in traditional viticulture regions, these effects were particularly apparent in our results.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. These data can be found in the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) database under accession numbers GSE103226 and GSE168785.
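For readers who want to retrieve the deposited expression data, the snippet below sketches one way to pull the two GEO series records with the third-party GEOparse package (assumed to be installed; the exact supplementary files available depend on the GEO records themselves).

```python
# Sketch: fetch GEO series metadata for the deposited datasets.
# Assumes the third-party GEOparse package is installed (pip install GEOparse).
import GEOparse

for accession in ("GSE103226", "GSE168785"):
    gse = GEOparse.get_GEO(geo=accession, destdir="./geo_cache")
    print(accession, "-", gse.metadata.get("title", ["(no title)"])[0])
    print("  samples:", len(gse.gsms))
```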
Genome-wide identification and characterization of COMT gene family during the development of blueberry fruit Background Caffeic acid O-methyltransferases (COMTs) play an important role in the diversification of natural products, especially in the phenylalanine metabolic pathway of plant. The content of COMT genes in blueberry and relationship between their expression patterns and the lignin content during fruit development have not clearly investigated by now. Results Ninety-two VcCOMTs were identified in Vaccinium corymbosum. According to phylogenetic analyses, the 92 VcCOMTs were divided into 2 groups. The gene structure and conserved motifs within groups were similar which supported the reliability of the phylogenetic structure groupings. Dispersed duplication (DSD) and whole-genome duplication (WGD) were determined to be the major forces in VcCOMTs evolution. The results showed that the results of qRT-PCR and lignin content for 22 VcCOMTs, VcCOMT40 and VcCOMT92 were related to lignin content at different stages of fruit development of blueberry. Conclusion We identified COMT gene family in blueberry, and performed comparative analyses of the phylogenetic relationships in the 15 species of land plant, and gene duplication patterns of COMT genes in 5 of the 15 species. We found 2 VcCOMTs were highly expressed and their relative contents were similar to the variation trend of lignin content during the development of blueberry fruit. These results provide a clue for further study on the roles of VcCOMTs in the development of blueberry fruit and could promisingly be foundations for breeding blueberry clutivals with higher fruit firmness and longer shelf life. Supplementary Information The online version contains supplementary material available at 10.1186/s12870-020-02767-9. Background Blueberries have become widely appreciated worldwide because they contain phytonutrients such as flavonoids, which were discovered in the early 1900s [1][2][3][4]. The flavonoids in blueberry fruits have been confirmed to control diabetes, exert anti-inflammatory and neuroprotective, effects and protect eye health through their antioxidant activity [5]. Because the functions of blueberry component have made it to be accepted by an increasing number of people as "super fruits" [6], global blueberry production has greatly grown 35% from 2004 to 2016 [7]. However, because of respiration, evaporation, pathogen infection and cell wall degradation, the blueberry fruits have a characteristic of high perishability [8]. How to maintain the quality of flesh blueberry fruit is an urgent problem. Major thrusts of research on the blueberry fruit softening are in two ways. One is on the mechanism of fruit softening related to cell wall structure and some hydrolytic enzyme [9,10], the other one is to extend shelf life by external treatment like cold stage [11], high oxygen treatment [12], cuticular wax preservation [13], ethylene absorbent treatment [14], sodium nitroprusside treatment [15] and acibenzolar-S-methyl treatment [8]. The main theory of sodium nitroprusside treatment and acibenzolar-S-methyl treatment is to improve the activities of phenylalanine ammonia lyase (PAL) and CoA ligase (4CL) in lignin metabolism pathway and Peroxidase (POD) to catalyze the polymerization of precursors of phenolic substances into lignin phenols, so as to make the fruit lignified, strengthen the host cell wall and inhibit pathogen growth [16]. Lignin is a characteristic component of cell walls. 
Treatment of fruits can induce changes in the lignin biosynthesis pathway to influence the metabolites to have an effect on the pathogen infection and fruit firmness [17]. At present, many fruit trees and vegetables have been reported their effect of lignification on postharvest fruits, such as strawberry [18], red raspberry [19], zucchini fruit [20] and blueberry [15]. The main treatment methods of affecting lignification are external application after harvest. There are only a few studies on genetic modification to increase fruit lignification to make the preservation period prolonged effectively. O-methyltransferases (OMTs) are a multifunctional enzyme in the lignin and flavonoid biosynthesis pathway, in Arabidopsis thaliana it can converse caffeic acid to ferulic acid and 5-OH coniferaldehyde/5-OH coniferyl alcohol to sinapaldehyde/sinapyl alcohol, forming G and S units of lignin [21]. COMTs catalyze N-acetyl serotonin into melatonin [22,23]. The overexpression of them also can help plant grow [24]. Sorghum bicolor COMT can be involved in tricin biosynthesis methylated the flavones luteolin and selgin [25]. The expression of MOMT4 in aspen can change the structure of lignin, which increase the crosslinking of condensed lignin subunits by G-units [26]. On the flavonoid biosynthesis pathway, the antioxidant activity of flavonoids is related to the number of hydroxyl substituents: greater numbers of hydroxyl substituents are associated with stronger antioxidant and prooxidant activities. O-methylation of hydroxyl substituents inactivates both the antioxidant and prooxidant activities of flavonoids [27]. OMTs can be divided into two groups: PI-OMT I family and PI-OMT II family [28]. PI-OMT I family forms by CCoAOMTs, and COMTs belongs to PI-OMT II family. Most of COMTs have two types of domain, Dimerisation (PF08100) and Methylransf_2 (PF00891). There are 7 motifs conserved in COMTs, among them motif A and motif E may be the putative SAM-binding domains. COMTs have a wider range of catalytic substrates such as lignin precursors, alkaloids, flavonoids [29]. These compounds play an important role in plant growth and development and in the face of biotic and abiotic stresses. Therefore, plant OMT enzymes have been widely studied [2,30,31]. Publications of different plant genomes has enabled analyses of COMT family genes in several species to be carried out [32,33]. Blueberry has been widely studied because of its large amounts of flavonoids. The tetraploid blueberry genome was released in 2019 [34]. In this study, we identified COMTs family to find OMTs that may related to the methylation of lignin precursors and flavonoids during the growth and development of blueberry fruits Based on the genome of tetraploid blueberry. The results of this study will build foundations for breeding blueberry cultivars with higher fruit firmness and longer shelf life. Phylogenetic and sequence analyses of COMT genes in blueberry To identify COMT genes in the blueberry genome, one characterized sequence from Arabidopsis thaliana (AT5G54160) and 36 identified sequences from Populus trichoarpa were used as a set of queries in a BLASTP search (E < 1e-5) [35]. In all, 123 candidate sequences were retrieved from the blueberry genome. Then, all the 123 candidate sequences scanned for a Methyltransf_2 domain. Ninety-two sequences with a Methyltransf_2 domain were identified in blueberry. 
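The two-step screen described above, BLASTP hits at E < 1e-5 followed by a requirement for the Methyltransf_2 domain, can be expressed as a small filtering script. The sketch below is illustrative only: the file names and column layouts are assumptions (BLAST+ tabular output and a simple per-protein domain table), not the exact pipeline used in the study.

```python
# Sketch of the candidate-gene screen: keep BLASTP subjects below the E-value cutoff,
# then require a Methyltransf_2 domain call. Assumed input formats:
#   blast_hits.tsv - BLAST+ tabular output (-outfmt 6): query, subject, ..., evalue, bitscore
#   domains.tsv    - two columns: protein_id <tab> pfam_domain (e.g. from a Pfam/HMMER scan)
import csv

EVALUE_CUTOFF = 1e-5
REQUIRED_DOMAIN = "Methyltransf_2"  # PF00891

def blast_candidates(path):
    ids = set()
    with open(path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            subject, evalue = row[1], float(row[10])  # in outfmt 6, the E-value is column 11
            if evalue < EVALUE_CUTOFF:
                ids.add(subject)
    return ids

def domain_positive(path, domain=REQUIRED_DOMAIN):
    ids = set()
    with open(path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if len(row) >= 2 and row[1] == domain:
                ids.add(row[0])
    return ids

if __name__ == "__main__":
    candidates = blast_candidates("blast_hits.tsv")
    with_domain = domain_positive("domains.tsv")
    vccomts = sorted(candidates & with_domain)
    print(f"{len(candidates)} BLAST candidates, {len(vccomts)} retained with {REQUIRED_DOMAIN}")
```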
All of them were mapped to pseudochromosomes (VaccDscaff1-VaccDs-caff48) and renamed from VcCOMT1 to VcCOMT92 according to orders of location on the pseudochromosomes. Gene characteristics were analyzed in Table S1 (Additional file 1: Table S1). The result showed that VcCOMT56 was the shortest protein (112 amino acid) and the longest one was VcCOMT89. The analysis of molecular weight showed that 92 VcCOMT proteins ranged from 12 to 201 kDa, and the isoelectric point ranged from 4.62 to 8.73. A maximum likelihood (ML) phylogenetic tree created by using blueberry COMT protein sequences showed that the sequences were distributed into 2 groups, and this finding was supported by high bootstrap values and gene structure (Fig. 1a). Gene structure and conserved domain analysis revealed that all COMTs had a Cterminal catalytic domain named Methyltransf_2 domain including a SAM/SAH binding pocket and a substratebinding site. Some of them showed a common structure with an N-terminal domain called Dimerization [36]. The SAM/SAH binding pocket was highly conserved, while the substrate binding sites were specific to proteins in different groups [37]. The domains of the COMTs in the same group had similar quantities and sizes of introns (Fig. 1b). For example, one Dimerization domain in all the groups was on the one exon. This situation of gene structure was different from Methyltransf_ 2 domain. In the Group Ia and Group Ib, VcCOMTs had Methyltransf_2 domain distributed by two exons which had one intron in the middle except VcCOMT6, VcCOMT61 and VcCOMT83. They had the Methyl-transf_2 domian distributed on three exons with two introns. Although the Methyltransf_2 domain also distributed on three exons with two introns in the Group II, the structure of domain was different from VcCOMT6, VcCOMT61 and VcCOMT83. The second exon in the Group II was very small. Different from the reported Populus trichoarpa that COMTs has only one Methyltransf_2 domain in one sequence, some blueberry COMTs had two or three Methyltransf_2 domains in one sequence [38]. However, the gene structure of Methyltransf_2 domain in VcCOMTs was similar in sequences in the same group. The differences in protein sequences among the blueberry COMTs were analyzed by using Multiple Expectation Maximization for Motif Elicitation (MEME) online tools. In all, 11 motifs were found in the blueberry COMT sequences [35]. Most of the motifs were same in two groups and they were in the same order in COMT sequences within the same group (Fig. 1c). Motifs 10 was special to Group I and only Group II had motif 8. The similar genetic structures and conserved motifs within groups supported the reliability of the phylogenetic structure groupings. The Tandem (TD) events and collinearity analysis of VcCOMTs According to previous studies, a chromosomal region 150-200 kb in length that contains two or more genes is evidence of a tandem [33]. Nine pairs of tandem gene pairs were found in the blueberry genome by MCscanX (VcCOMT1/VcCOMT2, VcCOMT4/ VcCOMT5, VcCOMT25/VcCOMT26, VcCOMT43/ VcCOMT44, VcCOMT52/VcCOMT53, VcCOMT58/ VcCOMT59, VcCOMT62/VcCOMT63, VcCOMT63/ VcCOMT64, VcCOMT75/VcCOMT76). Ninety-two COMTs were mapped to the 48 chromosomes exhibited evidence of 9 TD events on blueberry pseudochromosomes ( Fig. 2a) [39]. Ninety-two COMTs allowing for the detected of 83 collinear relationship (Fig. 2b). The line of same colour between two COMT genes on the chromosomes indicates collinearity. 
The collinearity of VcCOMTs among the different homologous chromosomes existed in different forms. The first form was one VcCOMT on the one chromosome while to the other VcCOMT was on the other chromosome just like group b, c, d, g (Fig. 2b). The other was one VcCOMT on the one chromosome to some VcCOMTs on the other chromosome just like VcCOMT11, VcCOMT12, VcCOMT14, VcCOMT15 had a collinearity to the VcCOMT3, respectively. This reasons for this phenomenon might be attributed to its allopolyploid genome [34]. Most of the events were located in highly duplicated blocks and were identified as WGD or segmental duplication events with MCScanX. This result indicated that the VcCOMT gene family has expanded and evolved through genome-wide duplication. Analysis of VcCOMT gene promoters in blueberry The start of transcription is a key stage of gene expression, and an important event in this stage is the interaction between RNA polymerase and the promoter. The structure of the promoter affects the binding affinity of RNA polymerase, thus affecting the level of gene expression [32]. We analyzed the cis-acting elements on blueberry COMT genes (Fig. 3). The results for the blueberry COMTs were similar to the results for Catalpa bungei COMTs [33]. According to the function, the cis-acting elements from COMTs could be divided into four classes. Light response-related motifs constituted the majority of the cis-acting elements on the blueberry COMTs and were distributed in all groups. This finding indicated that the COMT genes in blueberry may be controlled by light. Many cis-acting elements related to plant growth and development were found in the promoter region such as AACA motif and GCN4 motif related to the endosperm, RY-element related to seed-specific regulation, circadian which was a regulatory element involved in circadian control and MSA-like element related to cell cycle regulation. We found that there are some stressrelated cis-regulatory elements (CREs) and some hormone related CREs in the promoter region of COMTs such as LTR, ARE, TC-rich repeats and others related to stress response, ABRE, ERE, TGA-BOX, TCA, as-1 which related to hormone. And MYB binding sites, MYC binging sites and W-box were also found in the promoter region which were transcription factor binding sites with MYB, bHLH and WRKY protein. The promoters of VcCOMTs within the same subgroup were similar. Often, the sequences with higher similarities and higher collinearity on the homologous chromosomes, the types and even orders of the cis-acting elements of them were similar, just like VcCOMT59 and VcCOMT64, VcCOMT34 and VcCOMT66, VcCOMT60 and VcCOMT65 in the Group Ia, the VcCOMT26 and VcCOMT13, VcCOMT22 and VcCOMT9 in the Group Ib, the VcCOMT77 and VcCOMT82, VcCOMT78 and VcCOMT75, VcCOMT16, VcCOMT71 and VcCOMT72 in the Group II, especially within the paralogous pairs such as VcCOMT57 and VcCOMT92, VcCOMT85 and VcCOMT91, VcCOMT31 and VcCOMT80, VcCOMT37 and VcCOMT39. Similar regulatory elements within sequences may greatly influence similarities among gene expression patterns and gene functions. A large majority of VcCOMTs had ABRE, related to the abscisic acid and TCA motif related to the salicylic acid. 
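The promoter survey above ultimately reduces to tallying predicted cis-acting elements per gene and per functional class. The sketch below shows that bookkeeping on a hypothetical annotation table; the element names, class assignments, and input layout are assumptions for illustration, not the actual prediction output used in the study.

```python
import pandas as pd

# Hypothetical promoter annotation: one row per predicted cis-acting element occurrence.
elements = pd.DataFrame({
    "gene":    ["VcCOMT59", "VcCOMT59", "VcCOMT64", "VcCOMT64", "VcCOMT34", "VcCOMT34"],
    "element": ["ABRE", "TCA-element", "ABRE", "MYB", "ABRE", "W-box"],
    "class":   ["hormone", "hormone", "hormone", "TF binding site", "hormone", "TF binding site"],
})

# Count occurrences of each element per gene (the kind of matrix behind element-distribution figures).
per_gene = pd.crosstab(elements["gene"], elements["element"])
print(per_gene)

# Fraction of genes carrying at least one element of each functional class.
has_class = elements.drop_duplicates(["gene", "class"]).groupby("class")["gene"].nunique()
print(has_class / elements["gene"].nunique())
```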
The unique regulatory elements in different subgroups, may underlie the different functions of the genes in different subgroups, for example, GCN4, related to the endosperm, main distributed on VcCOMTs which were in Group Ib and Group II, while the circadian related to the circadian rhythm mainly distributed in Group Ia and Group Ib. Evolutionary analysis of COMT genes in blueberry and other species Four hundred twenty-five COMT sequences were identified in 16 plant genomes including one Chorolphyta, one Charophyte green algea (CGA) and 14 land plants by Hidden Markov Model (HMM) search (Fig. 4a). The CGA were the closest living relatives of land plants [40], but there was no putative COMT searched in Chara braunii. In the genome of green algae Chlamydomonas reinhardtii, three putative COMTs were identified in it and they did not have complete Methyltransf_2 domain. Two of them had other domain Dimerisation2 (PF16864.5) which was different from land plant in the Anthoceros angustus and Physcomitrella patens which didn't have vascular was 10 and 5 times higher than those in the Anthoceros angustus and Physcomitrella patens, respectively. The percentage of putative COMTs in the total number of genes as well as the number of COMTs per megabase of genome in Selaginella moellendorffii were found higher than in Bryophyta. They indicated that the expansion was not related necessarily to an increase in the genome size but could be determined by the development of new functions, the deposition of lignin and the existence of abundant flavonoids [41]. The number of COMTs in diploid apple and that in diploid grape was approximately half of that in tetraploid blueberry (Table 1). In the apple genomes, the percentage of putative COMTs was almost equal in the total number of genes with blueberry VcCOMTs while it was a two-fold decline in the grape genome. Rhododendron williamsianum and Vaccinium corymbosum were used to construct a phylogenetic tree, and the COMTs from the alga Chlamydomonas reinhardtii were used as outgroups (Fig. 4b). The phylogenetic analysis indicated that the COMTs were divided into two clusters. The cluster I was red which was contained COMTs from all the 14 land species. The cluster II (clade is green) didn't have COMTs in the Anthoceros angustus, Physcomitrella patens, which indicating that they might be orthologous genes originating from a single ancestral gene but a new function of COMTs occurred from Selaginella moellendorffii and led to gene differentiation [49,50]. COMTs in Selaginella moellendorffii, were not clustered together with those in angiosperms, and the gymnosperm species in cluster II. The results suggested that COMT had been recruited for S lignin biosynthesis independently in angiosperms, the gymnosperm and Selaginella moellendorffii [51]. The collinearity analysis, gene duplication events and Ka/ Ks analysis of COMTs in blueberry and other plant species To infer the evolutionary mechanism of COMT genes in tetraploid blueberry, we analyzed the collinearity among Vitis vinifera which indicated a palaeo-hexaploid ancestral genome for many dicotyledonous plants [46], Actinida chinensis which belongs to the Actindiaceae family in Ericales [52], an early divergent lineage within asterids and Rhododendron williamsianum which represented species-rich groups within Ericaceae [48] and Vacciniun corymbosum (Fig. 4c). The COMTs on homoeologous chromosomes that showed collinearity are indicated in the same colour in different plants. 
Two COMTs in Actinidia chinensis had one orthologous region in Vitis vinifera, and one COMT in Actinidia chinensis had two orthologous regions in Vitis vinifera, indicating that these orthologous pairs may already have existed before the ancient paleohexaploidy (γ) event. The COMTs of Actinidia chinensis and Vaccinium corymbosum showed higher collinearity. The most common correspondence between COMTs in Actinidia chinensis and Vaccinium corymbosum was two COMTs in Actinidia chinensis to one COMT in Vaccinium corymbosum. Some of the collinear relationships between the two genomes were one-to-one, indicating that some COMTs were lost during evolution. One COMT in Actinidia chinensis showed collinearity only with Vaccinium corymbosum among the other species, as shown in orange; these COMTs might have similar functions. Interestingly, the COMTs in Rhododendron williamsianum had the highest collinearity with the COMTs in Vaccinium corymbosum, and the correspondences between them were more complex: up to 8 COMTs in Vaccinium corymbosum showed collinearity with a single COMT in Rhododendron williamsianum. COMT duplicated gene pairs were identified in the four plants with the DupGen_finder software. There were five categories of duplicated gene pairs: WGD, TD, proximal duplication (PD), transposed duplication (TRD), and DSD pairs. Among these categories, DSD had the most duplicated gene pairs across the four plant species. In blueberry, the percentage of gene pairs derived from WGD was higher than the percentages derived from the other processes. Grape had nearly the same numbers of PD-, TD- and TRD-derived gene pairs, so these three categories of events might have played almost the same roles in the evolution of grape. The pattern for azalea was similar to that for grape; in addition, DSDs played a major role in the evolution of azalea, and TDs and TRDs might have played similar evolutionary roles. DSDs and WGDs were the major drivers of evolution in blueberry and kiwi fruit. The Ks values between the homologous genes were used to estimate the time of divergence of the diploid progenitors from their most recent common ancestor (MRCA), which was determined to be approximately 0.94 to 1.02 million years ago. According to the equation T = Ks/(2λ) (λ, synonymous substitution rate; λ = 1.3e-8) [34], 42 COMT pairs in blueberry were derived from WGD before the estimated time of divergence of the diploid progenitors from their MRCA, while 4 were derived after that. The selection pressures on the COMTs in the four plant species were explored based on the Ka/Ks ratios: a Ka/Ks ratio greater than 1 indicates positive selection, a ratio equal to 1 indicates neutral evolution, and a ratio less than 1 indicates purifying selection at a low evolutionary rate. The Ka/Ks values of the COMT pairs in the four plant species were all less than 1 (Fig. 4d).

Gene expression analyses of differentially expressed COMTs in blueberry fruits

Twenty-two VcCOMTs that were differentially expressed during fruit development according to the transcriptome analysis (|log2(fold change, FC)| > 1, P value < 0.05) were selected for qRT-PCR at different fruit development stages. Based on the lignin content, we selected three genes related to lignin changes during fruit development: VcCOMT62, VcCOMT40 and VcCOMT92 (Fig. 5, Additional file 4: Table S3).
The expression trends of the VcCOMTs and the variation in lignin content were similar early in fruit development: both increased from s1 to s2 and then decreased, with s2 being the highest point. The trend of VcCOMT62 was consistent with that of lignin throughout fruit development, but its relative expression was very low. The relative expression of VcCOMT40 and VcCOMT92 was comparatively high during fruit development; however, the stages at which VcCOMT40 and VcCOMT92 expression was lowest differed from the stage with the lowest lignin content. VcCOMT40 and VcCOMT92 are located on homologous chromosomes and have high sequence similarity in the collinear gene region. After a pair of primers was designed in the collinear region shared by VcCOMT40 and VcCOMT92, the resulting expression trend was consistent with that of lignin during fruit development. According to the results of multiple sequence alignment (Fig. 6), VcCOMT40 and VcCOMT92 contain the same substrate-binding sites as COMTs that can catalyse caffeic acid and 5-OH coniferaldehyde [37].

Fig. 5 Lignin content and relative quantification of the VcCOMTs during s1-s6 fruit development. The first panel is a line chart of lignin content (ordinate, relative lignin content; abscissa, fruit development stage); the remaining panels are relative-quantification histograms for the 22 VcCOMTs (abscissa, fruit development stage; ordinate, relative gene expression).

Fig. 6 Multiple sequence alignment of VcCOMT40 and VcCOMT92 with other lignin-related COMTs. Green: SAM binding; blue: substrate binding; orange: catalytic residues.

Discussion

COMTs can act on various substrates, such as phenylpropanoids, flavonoids, and alkaloids; they are therefore ubiquitous in plants because of their importance in plant adaptation to the environment and to adversity [30,53]. As long ago as the last century, scientists began to take an interest in the roles of COMT genes in plants [54,55]. The publication of different plant genomes has enabled analyses of COMT family genes to be carried out in several species [38,56,57]. Blueberry has been widely studied because of its large amounts of flavonoids. The tetraploid blueberry genome was released in 2019, and 92 COMTs were identified here and named VcCOMT1-VcCOMT92 based on their chromosome positions. According to phylogenetic and gene structure analyses, these 92 COMT genes could be divided into three groups, named Group Ia, Group Ib and Group II. The sequence and structural similarities were greater within the same branch than between branches. Based on analysis of the conserved motifs, the three groups of COMTs can be roughly divided into two categories [20]. All members of Group Ia and Group Ib contain motif 10, whereas the other group does not. Motif 10 lies approximately 15-50 amino acids upstream in the VcCOMT sequence and forms the back wall of the binding pocket [36,37,57,58]. Perhaps because of their different binding substrates, the VcCOMT sequences of the two categories differ from each other. We identified motifs that are highly conserved in COMTs. Some residues in four motifs (motif I: DVGGG, motif II: DLPHV, motif III: GDMF, and motif IV: VPKGDAIFLKWI) are related to the SAM/SAH binding site [58]. Motif 2 of the VcCOMTs contained motif I (DVGGG) and part of motif II (DLPHV), while motif 1 of the VcCOMTs contained motif III (GDMF) and motif IV (VPKGDAIFLKWI) (Additional file 2: Fig. S1) [28].
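The four conserved SAM/SAH-binding motifs listed above lend themselves to a simple sequence check. The following is a minimal, illustrative sketch in Python: the two protein sequences are hypothetical placeholders rather than real VcCOMT sequences, and a real analysis would use MEME/CD-search output and tolerate mismatches instead of relying on exact substring matching.

```python
# Minimal sketch: flag which of the conserved SAM/SAH-binding motifs reported
# for COMTs (motif I: DVGGG, II: DLPHV, III: GDMF, IV: VPKGDAIFLKWI) occur in a
# protein sequence. The sequences below are hypothetical placeholders, not real
# VcCOMT sequences; real analyses would tolerate mismatches (e.g. via MEME).

CONSERVED_MOTIFS = {
    "motif_I_DVGGG": "DVGGG",
    "motif_II_DLPHV": "DLPHV",
    "motif_III_GDMF": "GDMF",
    "motif_IV_VPKGDAIFLKWI": "VPKGDAIFLKWI",
}

def scan_motifs(protein_seq):
    """Return the 0-based position of each conserved motif, or None if absent."""
    seq = protein_seq.upper()
    return {name: (seq.find(m) if m in seq else None)
            for name, m in CONSERVED_MOTIFS.items()}

if __name__ == "__main__":
    # Hypothetical toy sequences for illustration only.
    toy_proteins = {
        "VcCOMT_example_A": "MGSTAEDVGGGTAFDLPHVKLQGDMFAAVPKGDAIFLKWILNDE",
        "VcCOMT_example_B": "MGSTAEDVGGGTAFKLQAAGDMFAAVLNDE",
    }
    for gene, seq in toy_proteins.items():
        hits = scan_motifs(seq)
        present = [name for name, pos in hits.items() if pos is not None]
        print(f"{gene}: {len(present)}/4 conserved motifs found -> {present}")
```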
Gene duplication probably contributes to the evolution of species and to their adaptation to their environments [59]. In the blueberry genome, candidate VcCOMTs were analyzed according to the collinearity of the homoeologous chromosomes with MCScanX [60]. The numbers of VcCOMTs with collinearity differed among chromosomes (Fig. 2b). The many-to-one relationships may exist because some copies of COMT on different chromosomes were lost under the influence of the environment during the evolution of blueberry, or because some redundant genes with incomplete domains are present. The one-to-many relationships may be the result of distinct subfunctionalization and neofunctionalization. In the blueberry genome, COMT sequences with collinearity and high sequence similarity on homologous chromosomes also had similar promoter sequences. The cis-regulatory elements present in the promoter regions are the sites at which other proteins bind to the COMT genes and play a central role in regulating gene transcription. There were a large number of light response-related regulatory elements, rhythm elements and regulatory elements that promote endosperm and seed growth, which may be related to plant growth and lignin synthesis [61,62]. In the promoter regions of the blueberry COMT genes, some regulatory elements related to hormones and stress were also found, which is consistent with previous studies: when plants are stressed or treated with external hormones, the content of COMTs increases [63][64][65][66]. In this study, different numbers of COMTs were identified in 15 plant species ranging from algae to land plants (Table 1). The evolution of COMTs from algae to land plants led to a change in the Dimerization domain (Additional file 1: Table S2, Additional file 3: Fig. S2). Furthermore, we found that the number of COMTs in Selaginella moellendorffii was greater than the numbers in other dicotyledonous species but less than the numbers in Vitis vinifera, Malus x domestica and Vaccinium corymbosum. The development of vascular tissue underlies the differences between Selaginella moellendorffii and the bryophytes. Lignin is the main component of vascular tissue and provides plants with the structural support to stand upright. COMTs are important methyltransferases in lignin biosynthesis that methylate lignin components similar to the S units in Selaginella moellendorffii [51]. The present research suggests that the evolution of lignin in land plants correlates with the evolution of COMT genes [38]. Comparison of the collinearity of the VcCOMTs in blueberry with the COMTs in the other plant species showed that the VcCOMTs with collinearity to other COMTs were almost the same across the different species. Some COMT collinearity gene pairs between blueberry and kiwi fruit showed a form of one COMT gene in blueberry to two COMT genes in kiwi fruit, which perhaps reflects the two rounds of WGD that kiwi fruit has undergone [39,47], whereas the collinearity pairs between blueberry and azalea showed a one-to-many form, indicating that these COMT genes were duplicated after the differentiation of Vaccinium corymbosum and Rhododendron williamsianum. Gene duplication has five forms: DSD, PD, TRD, TD, and WGD [39]. Different gene duplication patterns have had different effects on the expansion of the COMT family in different plant species. DSD was the main feature of evolution in the four plant species except grape.
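The Ks- and Ka/Ks-based arguments used in this section rest on two simple calculations: the divergence-time estimate T = Ks/(2λ) applied in the Results (with λ = 1.3e-8 synonymous substitutions per site per year) and the standard interpretation of the Ka/Ks ratio. The minimal sketch below illustrates both; the gene pairs and their Ka and Ks values are hypothetical placeholders, not measured VcCOMT values.

```python
# Minimal sketch of the two calculations used for the duplication analysis:
#   divergence time  T = Ks / (2 * lambda), with lambda = 1.3e-8 synonymous
#   substitutions per site per year (as in the Results), and the standard
#   Ka/Ks interpretation (>1 positive, ~1 neutral, <1 purifying selection).
# The gene-pair values below are hypothetical placeholders.

LAMBDA = 1.3e-8  # synonymous substitution rate per site per year

def divergence_time_mya(ks, lam=LAMBDA):
    """Divergence time in million years estimated from a pairwise Ks value."""
    return ks / (2.0 * lam) / 1e6

def selection_class(ka, ks, tol=0.05):
    """Classify selection pressure from the Ka/Ks ratio."""
    if ks == 0:
        return "undefined (Ks = 0)"
    ratio = ka / ks
    if abs(ratio - 1.0) <= tol:
        return "neutral evolution"
    return "positive selection" if ratio > 1.0 else "purifying selection"

if __name__ == "__main__":
    # (gene pair, Ka, Ks) -- hypothetical example values.
    pairs = [("VcCOMT_a/VcCOMT_b", 0.05, 0.20),
             ("VcCOMT_c/VcCOMT_d", 0.30, 0.25)]
    for name, ka, ks in pairs:
        print(f"{name}: Ka/Ks = {ka/ks:.2f} ({selection_class(ka, ks)}), "
              f"T ~ {divergence_time_mya(ks):.1f} Mya")
```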
Previous studies have revealed that COMT genes have tandem duplicates on all of the homoeologous chromosomes [34]. In the current study, TD of VcCOMTs was not identified on all of the homoeologous chromosomes by MCScanX, and fewer VcCOMTs arose through TD than through WGD; instead, amplification of the COMT genes in the blueberry genome occurred mainly through DSD and WGD. In contrast, the main drivers of gene expansion in Populus are WGD and TD [38]. In citrus, the numbers of TD and WGD events are similar [35]. COMTs have similar gene copy numbers in maize, rice and foxtail millet, and gene expansions in these genomes are generated mainly by TD and segmental duplication [32]. The WGD Ks of the kiwi fruit COMTs is less than the Ad-β mean Ks of Actinidia chinensis, suggesting that the WGD of the kiwi fruit COMTs occurred before the shared Ad-β WGD. The WGD Ks of the tetraploid blueberry COMTs is also less than the Ad-β mean Ks of diploid blueberry, suggesting that the WGD of the tetraploid blueberry VcCOMTs occurred before the shared Ericales Ad-β WGD event. The WGD Ks of the Rhododendron williamsianum COMTs lies between the Ks of the Ad-β event and the Ks of the At-γ event, suggesting that the WGD of Rhododendron williamsianum occurred between these two shared events. The Ka/Ks ratios of the five gene duplication patterns of the COMTs from the four plant species were less than 1, indicating that the COMTs have experienced strong purifying selection [48]. During fruit development, the lignin content of the fruit first increased and then decreased. This phenomenon may be related to the formation of lignin during fruit development: in the early stages the fruit swells and hardens, and the lignin content becomes high, whereas from the green fruit stage to the colour-turning stage the fruit becomes soft and the lignin content shows a downward trend [67]. Based on the VcCOMT differential expression data from RNA-seq, 22 VcCOMTs were selected for detection of gene expression by qRT-PCR. Three genes showed expression trends similar to the lignin trend during fruit development. Although VcCOMT62 followed the same trend as lignin, its relative expression during fruit development was very low, indicating that it is probably not a main gene related to lignin content during fruit development. The relative expression of VcCOMT40 and VcCOMT92 during fruit development was almost the highest among all the VcCOMTs, but the expression trend of each single gene differed slightly from that of lignin. Because of their high sequence similarity, the primers used to assess individual genes had been designed in regions where their sequences differ; to capture their combined contribution, we designed a pair of primers in the homologous region shared by four VcCOMT genes with very high similarity (VcCOMT38, VcCOMT57, VcCOMT40 and VcCOMT92). When qRT-PCR was performed again with these primers, the expression trend was consistent with that of lignin during fruit development, suggesting that more than one gene is responsible for the biosynthesis of lignin.

Conclusions

Here, we identified 92 COMT genes from blueberry and 425 COMT genes from 15 other species. According to the phylogenetic analysis of the COMTs, we divided the COMTs into two groups, which indicated the existence of two ancestor genes. DSD and WGD were revealed to be the major forces of blueberry evolution.
The Ka/Ks ratios of the gene duplication patterns for the COMTs from the four plant species were less than 1, indicating that the COMTs have experienced strong purifying selection. According to the qRT-PCR results for 22 VcCOMTs, VcCOMT40 and VcCOMT92 were highly expressed and may play important roles in the synthesis of lignin in blueberry fruit. The results of this study will build a foundation for breeding blueberry cultivars with higher fruit firmness and longer shelf life.

Plant materials

The samples were fruits of 'Northland' blueberry plants at 6 stages of growth and development, obtained from the blueberry germplasm resource garden of Jilin Agricultural University. Stages 1 to 3 were sorted by increasing size (stage 1, 2-3.5 mm in diameter; stage 2, 4-7 mm; stage 3, 7-9 mm). Stages 3 to 6 were sorted by fruit color (stage 3, white blue; stage 4, 25-50% red skin; stage 5, predominantly purple skin with some red; stage 6, entirely dark blue with soft texture) [67] (Fig. 7). The samples were taken from three different robust trees, frozen in liquid nitrogen and stored at −80°C.

Fig. 7 The stages of blueberry fruit development, named s1-s6.

Identification of COMT genes in the genomes of blueberry and other plants

The draft blueberry genome was downloaded from the CoGe genome database (https://genomevolution.org/coge/SearchResults.pl?s=Vaccinium&p=genome). To identify complete COMT genes in the blueberry genome, one characterized sequence from Arabidopsis thaliana (AT5G54160) and 36 identified sequences from Populus trichocarpa were used as a set of queries in a BLASTP search (E < 1e-5). All the retrieved sequences were scanned for the specific domain (PF00891) with HMM in Pfam (http://pfam.xfam.org). Then, each candidate sequence was analysed with the online program CD-search (https://www.ncbi.nlm.nih.gov/Structure/bwrpsb/bwrpsb.cgi) to identify the complete domains. We further identified COMT sequences in Chlamydomonas reinhardtii, Anthoceros angustus, Physcomitrella patens, Selaginella moellendorffii, Ginkgo biloba, Amborella trichopoda, Oryza sativa, Arabidopsis thaliana, Populus trichocarpa, Malus domestica, Rubus occidentalis, Vitis vinifera, Actinidia chinensis and Rhododendron williamsianum by HMM search.

Phylogenetic, domain motif and gene structure analyses for the predicted VcCOMT genes

First, the protein sequences of the VcCOMTs from blueberry and the other species were subjected to multiple sequence alignment, and a maximum-likelihood (ML) tree was constructed with 1000 bootstrap replicates in MEGA 7.0. The domain sequences of the VcCOMTs from blueberry were predicted with CD-search. TBtools (https://github.com/CJ-Chen/TBtools) was used to perform exon/intron structure analysis for the VcCOMT genes with the mRNA sequences and genomic sequences [68]. The MEME suite (http://meme-suite.org/tools/meme) was used to analyze the motifs of the VcCOMT sequences, with the number of output motifs set to 11.

Analysis of collinearity between COMTs from blueberry and COMTs from other species

Collinearity analysis of the VcCOMTs was performed with MCScanX (https://github.com/tanghaibao/jcvi/wiki/MCscan-(Python-version)). The same software was used to analyse the collinearity of COMTs between kiwi fruit and grape, blueberry and azalea, and blueberry and kiwi fruit.

Analysis of COMT gene promoters in blueberry

The elements in the promoter fragments of the VcCOMT genes (1500 bp upstream of the translation initiation sites) were identified using the online program PlantCARE (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/).
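As a small bookkeeping aid for the promoter analysis described above, the sketch below tallies PlantCARE-style cis-acting elements into the four broad functional classes used in the Results (light response, growth and development, stress, and hormone response). The element-to-class mapping and the per-gene element lists are simplified, hypothetical examples and do not reproduce the actual PlantCARE output for the VcCOMT promoters.

```python
# Minimal sketch: group PlantCARE-style cis-acting element hits per promoter
# into the four broad functional classes used in the Results. The class map
# and the per-gene element lists are simplified, hypothetical examples.
from collections import Counter

ELEMENT_CLASS = {
    # light response (example names; assumed, not taken from the paper)
    "Box 4": "light", "G-box": "light", "GT1-motif": "light",
    # growth and development
    "AACA_motif": "development", "GCN4_motif": "development",
    "RY-element": "development", "circadian": "development", "MSA-like": "development",
    # stress response
    "LTR": "stress", "ARE": "stress", "TC-rich repeats": "stress",
    # hormone response
    "ABRE": "hormone", "ERE": "hormone", "TGA-element": "hormone",
    "TCA-element": "hormone", "as-1": "hormone",
}

def classify_promoter(elements):
    """Count elements per functional class; unknown names fall into 'other'."""
    return Counter(ELEMENT_CLASS.get(e, "other") for e in elements)

if __name__ == "__main__":
    # Hypothetical promoter annotations for two genes.
    promoters = {
        "VcCOMT40": ["Box 4", "G-box", "ABRE", "TCA-element", "ARE", "circadian"],
        "VcCOMT92": ["G-box", "GT1-motif", "ABRE", "LTR", "GCN4_motif"],
    }
    for gene, elems in promoters.items():
        print(gene, dict(classify_promoter(elems)))
```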
Gene duplication and calculation of Ka and Ks for COMTs from four species

Gene duplication events in blueberry, grape, azalea and kiwi fruit were identified with DupGen_finder (https://github.com/qiao-xin/DupGen_finder), and Ka, Ks and the Ka/Ks ratio were calculated using KaKs_Calculator with the GLWL model; gene pairs with a P-value < 0.05 were retained.

Expression analysis of VcCOMTs in blueberry by qRT-PCR

Twenty-two VcCOMTs were selected for qRT-PCR. The primers for these genes were designed using Primer Premier 5.0. Total RNA was isolated from s1-s6 fruits by the CTAB isolation method. The RNA was checked on a 1.2% agarose gel under UV light for the absence of smearing before concentration measurement by spectrophotometry. One microgram of total RNA was used to synthesize cDNA with a PrimeScript™ RT Reagent Kit with gDNA Eraser (TaKaRa, Japan) following the manufacturer's instructions. The detailed experimental methods followed the instructions for SYBR Premix Ex Taq (Tli RNase H Plus). VcCOMT gene expression was analyzed on an ABI StepOnePlus Real-Time Quantitative PCR System (Applied Biosystems, Foster City, CA, USA). The thermal cycling parameters were the same as those used by Chen [69]. The EIF gene of blueberry was amplified with the EIFF and EIFR primers (Additional file 1: Table S2) and used as a control to normalize the expression of the VcCOMTs [70]. The real-time amplification data were analyzed by the Chen method, and a 40-cycle melting curve analysis was performed to ensure the reliability of the expression results. The results are expressed as the normalized relative expression levels (2^−ΔCT) of the genes in the various samples [69]. All experiments were run in triplicate.
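The relative expression levels reported above follow the 2^−ΔCT normalization against the EIF reference gene. The sketch below shows this calculation for hypothetical Ct values; replicate averaging, amplification-efficiency correction, and statistics are deliberately omitted.

```python
# Minimal sketch of the 2^(-dCT) relative-expression calculation used for the
# qRT-PCR data: each target gene Ct is normalized to the EIF reference gene
# measured in the same sample. The Ct values below are hypothetical placeholders.

def relative_expression(ct_target, ct_reference):
    """Return 2^-(Ct_target - Ct_reference) for one sample."""
    return 2.0 ** (-(ct_target - ct_reference))

if __name__ == "__main__":
    # Hypothetical mean Ct values for one fruit-development stage.
    ct_eif = 21.0
    targets = {"VcCOMT40": 24.5, "VcCOMT92": 25.1, "VcCOMT62": 30.2}
    for gene, ct in targets.items():
        print(f"{gene}: relative expression = {relative_expression(ct, ct_eif):.4f}")
```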
Update on ocular graft-versus-host disease

Ocular graft-versus-host disease (oGVHD) occurs as a complication following hematopoietic stem cell transplantation and is associated with significant ocular morbidity resulting in a marked reduction in the quality of life. With no current consensus on treatment protocols, management becomes challenging, as recurrent oGVHD is often refractory to conventional treatment. Most authors now diagnose and grade the disease based on criteria provided by the National Institutes of Health Consensus Conference (NIH CC) or the International Chronic oGVHD (ICCGVHD) consensus group. This article provides an insight into the diagnostic criteria of oGVHD, its classification, and clinical severity grading scales. The inflammatory process in oGVHD can involve the entire ocular surface, including the eyelids, meibomian glands, cornea, conjunctiva, and lacrimal system. The varied clinical presentations and the treatment strategies employed to manage them are discussed in the present study. Recent advances in ocular surface imaging in oGVHD patients, such as the use of meibography and in vivo confocal microscopy, may help in early diagnosis and prognostication of the disease. Research on tear proteomics and the identification of novel potential tear biomarkers in oGVHD patients is an exciting field, as these may help in objectively diagnosing the disease and monitoring the response to treatment.

Graft-versus-host disease (GVHD) often limits the success of allogeneic hematopoietic stem cell transplantation (allo-HSCT) due to its morbidity and mortality in the posttreatment period. Ocular GVHD (oGVHD), the most common long-term complication, has a varying spectrum of disease severity, mediated by immune dysregulation and tissue inflammation with single- or multisystem involvement resulting in tissue fibrosis and organ dysfunction. [1] Characteristic diagnostic features involving skin, mouth, gastrointestinal (GI) tract, lung, fascia and genitalia, eyes, nails, scalp, or hair have been observed. [2] Clinical manifestations of systemic acute GVHD (aGVHD) mostly involve the skin, GI tract, and liver. Acute oGVHD is a relatively rare manifestation of aGVHD, with an incidence of about 7.2% among post-allo-HSCT patients. [3,4] Chronic GVHD (cGVHD) is a complex immune-mediated disorder that can target multiple organs, usually manifesting in the first year after HSCT, and may occur in up to 30-70% of the patients undergoing HSCT. [5] The incidence of chronic oGVHD has been reported to be about 40-60%, [6,7] with lower incidences of only about one-third being affected as noted by some recent Asian studies.
[8][9][10] Up to 60-90% of the patients with chronic GVHD may show oGVHD manifestations. [11][12][13] Risk factors for oGVHD include male recipients of female donors, [14] skin, [7,13,14] oral mucosa, [7,13] liver, [15] or GI tract involvement during acute or chronic stages of GVHD and lung involvement in cGVHD. [10] Preexisting diabetes, [10] recipients of transplants from Ebstein-Barr Virus (EBV) positive donors, Asian and other ethnicities compared to Caucasian ethnicity were more likely to develop oGVHD. [15] It has been found that the incidence of severe dry eyes in cGVHD is higher in recipients of peripheral blood stem cell transplantation (PBSCT) or bone marrow transplantation (BMT) in comparison to those receiving cord blood transplantation (CBT). [16] This review is a comprehensive overview of the current understanding of the oGVHD. A PubMed search was conducted using the keywords: GVHD, transplant, HSCT, BMT, PBSCT, dry eye disease (DED), dry eye. Featured articles from the year 1983 till May 2020 were included. Current Perspectives on oGVHD Diagnostic Criteria Definition and Grading Historically, post-allo-HSCT GVHD classification was deliberated as aGVHD when onset was within the first 100 days of HSCT or cGVHD when it occurred thereafter. [1] To standardize the tools on reporting cGVHD, the 2005 National Institutes of Health (NIH) Consensus Development Projects on Criteria for Clinical Trials in Chronic GVHD issued guidelines for standardized diagnostic criteria, severity scoring, interpretation of histopathology reports, development and validation of biomarkers, response criteria, designing clinical trials, ancillary therapy, and supportive care. The NIH Consensus Conference (NIH CC) classified GVHD based on differences in organ involvement rather than the period of symptoms manifestation whereas in aGVHD manifestation seen after the first 100 days, persisting from a prior episode and occurring as a recurrence or of late-onset were also included. The broad category of cGVHD included classic GVHD and overlap syndrome. Overlap syndrome was characterized by the occurrence of aGVHD and cGVHD symptoms together. [17][18][19] As per the NIH criteria, diagnosis of cGVHD requires at least one diagnostic manifestation of GVHD or a distinctive GVHD manifestation supported by biopsy, laboratory tests, or radiology in the same or another organ. [19] The revised 2014 NIH criteria changed little in terms of 2005 diagnostic criteria but addressed certain areas of controversy such as overlap syndrome, distinguishing active GVHD features from irreversible "fixed" deficits, and also revised the diagnostic criteria for certain organs including the eye. Some authors have recommended that the diagnosis of oGVHD alone should be enough to confirm cGVHD. [6,20] Risk factors for cGHVD include human leukocyte antigen (HLA) mismatch or an unrelated donor, older patient or donor age, female donor for a male recipient, donor lymphocyte infusion, mobilized peripheral blood cell graft, and previous aGVHD. [21] Definition of oGVHD diagnostic criteria The two widely acknowledged diagnostic criteria for oGVHD are as follows: NIH CC 2014 criteria: The diagnostic criteria were based on Schirmer's test and slit-lamp examination [ Table 1]. 
[2] The International Chronic oGVHD (ICCGVHD) consensus group diagnostic criteria are based on scores derived from the Ocular Surface Disease Index (OSDI), Schirmer's test without anesthesia, corneal fluorescein staining (CFS), conjunctival injection, and the presence of systemic GVHD. The diagnostic categories include no oGVHD, probable oGVHD, and definite oGVHD [Table 2]. [20] While a comparative study of the newer NIH 2014 criteria and the ICCGVHD criteria found moderate agreement between the two, the ICCGVHD criteria were noted to be better at differentiating oGVHD patients from non-oGVHD DED, owing to their more stringent requirements, which also consider the status of systemic GVHD. [22] It is interesting to note that the study reporting the validation of the ICCGVHD criteria for oGVHD, in comparison to Best Clinical Practices (BCPs), found that BCPs tended to over-diagnose milder cases of oGVHD, while there was better agreement between the two at higher disease severity (BCP was defined as oGVHD evaluation by a highly trained single expert in ophthalmology with extensive [>20 years] clinical experience in evaluating oGVHD patients, based on comprehensive clinical examination). [23] Other diagnostic criteria that have been used in studies on GVHD include the following. The Japanese Dry Eye Society criteria for diagnosing dry eye, modified in 2016, require only the presence of an unstable tear film (tear film breakup time [TFBUT] <5 s) and subjective symptoms (in contrast to the 2006 criteria, which required positive results in ≥2 of the following categories: subjective symptoms, abnormalities of tears, and epithelial damage). [24] All published data from Japan on oGVHD have employed the 2006 criteria, and the 2016 version is yet to be used in published literature describing oGVHD patients. [25] An extension of the Tear Film and Ocular Surface Society Dry Eye Workshop II (TFOS DEWS II) criteria, originally meant for conventional DED, has recently been advocated for diagnosing oGVHD in patients undergoing allo-HSCT, [26] according to which any new positivity or worsening of existing disease after allo-HSCT may be considered sufficient for diagnosing oGVHD. Diagnosis requires ocular surface discomfort symptoms with an OSDI score ≥13 along with any one of the following: TFBUT <10 s; tear osmolarity >308 mOsm/L in either eye (or an inter-eye difference >8 mOsm/L); or ocular surface staining (>5 corneal spots, >9 conjunctival spots, or lid wiper epitheliopathy of ≥2 mm in length and/or ≥25% sagittal width). [27] A prospective comparison of the degree of agreement between the three oGVHD diagnostic criteria (NIH criteria, ICCGVHD criteria, and TFOS DEWS II criteria) applied before and after allo-HSCT noted that the rate of oGVHD diagnosis was higher when the pre-allo-HSCT evaluation was not included than when it was. The TFOS DEWS II criteria were found to yield a higher proportion of oGVHD diagnoses, possibly because the incorporation of TBUT into the diagnostic criteria allows patients with hyper-evaporative disease and Meibomian gland (MG) abnormalities to be included even in the presence of a normal Schirmer's test. The influence of a pre-allo-HSCT evaluation on diagnostic performance appears to be greater for the NIH and ICCGVHD criteria, indicating that the majority of pre-allo-HSCT DED cases were due to tear film instability. [26]
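The TFOS DEWS II-derived thresholds quoted above translate directly into a simple screening rule. The sketch below encodes them for illustration only: the field names and example values are hypothetical, the comparison with the pre-allo-HSCT examination (new positivity or worsening) is not modeled, and this is not a clinical decision tool.

```python
# Minimal sketch of the TFOS DEWS II-derived screening thresholds quoted above:
# symptoms (OSDI >= 13) plus at least one of the following markers: TFBUT < 10 s;
# osmolarity > 308 mOsm/L in either eye or an inter-eye difference > 8 mOsm/L;
# or staining (>5 corneal spots, >9 conjunctival spots, or lid wiper
# epitheliopathy >= 2 mm length and/or >= 25% sagittal width).
# Field names and example values are hypothetical; illustrative only.
from dataclasses import dataclass

@dataclass
class OcularSurfaceExam:
    osdi: float
    tfbut_seconds: float
    osmolarity_od: float            # mOsm/L, right eye
    osmolarity_os: float            # mOsm/L, left eye
    corneal_spots: int
    conjunctival_spots: int
    lwe_length_mm: float
    lwe_sagittal_width_pct: float

def meets_screening_criteria(e: OcularSurfaceExam) -> bool:
    symptoms = e.osdi >= 13
    osmolarity = (max(e.osmolarity_od, e.osmolarity_os) > 308
                  or abs(e.osmolarity_od - e.osmolarity_os) > 8)
    staining = (e.corneal_spots > 5 or e.conjunctival_spots > 9
                or e.lwe_length_mm >= 2 or e.lwe_sagittal_width_pct >= 25)
    homeostasis = e.tfbut_seconds < 10 or osmolarity or staining
    return symptoms and homeostasis

if __name__ == "__main__":
    example = OcularSurfaceExam(osdi=28, tfbut_seconds=6, osmolarity_od=305,
                                osmolarity_os=300, corneal_spots=3,
                                conjunctival_spots=2, lwe_length_mm=0,
                                lwe_sagittal_width_pct=0)
    print("meets screening thresholds:", meets_screening_criteria(example))
```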
Pre-allo-HSCT evaluation for DED is now widely recommended to help differentiate between preexisting dry eye and new-onset DED diagnosed as oGVHD post-allo-HSCT.

Severity grading

Various grading schemes have been devised for scoring the severity of ocular involvement in GVHD, including Jabs' grading for conjunctival involvement in aGVHD [4] and Robinson's grading for conjunctival involvement in cGVHD [28] [Tables 3 and 4]. The most commonly used are the NIH and ICCGVHD scoring systems; other grading criteria described include those of the German/Austrian/Swiss (GAS) Consensus Conference [29,30] and the Japanese Dry Eye score. [25] The NIH scoring system ranges from score 0, for asymptomatic keratoconjunctivitis sicca (KCS) diagnosed on slit lamp by an ophthalmologist, up to score 3. A notable modification in the 2014 consensus was the removal of Schirmer's test values from the NIH 2005 severity scoring criteria, as the test was found to have a high false-positive or false-negative rate in various studies, with poor correlation to changes in symptoms [Table 1]. [2,31] In the more detailed ICCGVHD scoring system, a severity score of 0 to 3 is assigned to each of the OSDI, Schirmer's test, and CFS, while conjunctival hyperemia (based on slit-lamp photographs) is scored from 0 to 2 [Table 2]; the total score is obtained by adding the severity scores for the Schirmer test, CFS, OSDI, and conjunctival injection. [20] The German/Austrian/Swiss Consensus Conference on Clinical Practice in cGVHD (GAS CC) proposed comprehensive grading and staging criteria for oGVHD covering the involvement of different ocular tissues, inflammatory activity, the presence of complications, and functional impairment. The extent of ocular surface involvement, including the eye and the MGs, the severity of inflammation, and complications such as corneal perforation, secondary glaucoma, and deterioration of visual acuity are documented. [29,30] The ICCGVHD criteria place more emphasis on comprehensive coverage of objective findings and incorporate the OSDI, which is a more specific patient symptom metric than the subjective assessment of symptoms or the frequency of eye drop instillation used in the NIH criteria. However, oGVHD patients presenting exclusively with lid or MG involvement may be missed by the ICCGVHD criteria. The GAS CC takes an even more comprehensive approach to grading the disease by additionally including the involvement of the lids, MGs, or lacrimal glands and the presence of complications due to oGVHD. Unlike the NIH CC, both the GAS CC and the ICCGVHD criteria include parameters reflective of the severity of ocular surface inflammatory activity. [20,30]

Clinical Features

Clinical symptoms

Eye pain and lacrimation are the main complaints in acute oGVHD. [32] The clinical symptoms of chronic oGVHD usually resemble those seen in DED or the KCS syndrome. The distinctive manifestations of chronic oGVHD as per the NIH consensus criteria comprise new onset of dry, "gritty," or painful eyes. [2] Other symptoms may include irritation, watering, photophobia, redness, and blurring. [33]

Clinical signs

Acute oGVHD [Fig. 1] commonly presents as pseudomembranous or hemorrhagic conjunctivitis. [32,34] A less severe form with conjunctival injection or chemosis may also be seen. [4] Corneal signs include epithelial sloughing, [4,35] corneal epithelial keratitis, or filamentary keratitis, which may be secondary to the conjunctival cicatrization caused by the disease. [36] Some patients may present with lagophthalmos. [37] Ocular involvement in aGVHD is considered an extremely poor prognostic sign associated with higher GVHD-related mortality.
[32] The clinical grading system for conjunctival involvement in acute ocular GVHD is given in Table 3. [4] Chronic oGVHD is primarily the result of inflammatory and fibrotic changes of the ocular surface, comprising the cornea, conjunctiva, lacrimal glands, MGs, and eyelids. It should be noted that other factors, such as conditioning regimens, radiation therapy, and immunosuppression, might also influence the clinical manifestations, in addition to the GVHD disease process itself. Corneal signs due to the KCS syndrome include punctate keratitis, epithelial erosions, and epithelial defects, which may progressively worsen to keratinization, stromal thinning, melt, and perforation [Fig. 2a]. Recurrent corneal perforation, sometimes bilateral, is not uncommon, with calcareous degeneration or lipid keratopathy being seen rarely. The progression from the stage of epithelial ulceration to perforation tends to be rapid and is often refractory to standard medical or surgical treatment modalities. [38][39][40][41] Progressive ocular surface inflammation leads to corneal neovascularization, conjunctivalization, and, less commonly, limbal stem cell deficiency, which adversely affects visual acuity. [41][42][43][44] Decreased corneal sensation tends to predispose to the development of neurotrophic ulceration. [45] Conjunctival involvement is a distinctive aspect of chronic oGVHD, seen in about half of chronic oGVHD cases, and is a marker of severe systemic involvement of GVHD. [4,46] Less severe cases manifest as conjunctival hyperemia or chronic conjunctivitis involving both the palpebral and bulbar conjunctiva. Other less common features include cicatricial conjunctivitis with obliteration of the fornices, cicatricial entropion, symblepharon, ankyloblepharon, and lagophthalmos, which can progress to conjunctival keratinization and punctal occlusion. [4,46,47] Conjunctival subepithelial fibrosis, seen as fine white lines under intact conjunctival epithelium, is indicative of a past insult. [48] The grading scale for cicatricial conjunctivitis in chronic oGVHD is given in Table 4. [49] Pseudomembranous and serosanguineous conjunctivitis are less frequently seen forms of conjunctival involvement which, though more characteristic of acute oGVHD, have been seen in chronic oGVHD too. [4] Subtarsal fibrosis of the upper tarsus, noted in 40% of chronic oGVHD cases along with worsening of ocular surface epitheliopathy in these patients, has been suggested to be of diagnostic value in oGVHD. [50] Decreased conjunctival goblet cell density and increased squamous cell metaplasia and surface keratinization of the ocular surface have also been noted. [45] Superior limbic keratoconjunctivitis (SLK)-like inflammation has been reported as a manifestation of oGVHD and can worsen to LSCD and corneal pannus formation; this has been attributed to soft tissue microtrauma from increased frictional forces, compounded by tear mucin deficiency due to goblet cell loss. [51] Meibomian glands (MGs) are severely affected, with rapid and aggressive destruction over time in chronic oGVHD, [52] resulting in an unstable tear film that aggravates the DED. T-cell-mediated damage to the MG epithelial cells is primarily responsible for the gland dysfunction, with hyperkeratinization of the duct epithelium and subepithelial stromal fibrosis contributing to obstructive Meibomian gland dysfunction (MGD) in chronic GVHD.
[53] The prevalence of MGD ranges from about 47.8% to 68.4% in oGVHD. [54,55] The MG loss and damage in oGVHD are often more severe than those seen in other DED such as Sjogren's syndrome. [56] Early detection and aggressive management can perhaps help minimize damage in oGVHD, as a few studies have shown some reversibility of MG damage in the initial stages. [52,57,58] Meibography revealed a loss of about 80% of MG function in oGVHD patients evaluated over 1 year, with over 25% being refractory to treatment. [52] Lid margin irregularity, vascular engorgement, plugging of the MGs, and displacement of the mucocutaneous junction due to duct outlet obstruction are also seen. [52,59] In vivo confocal microscopy (IVCM) imaging has documented morphological changes such as inflammatory cell infiltration, gland atrophy, and fibrosis. [59] Morphological changes in the MGs seem to have a multifactorial etiology rather than resulting from inflammatory damage of the glands due to GVHD alone. Besides damage before allo-HSCT, [58,60] conjunctival inflammation related to GVHD, mechanical compression due to subconjunctival fibrosis, and the effects of the conditioning regimen with radiation therapy or chemotherapy also appear to be responsible factors. MG infiltration by tumor cells or immunosuppression damaging cell viability [52,58,61] has been held responsible for the poor correlation of MG loss with the severity of oGVHD or subconjunctival fibrosis. [48,52,58] However, MG loss does seem to increase with increasing severity of oGVHD. [55] As pretransplant upper lid MG atrophy has been implicated as a predictive factor for the likelihood of oGVHD, [58] close monitoring of MG status by infrared meibography or by pre- and post-allo-HSCT IVCM can help in early detection of the posttransplant ocular inflammatory process. [52,58] Posterior blepharitis associated with MGD has been reported in 47-63% of chronic GVHD patients, with a significant correlation with the severity of KCS symptoms. [54,62] Lacrimal gland involvement is responsible for the aqueous tear deficiency in oGVHD, with the resultant DED or KCS being the most characteristic feature, seen in up to 69 to 77% of oGVHD cases. [63] Fibrosis and inflammation caused by stromal fibroblasts, with T-cell infiltration centered around the periductal area of the lacrimal gland, lead to destruction of the tubuloalveolar secretory units. [64,65] Epithelial-mesenchymal transition of the host cells may be triggered by the migration of inflammatory cells and the large amounts of cytokines they produce, or by radiation therapy before the HSCT. About 50% of these infiltrating stromal fibroblasts are thought to be of donor origin and, along with T-cells and recipient-derived fibroblasts, contribute to the pathogenesis of GVHD. [53,66,67] Bilateral nasolacrimal duct obstruction (NLDO) leading to dacryocystitis has been reported in oGVHD. [68,69] NLDO induced by epithelial and subepithelial inflammation, as well as punctal occlusion (both inflammatory and spontaneous), have also been observed. [70,71] Eyelid abnormalities (lagophthalmos, trichiasis, poliosis, entropion, and, less commonly, ectropion) occur due to chronic tarsal conjunctival inflammation, atrophic eyelid alterations, keratinization, and cicatricial changes. [72] True cicatricial ectropion due to mechanical shortening of the anterior lamella caused by cutaneous involvement of GVHD has also been reported.
[73] Increased eyelid laxity in oGVHD, resulting from higher elastolytic enzyme (like MMP-9) activity mediated by the chronic inflammatory process both due to GVHD and systemic malignancy, compounds the ocular discomfort symptoms and ocular surface signs. [74] Eyelid skin may exhibit scleroderma-like skin lesions, pigmentary discolorations, vitiligo, and dermatitis. [36] The other less commonly seen signs which may be seen in chronic oGVHD include cataract, episcleritis, scleritis, posterior scleritis, anterior uveitis, vitritis, and serous choroidal detachment. [29] Myeloablative chemotherapy instead of total body irradiation as a conditioning regimen is associated with a lower rate of cataract formation and posterior segment complications. [22] Newer Diagnostic Modalities Though several new diagnostic methods have been added to the armamentarium of DED diagnostics, [75] the ones about the evaluation of oGVHD in recent literature will be discussed here. There is no single adequate test for oGVHD diagnosis with a combination of clinical parameters and investigational modalities being recommended. Meibography Meibography is a technique of in vivo observation of MGs [48,[56][57][58]61,76,77] . Meibography in oGVHD shows complete or partial MG loss/atrophy, structural alteration such as distortion, or dilation of ducts. [52,56,58] Occasional finding of slender MG either pre-and early-post-HSCT has been attributed to long-term immunosuppression causing sebaceous hyperplasia which results in obstruction MGD and can be reversed in some cases. [57] As MG loss seen prior to the allo-HSCT can progress rapidly following oGVHD onset, noninvasive meibography for routine evaluation of hematological malignancies patients before and at regular follow-up posttransplant has been recommended. [58] Early detection of MGD is helpful in oGVHD prediction allowing the treating physician initiation of appropriate therapy before the onset of significant damage. [52,58] Various subjective [76,78,79] and objective methods [77,78,80,81] for grading meibography images have been described. A cutoff value of 40% of MG area calculated using image analysis software has been adopted for diagnosing MGD in oGVHD patients. [55] Consensus on the correlation of MG area loss on meibography to oGVHD severity is not conclusive with some in agreement [55,56] and few others [48,58] not concurring. The same also applies to the correlation between ocular surface clinical parameters and MG loss on meibography. [52,55,58] Hence, besides local inflammation there seems to exist a multifactorial etiology for MGD in oGVHD. Tear interferometry Non-contact tear interferometry visualizes the interferometric pattern of the lipid layer of the tear film and measures its thickness, thereby providing a functional MG assessment. [77] There is a paucity of studies evaluating the lipid layer in oGVHD. A higher grade of severity of lipid layer interferometric pattern changes have been seen in oGVHD patients on DR-1s tear film lipid layer interferometry (Kowa, Tokyo, Japan) assessment [59,82] with greater instability of the lipid layer in oGVHD patients as compared to Sjogren's syndrome. [83] While different tear interferometric patterns have been described to correlate with different DED subtypes of DED, inadequate tear volume makes it difficult to observe a typical interference pattern in severe aqueous deficient (AD) Sjögren's syndrome, oGVHD, or Stevens-Johnson syndrome. 
[84] oGVHD with afflictions of the lacrimal gland and MG manifests a combined AD-evaporative DED and shows a reduced lipid layer thickness (LLT) in tear interferometry in comparison to non-oGVHD and healthy eyes. [85] In vivo confocal microscopy IVCM changes in oGVHD include decreased corneal epithelial cell density, [86] epithelial dendritic cell (DC), conjunctival epithelial immune cell (EIC), [87,88] increased goblet immune cell (GIC), [88] anterior stromal cell density, anterior stromal extracellular matrix (ASEM) accumulation (reflective of engraftment of donor fibroblasts or altered fibroblast cell populations in the host cornea), [89] reduced sub-basal nerve number and density, altered branching, reflectivity and increased tortuosity, [87][88][89] and altered conjunctival epithelia and stromal immune cell density. [87] IVCM changes seem to correlate well with disease severity scores (Japanese Dry Eye score, ICCGVHD). [88] While the comparison of corneal and conjunctival IVCM changes between oGVHD patients and healthy controls or post-HSCT patients without oGVHD revealed significant changes in the former, [86,88,89] these changes were of comparable severity in oGVHD and non-oGVHD DED of comparable severity. This suggests that IVCM changes are reflective of a local inflammatory phenomenon seen in oGVHD DED rather than due to systemic GVHD. [87] IVCM can, therefore, be a useful tool to study the cellular structural changes in DED with and without GVHD. [86] IVCM study of MG morphology in post-allo-HSCT revealed atrophic glands with increased surrounding fibrosis with inflammatory cellular infiltration in oGVHD compared to numerous compact glandular acini units evident in post-HSCT non-oGVHD patients. [59] Tear film osmolarity Tear film osmolarity is a global indicator of DED irrespective of the subtype or etiology and is considered its best single predictor [90] with a cutoff value of >310 mOsm/L for diagnosing oGVHD (98.4% sensitivity and 60.7% specificity). [91] A cutoff value of 312 mOsm/L has been recommended for differentiating definite oGVHD (as per ICCGVHD criteria) from non oGVHD (sensitivity of 91% and specificity of 82%). [92] There is a significantly raised tear osmolarity in oGVHD with a good correlation with the severity of clinical parameters (Schirmer's, TBUT, OSDI) and staining scores [57,[90][91][92] and increasing disease severity. [91,92] Though its diagnostic efficacy in oGVHD is good, it is noted to be lower than that of Schirmer's and TBUT, with clinical dry eye tests showing a higher correlation coefficient for chronic oGVHD probability compared to tear osmolarity. [91,92] Currently, tear osmolarity in isolation is not recommended to diagnose oGVHD but is a useful supplement to clinical dry eye tests used in oGVHD diagnosis in post-allo-HSCT, given its ease of performance by non-ophthalmologist and with lower interobserver. [22,91,92] A novel digital imaging analysis technique for quantification and morphological characterization of corneal fluorescein staining which may help distinguish DED due to Sjogren's and oGVHD has been recently proposed by Pelligrini et al. [93] Shimizu et al. evaluated corneal higher-order aberrations (HOAs) using Zernike analysis in anterior segment optical coherence tomography (CASIA system, SS-1000, Tomey, Japan) and found higher corneal HOAs in chronic ocular GVHD eyes than the non-GVHD and normal eyes, which correlated with visual acuity and severity scores. 
[94] Role of Tear Biomarkers, Inflammatory Mediators, and Protein in Diagnostics The immune reaction in GVHD comprises of donor T-cells trigger of host antigen-presenting cells (APCs), which activate the donor effector T-cells to mediate the target tissue damage. The precise role of the various subtypes of T-cells, cytokines, and B-cells is not clear. [95] Though CD4+ and CD8+ T-cells are the predominant infiltrates in ocular surface tissues in chronic oGVHD, [96] it is difficult to classify it as pure T-Helper cell-1, T-Helper cell-2, or T-Helper cell-17-mediated disease. Studies evaluating tear cytokines in oGVHD found raised intercellular adhesion molecule-1 (ICAM-1), [97] interleukin-1 receptor antagonist (IL-1Ra), [98] IL-2, [99] IL-1 β, [97] IL-6, [9,97,99,100] IL-8, [9,85,97,98] IL-10, [9,98,99] IL -2AP70, [9] IL-17A, [9,99] interferon gamma (IFNγ), [9,99,100] tumor necrosis factorα (TNF-α), [99] matrix metallopeptidase 9 (MMP-9), [9,101] and vascular endothelial growth factor (VEGF). [9] Among these IL-10, IL-6, and TNF-α, IL-8, ICAM-1, IL-12AP70, VEGF, IFNγ, and MMP-9 were found to have a fair correlation with the clinical ocular surface evaluation tests. [9,97,99,100] While these biomarkers were not raised, tear MMP 7 and MMP 9 were noted to be elevated non-oGVHD eyes post-allo-HSCT. [9] Certain tear cytokines have been proposed as possible biomarkers for chronic oGVHD (ICAM-1, IL-8, IL-1 β, IL-10, IL-17, IL-6, CXCL-10, TNF-α, MMP-9, and VEGF). [9,[97][98][99] Comparative study of cytokines in oGVHD with non-oGVHD DED observed raised levels of ICAM-1, IL-1β, IL-6, and IL-8 and reduced levels of IL-7 and EGF. [97] Lower levels of IL-7, EGF, and IP-10 in oGVHD patients suggest a disease protective role for these mediators. [97,98] While cGVHD was conventionally thought to be T-helper cell2-mediated, recent evidence points towards the role of T-helper17 cells as key effector cells in cGVHD which is supported by raised tear levels of IL-6, IL-17 A, IL-1β, and TNF-α in oGVHD patients. [99,102] IL-17A and IL-6 may also have a role in triggering proliferation and alterations of the germinal B-cell, which are now believed to influence cGVHD pathogenesis. [103] Recent reports of increased conjunctival neutrophil infiltration [104] and tear inflammatory mediators [101] produced by them (neutrophil elastase, MMP-9, MMP-8, and myeloperoxidase [MPO]) highlights the role of neutrophils in oGVHD immunopathogenesis with these neutrophils releasing nuclear chromatin complexes as extracellular DNA (eDNA) webs that are termed neutrophil extracellular traps (NETs). [105] oGVHD is associated with excessive accumulation of NETs which are recognized to be contributory to pathologic changes (corneal epitheliopathy, conjunctival fibrosis, ocular surface inflammation, and MGD) seen. [85] Neutrophil secreted biomarkers (eDNA, neutrophil gelatinase-associated lipocalin [NGAL], Oncostatin M [OSM], and tumor necrosis factor F superfamily member14 [TNFSF14]) could be useful in differentiating DED due to oGVHD from other etiologies. Besides, raised levels of neutrophil elastase, myeloperoxidase, IL-8, TNF-α, and brain-derived neurotrophic factor (BDNF) were obtained in ocular washings of oGVHD. [85] Tear total tear protein levels are reduced in oGVHD. [9] An extensive tear proteomic profiling identified 79 proteins to be differentially expressed in oGVHD as compared to non-oGVHD. 
[102] Structural proteins, nucleic acid binders, and oxidoreductase enzymes were prominently upregulated, while enzyme modulators, hydrolases, carrier proteins, receptor-binding proteins, and defense and immunity-related proteins were downregulated. Histone proteins, which are known to have pro-inflammatory properties, were the most highly upregulated and may be associated with the increased NET formation in these eyes, while Lipocalin-1, which has numerous protective effects, was the most prominently downregulated. Other protective proteins such as Lysozyme-C and Lactotransferrin were also downregulated. [106]

Treatment

A multidisciplinary approach and coordination with the HSCT team are imperative in the management of oGVHD. In recent times, with greater emphasis on organ-specific treatment, increasing systemic immunosuppression is no longer considered an optimal treatment approach for organ-specific GVHD. The three-pronged treatment approach, as adopted in other ocular surface immune-mediated inflammatory diseases, comprises lubrication and tear preservation, prevention and control of tear evaporation, and, most importantly, reduction of ocular surface inflammation [Table 5]. [107]

Medical management

Lubrication and tear preservation

In both acute and chronic oGVHD with severe aqueous-deficiency dry eye, topical lubrication with non-preserved, phosphate-free artificial tears is the first-line treatment. Frequent use of tear substitutes throughout the day, supplemented with a viscous ointment before bedtime, helps not only in preserving the ocular surface but also in diluting inflammatory mediators in the tears. A topical mucolytic (acetylcysteine 5-10%) is beneficial in DED with filamentary keratitis. Although oral secretagogues such as pilocarpine or cevimeline (selective muscarinic agonists) may be beneficial in stimulating aqueous tear flow in chronic oGVHD-induced sicca symptoms, their use is limited by adverse drug reactions and toxicity. Dual treatment with the topical secretagogues rebamipide and diquafosol has been used in oGVHD patients with beneficial effects. [108] Tear preservation with punctal occlusion, either with silicone plugs (reversible) or thermal cauterization (usually irreversible), may be performed. The number of puncta to be occluded is guided by disease severity and Schirmer's test. However, the threshold for punctal occlusion with silicone plugs should be low, especially in chronic oGVHD, where lacrimal gland dysfunction is irreversible. Spontaneous plug loss is a common complication, probably due to punctal subepithelial fibrosis. [71] Thermal cautery may be considered in severe cases with recurrent plug extrusion. Any associated blepharitis and MGD should be treated accordingly, and achieving a maximal reduction in lid and ocular surface inflammation is mandatory before punctal occlusion.

Prevention of tear evaporation

Tear film instability and evaporative dry eye due to MGD should be treated on the usual lines with warm compresses, lid scrubs, and maintenance of lid hygiene. Topical erythromycin ointment and systemic tetracycline antibiotics, mainly doxycycline and minocycline, and the macrolide antibiotic azithromycin help to reduce inflammation of the MGs and subsequently improve meibum secretion and tear film quality. Further, nutritional supplements such as fish oil (omega-3 fatty acids) and flaxseed oil (2000 mg/d) may be helpful owing to their anti-inflammatory properties.
The use of moist chamber goggles to increase the periocular humidity has been employed to alleviate discomfort in DED patients, though the effects may be transient. [109,110] Reducing ocular surface inflammation Topical steroids are used in both acute and chronic oGVHD, although their role in the former remains controversial. While some studies did not find a role for topical steroid therapy in altering the disease course of pseudomembranous conjunctivitis, [4,111] Kim et al. suggested that the use of aggressive topical steroid therapy along with pseudomembrane removal may help improve epithelial healing and reduce cicatricial changes in these patients. [112] In chronic oGVHD, they are helpful in patients presenting with cicatricial changes. [28] Topical steroids are contraindicated in patients with corneal epithelial defects, stromal thinning, or infection. Adverse effects of long-term steroid use (glaucoma, cataracts, corneal thinning, and secondary infectious keratitis) are common comorbidities in these eyes. Hence, the use of topical immunosuppressants, (cyclosporine [CsA] eye drops, and tacrolimus ointment) has been advocated. Topical CsA eye drops have been used with some success in patients with chronic oGVHD and KCS refractory to conventional lubrication and steroid drops. An increase in goblet cell density and epithelial cell turnover in the conjunctiva along with improvement in symptoms, corneal fluorescein staining, and basal tear secretion has been noted. Tacrolimus is similar to CsA but with greater immunosuppressive potency, and its systemic use has also shown to be beneficial in ocular GVHD. [113] Topical IL-1 receptor antagonist (IL-1Ra) or Anakinra 2.5% (FDA approved immunomodulatory drug for rheumatoid arthritis treatment), has shown some promise in a double-masked randomized control trial with improvement in symptoms and reduction in corneal epitheliopathy after 12 weeks of instillation in oGVHD. [114] Topical Tranilast acts by inhibiting the production and/or release of ocular inflammatory mediators and cytokines and in collagen synthesis as well as TGF-β induced matrix production and is effective in treating mild dry eye associated with cGVHD. [115] Sub-anticoagulant dose heparin (100 IU/mL) by diminishing the effects of NETs has been shown to have a therapeutic effect in oGVHD. [85] Deoxyribonuclease I (DNase), a major extracellular endonuclease, selectively targets extracellular DNA, and thus degrades NET. Early clinical trials have demonstrated the therapeutic potential of topical recombinant human deoxyribonuclease I (0.1% DNase), pulmozyme (Genentech) in patients with oGVHD DED without severe adverse effects. [116] Intravenous immunoglobulin (IVIG) through its immunomodulatory activity may reduce autoimmune-mediated inflammation in DED. [117] Topical IVIG drops application for oGVHD DED which is currently being investigated in Phase1/ll clinical trials. Biological tear substitutes Appropriate management of corneal epithelial erosions, corneal ulcers, and perforations are required to maintain the health and integrity of the corneal surface. Biological tear substitutes such as autologous serum act like preservative-free tears being rich in nutrients such as epithelial and nerve growth factors, cytokines, vitamin A, fibronectin, and transforming growth factor-A. It acts by providing lubrication and improving corneal sensitivity, thereby contributing to enhanced integrity. 
[118] However, their use is not recommended in presence of active inflammation, systemic infections, extremes of age (infant or elderly), or overall poor health such as malnutrition. Umbilical cord serum eye drops or allogeneic serum eye drops have been tried as alternatives but are limited by the risk of transmission of serious blood-borne diseases. [119] Topical therapy with autologous platelet lysate drops rich in platelet-derived growth factors (PDGF), known to improve wound healing and corneal re-epithelization, is a safe and effective option for oGVHD patients refractory to conventional therapy. [120,121] Contact lenses have also been used to provide ocular surface protection in oGVHD, as in other ocular surface disorders. Soft silicone hydrogel bandage contact lenses and rigid gas-permeable scleral lenses such as Prosthetic Replacement of Ocular Surface Ecosystem (PROSE) have been tried. [122] However, they should be used with caution, especially in the acute setting, keeping in mind the increased risk of infection and ischemia. Surgical management Surgical intervention is mostly reserved as the last resort and may be necessary for severe cases. Superficial epithelial debridement and removal of filaments are helpful in cases of filamentary keratitis. Amniotic membrane transplantation may be required in cases of persistent epithelial defects, superior limbic keratoconjunctivitis, and symblepharon formation. [123,124] ProKera (Bio-Tissue, Inc., Doral, FL), an FDA (U.S. food and Drug Administration) approved device, is a polymethylmethacrylate ring akin to a symblepharon ring that functions as a carrier for cryopreserved amniotic membrane. Its use has been described in acute oGVHD to restore ocular surface integrity and prevent more severe complications. [125] Severe cases of DED may even warrant a temporary tarsorrhaphy [126] to decrease ocular surface exposure. Mucous membrane grafts and skin grafts may be required for the management of cicatricial lid disease. Allogenic limbal stem cell transplantation from the same hematopoietic stem cell donor, [41,43,44,127] lamellar keratoplasty, [128] tectonic patch grafts [ Fig. 2b], and penetrating keratoplasty [126] are performed in a limited capacity and only as a final effort, given a poor prognosis for graft survival because of severe preexisting ocular surface inflammation. Ocular surface stem cell transplantation using conjunctival and limbal allografts obtained from the patient's HSCT donor has been reported to be a promising treatment modality associated with good long-term survival of the graft. [41,43,44] Keratoprosthesis may also be considered in severe cases for visual rehabilitation with bilateral blindness; osteo-odonto keratoprosthesis has been successfully performed in a few cases. [129] Cataract surgery in ocular GVHD A cataract occurs commonly in patients of oGVHD, and is multifactorial in origin, resulting from a combination of toxicity from chemotherapeutic agents, total body irradiation (TBI) for the pretransplant conditioning process, and prolonged high-dose systemic and topical steroids [ Fig. 2c]. In addition to keratopathy secondary to DES (dry eye syndrome), cataract is the most common cause of vision loss in oGVHD. Posterior subcapsular cataract (PSC) is the most frequently encountered and is present in most cases. Nuclear sclerosis is also present in many cases, but is relatively more common in older patients, suggesting the involutional cataract component [Fig. 3]. 
A reduction of glare acuity in the presence of reasonably good Snellen visual acuity is common. [130,131] As cataract surgery can induce or exacerbate a preexisting DES, it is important to aggressively treat the DES and optimize the ocular surface before performing cataract surgery in oGVHD. Frequent lubrication, topical anti-inflammatory and immunosuppressive therapy, use of punctal plugs as needed, and prior treatment of any lid and adnexal pathology are important. Another preoperative challenge is obtaining accurate biometry readings and intraocular lens power calculation. Both optical biometry and topography evaluation should be performed. It is recommended to obtain multiple readings; in case of discrepancy, it is best to defer the surgery, optimize the ocular surface, and re-evaluate after a few weeks. [132] Although the literature on cataract surgery in GVHD is limited, micro-incision cataract surgery (MICS) with phacoemulsification is beneficial in reducing ocular surface complications as compared to extracapsular cataract extraction. [133,134] Biplanar or triplanar clear corneal incisions, anterior limbal incisions, or scleral tunnel incisions may be considered. Clear corneal incisions are suitable for cases with an optimized ocular surface, while in severe cases refractory to the best treatment, scleral incisions are preferable for cataract surgery. The majority of the postoperative complications are due to DES (punctate keratopathy, filamentary keratitis, recurrent corneal epithelial defects), which may worsen to stromal melt and perforation in severe cases. Topical nonsteroidal anti-inflammatory drugs should be used with caution, particularly in cases of severe oGVHD, as they may increase the risk of corneal melt and ulceration. Increased IOP in the early postoperative period, worsening of preexisting glaucoma, significant visual axis opacification (VAO), and cystoid macular edema also occur commonly. About 18-44% of eyes with VAO have been reported to require yttrium aluminum garnet (YAG) capsulotomy. [133,134] Close observation and follow-up in the postoperative period and patient counseling regarding the continuation of preoperative lubricants and anti-inflammatory therapy, in addition to antibiotics and steroids, are of utmost importance. Conclusion oGVHD is a complex disease, which often shows a recurrent course and may be refractory to conventional DE therapy. It can involve the whole ocular surface, and the resulting DED significantly affects the ocular surface, necessitating a multipronged approach to treatment. oGVHD may manifest as part of multisystem involvement or de-novo, in patients with no signs of systemic GVHD. It is imperative that every patient, before allogeneic HSCT, be referred to a cornea specialist to evaluate the baseline parameters for the pre-HSCT diagnosis of DED. It is also desirable to maintain a regular follow-up of these patients for early diagnosis of changes that occur on the ocular surface post HSCT. Newer diagnostic modalities have helped in diagnosing the disease earlier and also in monitoring its response to treatment. More recently introduced treatment agents such as topical platelet lysate and heparin drops have shown promise, but further studies are required to establish their efficacy. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Modeling and forecasting trend of COVID-19 epidemic in Iran until May 13, 2020 Background: COVID-19 is a new disease and precise data are not available about this illness in Iran and in the world. Thus, this study aimed to determine the epidemic trend and prediction of COVID-19 in Iran. Methods: This was a secondary data analysis and modeling study. The daily reports of definitive COVID-19 patients released by the Iran Ministry of Health and Medical Education were used in this study. Epidemic projection models of Gompertz, von Bertalanffy, and least squared error (LSE), with percentage error, were used to predict the number of hospitalization cases from April 3, 2020 until May 13, 2020. Results: The numbers of patients on April 3, 2020 predicted by the von Bertalanffy, Gompertz, and LSE models, with 95% confidence intervals (CI), were 44 200 (39 208-53 809), 47 500 (38 907-52 640), and 48 000 (40 000-57 560), respectively. The number of deceased COVID-19 patients was estimated to be 3100 (2633-3717) individuals by the von Bertalanffy model, 3700 (2900-4310) by the Gompertz model, and 3850 (3200-4580) by LSE. Based on the Gompertz model, the number of patients at the time the epidemic curve flattens is projected to be 67 000 (61 500-87 000) cases. Based on the Gompertz and von Bertalanffy models, 7900 (6200-9300) and 4620 (3930-5550) deaths, respectively, will occur from May 13 to June 1, 2020, and then the curve will flatten. Conclusion: In this study, estimations were made based on severely ill patients who were in need of hospitalization. If enforcement and public behavior interventions continue with current trends, the COVID-19 epidemic will flatten between May 13 and July 2020 in Iran. Introduction Coronaviruses are a large family of viruses that have been identified since 1965; to date, 7 species of them have been reported to affect humans. These viruses have 3 genotypes: alpha, beta, and gamma. The natural reservoirs of these diseases are mammals and birds, and thus they are considered zoonotic diseases (1,2). Severe acute respiratory syndrome (SARS) is caused by a species of coronavirus that infects humans, bats, and certain other mammals, and led to epidemics in 2002. The transmission of COVID-19 is not yet clear and its transmission risk is not completely understood (10). However, the virus is believed to be transmitted mostly via contact, droplets, aspirates, and feces. Generally, everyone is prone to this viral disease. The mean incubation period of COVID-19 was 5.2 days (4.1-7 days) and the basic reproductive number (R 0 ) was reported as 2.2 (95% CI: 1.4 to 3.9) (11). In another study, the incubation period ranged from 0-24 days, with a mean of 6.4 days. The R 0 of COVID-19 at the early phase, regardless of different prediction models, was higher than that of SARS and MERS, and the majority of patients (80.9%) were considered to have asymptomatic infection or mild pneumonia (12). The case fatality ratio was 2% (12), 2.3% (8), and 3.46% (13), and elderly men with underlying diseases were at a higher risk of death (13). As of March 29, 2020, the COVID-19 pandemic declared by the World Health Organization (WHO) had spread to more than 100 countries (most prevalent in the United States, Italy, China, Spain, Germany, Iran, and France) (14). In Iran, the first case of COVID-19 was reported on February 19, 2020, in Qom, and we used the data reported in Iran until March 29, 2020. Until March 29, 2020 (the date when this manuscript was being prepared), according to the daily reports, 38 309 cases of COVID-19 and 2640 related deaths had been reported in Iran (15,16).
As of February 29, 2020, all schools and universities, and as of March 7, 2020, almost all public places and shrines, had been closed. On March 2, 2020, a team of WHO experts landed in Tehran, Iran, to support the ongoing response to the COVID-19 outbreak in the country (17). Currently, people are referring to health centers and hospitals, and the public is almost alarmed by the epidemic of panic and inaccurate reporting in cyberspace. Important questions in people's minds are as follows: How many people have COVID-19 in Iran? What is the status of the COVID-19 epidemic curve in Iran? When and how will the epidemic end? We cannot answer these questions with certainty, but they will be investigated in terms of the pathogenic agent (coronavirus), host conditions, behavior (human), and environmental factors of coronavirus transmission, the daily reports of definitive COVID-19 patients released by the Iran Ministry of Health and Medical Education, and the use of modeling given the assumptions and the percentage of error. Although the models are different, multiple, and changeable in nature and do not insist on the correctness of the forecasts, they make the decision-making conditions for health policymakers and authorities more transparent and helpful (18). This study aimed to model and determine the epidemic trend and predict the number of patients hospitalized due to COVID-19 in Iran using mathematical and statistical modeling. Methods This was a secondary data analysis and mathematical modeling study based on a research proposal approved by Shahrekord University of Medical Sciences (Code of Ethics Committee on Biological Research: IR.SKUMS.REC 1398.254) (19). For the statistical analysis of definitive COVID-19 patients in Iran, daily reports of the Ministry of Health and Medical Education were used (15). The definitive diagnosis of COVID-19 was made using virus isolates from patients' biological samples in hospitals. Cases were used for analysis when the real-time polymerase chain reaction (PCR) test was positive for COVID-19 in patients with respiratory symptoms and was confirmed by the reference laboratories in the School of Public Health, Tehran University of Medical Sciences, and the Pasteur Institute of Iran (20). Patient population growth, epidemic curves, and numbers of recovered and deceased individuals were used to form a conceptual framework of an epidemic and predict the COVID-19 epidemic trend. A quasi-classical infectious disease (Susceptible→Exposed→Infected→Removed: SEIR) model was used (21). Different scenarios were designed and implemented for modeling and forecasting. First, based on a search for reliable sources of disease trends and epidemic curves across the world, the curve for Iran was drawn (10,18,22). Focused and scientific group discussion sessions were held with experts on epidemiology, biostatistics, and mathematics, infectious diseases specialists, and health care managers on the topic. Different scenarios were discussed and agreement was reached on the application of the final scenarios. To predict the growth of this epidemic, different models were used. In the first scenario, the most optimistic one, the epidemic is estimated and controlled within an incubation period (von Bertalanffy model, the most ideal model). In this scenario, traced contacts are isolated immediately on symptom onset (and not before) and isolation prevents disease transmission. In the second scenario, an intermediate and fit-to-data model (Gompertz) was used.
In the third scenario, the growth rate used is greater than in the first and second models, and it is the opposite of the first scenario (LSE). To select the scenarios, the fit of the data to the models and the growth rate of the cases were used. The Gompertz growth equation, the von Bertalanffy growth equation, and curve fitting by the LSE method with a cubic polynomial were used for the epidemic forecasts and were run in MATLAB software. The models are the Gompertz differential growth equation, von Bertalanffy's differential growth equation, and a cubic polynomial, where p represents the number of individuals in each population, a, b, c, and e represent unknown parameters, t is time, and d/dt is the derivative with respect to time (23,24). The assumptions of the model are as follows: • Models are based on official reported data and recruitment testing from hospitalized cases. • Any manipulation and misinformation will affect the model. • The method for finding patients is fixed. The unknown parameters were estimated by running fminsearch, a MATLAB function, to minimize the least squares error. The parameters were estimated based on the official reported data of infected, cured, and dead cases. The estimated values of the parameters for the different scenarios are reported in Table 1. Also, MATLAB software was used to fit the data and solve the equations. MATLAB codes are presented in the Appendix. Moreover, the percentage of the root mean square error (RMSE) was used to validate the models, and 95% confidence intervals (CI) were calculated with the coefCI MATLAB function. The basic reproduction number R 0 was calculated using the formula R 0 = 1 + r T c (25), where T c and r are the mean generation interval of the infected and the growth rate, respectively; T c = 7.5 and r = 0.1. The growth rate of the Gompertz model is r = 0.1; thus, R 0 is 1.75. All estimations and detections of COVID-19 were made based on the current conditions of laboratory sampling from critically ill and hospitalized patients (the tip of the iceberg of the spread of the disease). In this modeling, asymptomatic patients and those with moderate symptoms from whom no samples were taken were excluded. Also, the data of patients diagnosed based on CT scans were not included in this study. The forecast dates were selected based on the end of the New Year (April 3, 2020) holidays in Iran and the onset of epidemic curve flattening (May 13, 2020). Results The frequencies of the daily statistics of COVID-19 in Iran, including definite new cases, number of deaths, and recovered cases, are shown in Table 2. The trend of this epidemic spread in Iran (daily linear and cumulative trend) is illustrated in Figure 1. According to the data released on COVID-19 in Iran as of March 29, 2020, the following forecasts for April 3 and May 13, 2020 were made (Figs. 2, 3, and 4). According to the Gompertz model, in the most optimistic perspective, the maximum number of infected people until April 3, 2020 is 47 500, with a 95% confidence interval of 38 907-52 640 (Fig. 2 A1). The percentage of the root mean square error (RMSE) for the Gompertz model is 12%. Based on von Bertalanffy's growth model (the most ideal model, with high isolation of patients and other interventions), this number was estimated to be 44 200 (CI: 39 208-53 809) (Fig. 2 A2), with 17% RMSE. According to the method of the least squared error, this value was estimated to be 48 000 (CI: 40 000-57 560), with 19% RMSE (Fig. 4 H). Moreover, according to Figure 4 G, the maximum population of recovered individuals was estimated to be 15 900, with a 95% confidence interval of 13 500-19 000, by the method of least squared error.
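As a rough illustration of the fitting pipeline described in the Methods, the sketch below is not the authors' MATLAB code (that is in their Appendix); it assumes the standard closed-form Gompertz curve, uses synthetic case counts, and replaces fminsearch with SciPy's Nelder-Mead minimizer, while reproducing the percentage-RMSE check and the R 0 = 1 + r*T c arithmetic quoted in the text.

```python
# Minimal sketch, not the authors' MATLAB code: fit a standard Gompertz
# growth curve to cumulative case counts by Nelder-Mead least squares
# (the Python analogue of fminsearch), then report the percentage RMSE
# and the R0 = 1 + r*Tc arithmetic quoted in the text.
import numpy as np
from scipy.optimize import minimize

def gompertz(t, a, b, c):
    """Standard-form Gompertz curve: p(t) = a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

# Hypothetical cumulative hospitalized-case counts (day 0 = first reported case).
rng = np.random.default_rng(0)
t = np.arange(0, 40)
observed = gompertz(t, 40000, 8.0, 0.12) + rng.normal(0, 300, t.size)

def sse(params):
    a, b, c = params
    return np.sum((gompertz(t, a, b, c) - observed) ** 2)

# Nelder-Mead minimization of the squared error, as fminsearch would do.
fit = minimize(sse, x0=[30000, 5.0, 0.1], method="Nelder-Mead")
a, b, c = fit.x
pred = gompertz(t, a, b, c)

# Percentage root mean square error, used in the paper to validate each model.
rmse_pct = 100 * np.sqrt(np.mean((pred - observed) ** 2)) / observed.mean()

# Basic reproduction number from growth rate r and mean generation interval Tc,
# consistent with R0 = 1 + r*Tc = 1 + 0.1*7.5 = 1.75 for the Gompertz scenario.
r, Tc = 0.1, 7.5
R0 = 1 + r * Tc

print(f"fitted parameters: a={a:.0f}, b={b:.2f}, c={c:.3f}")
print(f"percentage RMSE: {rmse_pct:.1f}%")
print(f"R0 = {R0:.2f}")
```

The von Bertalanffy and cubic-polynomial scenarios can be explored in the same way by swapping the corresponding curve in for the gompertz function.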
Discussion In this study, the prediction of the COVID-19 epidemic trend and the estimation of the number of patients, R 0 , deaths, and recovered individuals were performed and reported based on mathematical and statistical models. Although this prediction may be associated with random errors, it was made with assumptions about the past trends of the COVID-19 epidemic in Iran as well as the behavior of the people and government interventions (sampling of severe cases and hospitalization). Implementation of government interventions such as social distancing, isolation of patients, and follow-up of those around them based on the epidemic management protocol in Iran is of high importance. Thus, observance of these interventions by the public and the government has an impact on the modeling predictions. Moreover, according to a valid scientific report, the delay from the onset of symptoms until the isolation of patients plays an important role in controlling the epidemic (18). To control the majority of outbreaks, for an R 0 of 2.5, more than 70% of contacts had to be traced, and for an R 0 of 3.5, more than 90% of contacts had to be traced. The delay between symptom onset and isolation had the largest role in determining whether an outbreak was controllable when R 0 was 1.5. For R 0 values of 2.5 or 3.5, if there were 40 initial cases, contact tracing and isolation were only potentially feasible when less than 1% of transmission occurred before symptom onset (18). Therefore, efforts should be made to control this epidemic with greater vigor and urgency and to conduct a daily risk assessment. In the current epidemiological situation (26) in the world and Iran, fear control and avoidance of rumors are very important for COVID-19 prevention and control. There are 3 important and debatable points of view about this epidemic in Iran: first, to avoid tension in the society; second, to properly interpret the COVID-19 case fatality ratio (CFR) in Iran and calculate the CFR tactfully; and third, to recommend personal hygiene, including hand washing and avoiding contact with suspected patients, social distancing, discovering unknown cases of infection and early detection, and tracing direct contacts and isolating patients, all of which have been emphasized by the health care officials to overcome this disease. Moreover, in interpreting this index, it should be kept in mind that the denominator of the fraction is only the positive cases in hospital beds and the numerator is the number of patients who died of COVID-19. This index should also be calculated up to the end of the epidemic period; if it is calculated at the end of the epidemic, when the outcome (death/recovery) of every patient is determined, this indicator will approach the real value. The estimated case fatality ratio among medically attended patients was reported to be approximately 2% (12), and the true ratio may not be known for some time (27). The underreporting estimation is very sensitive to the baseline CFR, meaning that small errors lead to large errors in the estimate of underreporting (28). Underreporting of COVID-19 patients (1.38% baseline cCFR) and the corresponding modification of disease mortality had previously been estimated by the Center for Mathematical Modeling of Infectious Diseases, London School of Hygiene and Tropical Medicine (28).
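To make the numerator/denominator caveat concrete, the crude ratio implied by the cumulative counts quoted in the Introduction (2640 deaths among 38 309 hospital-confirmed cases as of March 29, 2020) can be computed directly; this is only the naive calculation, not an estimate endorsed by the authors, and it is biased while many patients' outcomes are still pending.

```python
# Naive (crude) case fatality ratio from the cumulative counts quoted above.
deaths = 2640       # reported deaths in Iran up to March 29, 2020
confirmed = 38309   # hospital-confirmed cases up to the same date

crude_cfr_pct = 100 * deaths / confirmed
print(f"crude CFR = {crude_cfr_pct:.1f}%")  # about 6.9%, before any adjustment
```

The adjusted-CFR reasoning that follows corrects such a naive ratio for cases that were never detected.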
If a country has a higher adjusted CFR (eg, 20.02%), it means that only a fraction of cases have been reported (in this case, 1.38 /20.02 = 6.89% cases reported approximately). This formula can accurately estimate the statistics of all patients with COVID-19 (from asymptomatic and mild to severe cases and death). One article reported that up to 70% of the supply chain could be cut off and the epidemic could be controlled if contact and isolation, quarantine and isolation were appropriately accomplished (18). We think the top priorities in Iran are now circular and comprehensive efforts to conduct epidemiological studies and identify all aspects of the disease (source of disease, reservoir, pathways, infectivity, incubation period, incidence and prevalence, pathogenicity, immunogenicity, herd immunity, causes, epidemic and pandemic pattern, primary and secondary attack rates, response time, time needed for isolation and quarantine, treatment regimens, vaccines and other prevention methods, disease surveillance, and statistical reporting) and evidence-based interventions and epidemic control. The experience of China and South Korea should be used to control this epidemic disease in Southeast Asian countries and in Iran, as South Korea and China were successful in controlling the disease. Cultural conditions are also effective. Given the prediction and modeling of the number of Coronavirus cases in Iran and because the virus is going to circulate in the country for at least a few weeks, we will have an ascending trend in the coming weeks. We recommend using the WHO guideline to properly manage patients (29). Considering China's experience and the fact that it took about 70 days for the epidemic curve to flatten in China, and based on a search of scientific texts, this study provides the following recommendations: (1) Up-to-date and accurate data on definitions related to suspected, probable, and definitive people with Coronavirus should be collected at all provincial levels in the health care system. (2) Percentage of completion and accuracy of assessments and data should be monitored precisely and the epidemic curve should be drawn based on district, province, rural, and urban divisions and be provided to provincial and academic headquarters in an updated dashboard format. (3) Data should be carefully recorded and analyzed regarding the pathology, time of onset of symptoms, natural course of the disease, and the outcome of the disease to determine effective strategies to prevent and determine the necessity of intervention to control the spread of the disease at different levels. At all levels of the health system (governmental and nongovernmental), medical and diagnostic interventions and their outcomes should be recorded for all patients. These records should be based on the date of onset of symptoms, the date of referral, the date of diagnosis, method of diagnosis, the date of intervention and the outcome (eg, death, recovery, discharge). Such records can widely be used to compare and evaluate the cost-effectiveness of various diagnostic and therapeutic methods. Health system staff should be trained to appropriately and accu-rately record data, especially by means of web-based networks. This will certainly improve the quality of data recording. 
All epidemiological indicators that determine the epidemic pattern, including the baseline R 0 , attack rate, incubation period, index case, primary cases, secondary cases, and GIS mapping, should be determined in provinces, cities, and nationwide, and epidemic trends should be monitored. Access to the results of the analysis and to the data should be provided for researchers and experts on the basis of the specific protocols available for this purpose in the world and in Iran, and thorough critique and creative theories and ideas should be elicited from all university training and research groups. The models used to predict the end of the epidemic and control it should be evaluated as well. The results of our study are inconsistent with Zhuang's brief report (30), in which the data were collected from the World Health Organization (WHO). However, this report may not be accurate, as the WHO has not reported it. Limitations Given the urgency of the need for valid and transparent models to inform interventions and policies, some further considerations, such as asymptomatic cases, testing coverage and the time delay until test results become available, seasonality, and comorbidities, have not been included in this study. However, it may be feasible to consider them in revisions of the models or in future studies. Moreover, the progression of the epidemic across space and time in Iran has not been taken into account, as we used the parameters of Chinese models in the modeling estimates for Iran to calculate R 0 . In this study, no attempt was made to independently detect and report cases and deaths in Iran. Although this could have been performed, given that the study had already been designed, conducted, and was being reported, and given the exceptional mortality and morbidity situation, it was not feasible to go back and recount the cases and deaths due to COVID-19. However, it can be done in future similar studies. There is no fixed page on the Ministry of Health and Medical Education website for reference to the daily reports, and they are scattered on various pages of this site. If screening is done in the community along with biological sampling to diagnose COVID-19, the number of cases will certainly be higher and the results of the modeling will change. Conclusion The actual trend of detecting COVID-19 cases in Iran, which has been based on people's health behaviors and government interventions, has been increasing. In this study, estimates were based on current trends, social distancing, sampling of severe cases, hospitalization, and the tip-of-the-iceberg spread of the disease, and thus asymptomatic, mild, and moderate cases could not be calculated. We used the reports of positive COVID-19 cases in hospitals; thus, the prediction model in this study can be used for patients hospitalized due to COVID-19. Complete reliance on any type of model will lead to systematic and random error, unless the modeling provides a prediction with precise and clear assumptions, inputs, and outputs. To predict the flattening of the epidemic curve, 3 growth models with the percentage of the root mean square error (RMSE) were used. Based on the RMSE, the Gompertz growth model was valid and predicted that the epidemic curve will flatten around May 13, 2020, with about 67 000 hospitalized patients and 7900 deaths (RMSE = 10%).
This study suggests that government interventions and people's behaviors determine the persistence of the epidemic, and thus they should be addressed with greater responsibility, accountability, rigor, and quality.
Poly(A)-Binding Protein Cytoplasmic 1 Inhibits Porcine Epidemic Diarrhea Virus Replication by Interacting with Nucleocapsid Protein Porcine epidemic diarrhea virus (PEDV) is the etiological agent of porcine epidemic diarrhea (PED), characterized by vomiting, watery diarrhea, dehydration and high mortality. Outbreaks of highly pathogenic variant strains of PEDV have resulted in extreme economic losses to the swine industry all over the world. The study of host–virus interaction can help to better understand the viral pathogenicity. Many studies have shown that poly(A)-binding proteins are involved in the replication process of various viruses. Here, we found that the infection of PEDV downregulated the expression of poly(A)-binding protein cytoplasmic 1 (PABPC1) at the later infection stage in Vero cells. The overexpression of PABPC1 inhibited the proliferation of PEDV at the transcription and translation level, and siRNA-mediated depletion of PABPC1 promoted the replication of PEDV. Furthermore, mass spectrometry analysis and immunoprecipitation assay confirmed that PABPC1 interacted with the nucleocapsid (N) protein of PEDV. Confocal microscopy revealed the co-localization of PABPC1 with N protein in the cytoplasm. Taken together, these results demonstrate the antiviral effect of PABPC1 against PEDV replication by interacting with N protein, which increases understanding of the interaction between PEDV and host. Introduction Porcine epidemic diarrhea virus (PEDV), a member of the genus Alphacoronavirus, mainly infects suckling piglets and causes porcine epidemic diarrhea (PED), characterized by vomiting, watery diarrhea, dehydration, and high mortality [1][2][3][4]. PED was first reported in England in 1971 and spread to other swine production countries subsequently [5]. Since 2010, outbreaks of PED have resulted in extreme economic losses to the swine industry all over the world because of the emergence of highly pathogenic mutant strains [4,6]. PEDV is a positive-sense single-strand RNA virus and the whole genome is approximately 28 kb, including the 5′-untranslated region (UTR), open reading frame (ORF) 1a/1b, spike (S), ORF3, envelope (E), membrane (M), and nucleocapsid (N) genes, and the 3′-UTR. The poly(A) tail is necessary during coronavirus genome replication [7]. The whole genome of PEDV translates into four structural proteins, S, E, M and N, and 16 nonstructural proteins [8]. As a major component of the nucleocapsid structure, N protein has a variety of biological functions [9]. By activating nuclear factor kappa-light-chain-enhancer of activated B cells and upregulating interleukin-8 expression, N protein antagonizes interferon (IFN) production and disrupts the antiviral response of host cells [10,11]. N protein has good immunogenicity and can induce a strong cellular immune response [12]. The study of host antiviral factors can help to better understand the host-virus interaction. Several antiviral factors have been reported to show antiviral activity against PEDV infection. Bone marrow stromal cell antigen 2 suppresses PEDV replication by targeting and degrading N protein with selective autophagy [12]. Transferrin receptor 1 levels at the cell surface influence the susceptibility of newborn piglets to PEDV infection [13]. Tomatidine inhibits PEDV replication by targeting the 3CL protease [14]. Viperin interacts with the viral N protein to inhibit PEDV proliferation [15].
Moreover, cholesterol 25-hydroxylase, GTPase-activating protein-binding protein 1, interleukin-11 and IFN-λ can regulate PEDV infection and replication [16][17][18][19]. Poly(A)-binding protein cytoplasmic 1 (PABPC1), one of the poly(A)-binding proteins (PABPs), is composed of four non-identical RNA-recognition motifs (RRMs) and a C-terminus which consists of a proline-rich region and a globular domain [20]. In the cytoplasm, PABP binds to the poly(A) tail at the 3′ end of mRNA through the RRMs and interacts with the N terminus of the eukaryotic translation initiation factor 4 gamma (eIF4G) protein. The interaction of PABP, mRNA and eIF4G constitutes a translation initiation complex, which mediates cellular mRNA circularization and enhances cap-dependent translation by facilitating ribosome recycling [21,22]. PABP can also interact with the deadenylation protein complex to promote the degradation of mRNA [20]. Many RNA and DNA viruses inhibit the translation of host cells as a means of interfering with the cell defense system. The NSP3A protein of rotavirus can bind to eIF4G to displace PABPC1 from the translation complex [23]. The 2A and 3C proteases of picornaviruses can inactivate PABPC1 by cleaving its N-terminus so that it cannot bind to eIF4G, thus affecting the normal translation of the host [24][25][26]. In addition, PABPC1 can promote or inhibit the translation of viral mRNA through various pathways. For example, in the absence of a poly(A) tail, PABP binds to the 3′-UTR of Dengue virus to promote translation [27]. However, PABPC4 inhibits PEDV replication by degrading the N protein [28]. In our previous studies on the transcription analysis of immortalized porcine intestinal epithelial cell clone J2 (IPEC-J2) cells after PEDV infection, we found that the mRNA expression of the PABPC1 gene was significantly upregulated at 12-18 h post infection (hpi) [29]. Based on these results, we hypothesize that the PABPC1 gene is involved in the PEDV replication process. In this study, we show that PABPC1 inhibited PEDV replication by interacting with the PEDV N protein, which demonstrates a new function of PABPC1 in PEDV infection and enriches the knowledge of the interaction between PEDV and host. Cells and Viruses Vero cells and 293T cells were grown and maintained in Dulbecco's modified Eagle's medium (DMEM, Gibco, Shanghai, China), supplemented with 10% heat-inactivated fetal bovine serum (FBS, Gibco, Shanghai, China). The cells were cultured at 37 °C in 5% CO2. PEDV strain GDS01 (GII subtype, GenBank accession number: KM089829.1) was cultured at a multiplicity of infection (MOI) of 0.1 and titrated in Vero cells in the presence of trypsin (10 µg/mL). The cells were harvested when 90% of the cells showed a cytopathic effect (CPE) and then subjected to three freeze-thaw cycles. After centrifugation at 10,000× g for 10 min at 4 °C, the supernatants were collected for further propagation or stored at −80 °C. Construction of Expression Plasmids The gene of PABPC1 (GenBank accession number: XM_007465734.1) was amplified from the genome of IPEC-J2 cells, and the N gene of PEDV (GenBank accession number: KM089829.1) was amplified from the genome of GDS01. The amplified genes were cloned into the pcDNA3.1(+) vector with a FLAG-tag or HA-tag, respectively.
The PCR primers used in this study were listed as follows: PABPC1-F: 5 -GCCACCATGGAGGCTCCCACCGGGGCT- Detection of the Antiviral Effect of PABPC1 by Overexpression and siRNA Interference in Vero Cells Upon reaching 80-90% confluence, Vero cells were transfected with recombinant plasmid pcDNA3.1(+)-PABPC1 with FLAG-tag or empty plasmid using Lipofectamine 3000 according to the manufacturer's recommendations (Thermo, Shanghai, China). Then, 24 h after transfection, the cells were infected with PEDV at an MOI of 0.1. The mRNA expression of PEDV N and PABPC1 genes was detected by RT-qPCR. The protein expression of PEDV N and PABPC1 was analyzed by Western blot with mouse anti-PEDV polyclonal antibody (prepared by our lab) and mouse anti-FLAG monoclonal antibody (Dia-an, CN). The PEDV titers in the supernatant were detected by plaque assay. Western Blot Assay The collected cells were efficiently lysed by RIPA Lysis (Beyotime, Shanghai, China) and the proteins were extracted with Extraction Buffer (Beyotime, Shanghai, China) in accordance with the manufacturer's instructions. The protease inhibitor, phenylmethanesulfonyl fluoride (PMSF), was added to block the endogenous proteolysis. Samples were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a polyvinylidene fluoride (PVDF) membrane. Nonspecific reaction was blocked with 5% skimmed milk in Tris-buffered saline Tween (TBST) buffer. Anti-PABPC1 rabbit monoclonal antibody (Abcam, Cambridge, UK) and anti-GAPDH mouse monoclonal antibody (Proteintech, Rosemont, IL, USA) were used as primary antibodies, respectively, at a ratio of 1:2000. After washing three times with TBST buffer, the membranes were incubated with HRP-conjugated secondary antibody (Proteintech, Rosemont, IL, USA) at a ratio of 1:5000. Protein expression was detected with the commercial ECL kit (Thermo, Waltham, MA, USA), and analyzed with IMAGE J software (v1.8.0, Bethesda, MD, USA). Plaque Assay Vero cells were harvested at 9 h post PEDV infection. After three freeze-thaw cycles, the cells were centrifuged at 10,000× g for 10 min at 4 • C, and the supernatants were harvested. Plaque assay was performed using Vero cells in 6-well plates when cells reached complete confluence. The virus samples were subjected to 10-fold gradient dilution. Then, the diluted samples were inoculated into Vero cells and discarded after 1 h. The cells were washed three times with phosphate-buffered saline (PBS), and then incubated with 2 mL DMEM medium containing 1% agarose. Finally, the cells were stained with 0.03% neutral red and the virus titers were calculated as plaque-forming units (pfu) /mL. Mass Spectrometry Analysis and Immunoprecipitation (IP) Assay Vero cells transfected with pcDNA3.1(+)-PABPC1 plasmid with FLAG-tag were infected with PEDV at an MOI of 0.1 and harvested at 6 hpi. The total proteins were extracted as described above. The supernatants were incubated with anti-FLAG mouse monoclonal antibody for 12 h at 4 • C, then Agarose A+G beads (Beyotime, Shanghai, China) were added. After 6 h incubation, the beads were collected by centrifugation at 2500× g for 5 min and washed eight times with cold protein extraction buffer. The beads were boiled in 5× SDS loading buffer to elute bound proteins for mass spectrometry analysis. To assay the interaction between PABPC1 and PEDV N protein, 293T cells were transfected with pcDNA3.1(+)-PABPC1 or empty plasmid with FLAG-tag and pcDNA3.1(+)-PEDV-N or empty plasmid with HA-tag, respectively. 
After 24 h, the cells were lysed and incubated with anti-FLAG monoclonal antibody, and the eluted samples were analyzed with anti-FLAG, anti-HA, or anti-PEDV antibodies. Confocal Microscopy 293T cells grown on a cover glass in a 12-well plate were transfected with pcDNA3.1(+)-PABPC1 or empty plasmid with FLAG-tag and pcDNA3.1(+)-PEDV-N or empty plasmid with HA-tag, respectively. After 24 h, the cells were permeabilized with 0.5% Triton X-100 at room temperature for 15 min and blocked with 5% bovine serum albumin (BSA) in PBST at room temperature for 1 h. The cells were incubated with rabbit anti-FLAG monoclonal antibody (Sigma, Saint Louis, MO, USA) and mouse anti-HA monoclonal antibody (Sigma, Saint Louis, MO, USA), respectively, overnight at 4 °C. After washing three times, the cells were incubated with goat anti-rabbit IgG antibody conjugated to Alexa Fluor 594 and goat anti-mouse IgG antibody conjugated to Alexa Fluor 647 (Sigma, Saint Louis, MO, USA), respectively. Fluorescent images were acquired using the Leica TCS SP5 confocal microscope (Leica, Wetzlar, Germany). Statistical Analysis Each experiment was repeated at least three times. Values were expressed as mean ± SD. Statistical analyses were performed using GraphPad Prism5 software (v5.0, San Diego, CA, USA) with Student's t-test. A p value < 0.05 was considered statistically significant and labeled with an asterisk in the figures. *, p < 0.05; **, p < 0.01; ***, p < 0.001. PEDV Infection First Upregulates and then Downregulates the Expression of PABPC1 in Vero Cells Many cell proteins have been identified as participating in host antiviral activities after PEDV infection in vivo or in vitro [12][13][14][30][31]. The present study focused on whether PABPC1 was involved in the PEDV infection process. The changes in PABPC1 protein expression in Vero cells were measured after PEDV infection at an MOI of 0.1. As shown in Figure 1, the protein expression of PABPC1 was upregulated at 8 hpi, reached the highest level at 12 hpi, and was downregulated at 16 hpi. These results suggest that, after PEDV infection, the expression of PABPC1 was upregulated at the early infection stage and downregulated at the later infection stage.
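Looking back at the Methods, the two pieces of arithmetic behind "infected with PEDV at an MOI of 0.1" and "virus titers were calculated as plaque-forming units (pfu)/mL" can be written out explicitly. The helper below is a generic sketch with invented example numbers, not code or data from this study.

```python
# Generic virology arithmetic (illustrative only, not from this study):
# (1) volume of virus stock needed to infect cells at a chosen MOI, and
# (2) titer in pfu/mL back-calculated from a plaque count at a known dilution.

def inoculum_volume_ml(cells: float, moi: float, stock_titer_pfu_per_ml: float) -> float:
    """Volume of stock (mL) delivering moi * cells infectious units."""
    return moi * cells / stock_titer_pfu_per_ml

def titer_pfu_per_ml(plaques: int, dilution: float, volume_ml: float) -> float:
    """Titer (pfu/mL) from a plaque count in a well inoculated with
    volume_ml of virus diluted by the given factor (e.g. 1e-5)."""
    return plaques / (dilution * volume_ml)

# Assumed example values: 1e6 Vero cells at MOI 0.1 with a 1e7 pfu/mL stock,
# and 52 plaques in a well that received 0.1 mL of a 1e-5 dilution.
print(inoculum_volume_ml(1e6, 0.1, 1e7))   # 0.01 mL, i.e. 10 µL of stock
print(titer_pfu_per_ml(52, 1e-5, 0.1))     # 5.2e7 pfu/mL
```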
Overexpression of PABPC1 Inhibits PEDV Replication in Vero Cells To explore whether PABPC1 could affect PEDV infection, we examined PEDV N-protein expression level and virus titers after PABPC1 was overexpressed. Vero cells were transfected with pcDNA3.1(+)-PABPC1 recombinant plasmid or empty plasmid for 24 h, then the cells were infected with PEDV at an MOI of 0.1. The cellular RNA was extracted to detect PEDV N-gene expression level by RT-qPCR at 3 hpi and 6 hpi. The PABPC1 and PEDV N-protein expression levels were confirmed by Western blot at 6 hpi. The cell-culture supernatants were collected to determine the virus titers by plaque assay at 9 hpi. Compared with the negative control, the mRNA expression level of N gene was significantly decreased in PABPC1-overexpressed cells (Figure 2A), and the expression level of N protein (Figure 2B) and virus titers (Figure 2C) showed a significant decrease as well. These data suggested that the overexpression of PABPC1 inhibited PEDV replication. Values represent means ± SD from three independent experiments. * means p < 0.05; ** means p < 0.01; *** means p < 0.001, **** means p < 0.0001. Knockdown of PABPC1 Expression Promotes PEDV Replication in Vero Cells To further clarify the role of PABPC1 in PEDV replication, we knocked-down the endogenous expression of PABPC1 with specific siRNA in Vero cells. Western blot assay confirmed all three synthesized siRNA targeting PABPC1 decreased the expression of endogenous PABPC1 (Figure 3B). At 24 h after siRNA transfection, Vero cells were infected with PEDV at a MOI of 0.1. At 6 hpi, the mRNA expression level of N gene was detected by RT-qPCR, and the expression level of N protein was confirmed by Western blot analysis. Compared with negative control, the mRNA level of N gene showed significant upregulation in siP1- and siP3-transfected cells (Figure 3A). The expression level of N protein increased by 1.83, 1.58, and 1.72 times in siP1-, siP2-, and siP3-transfected cells respectively (Figure 3B).
Overall, these results proved that the knockdown of PABPC1 expression promoted PEDV replication. PABPC1 Interacts Directly with N Protein of PEDV To further investigate the relationship between PABPC1 and PEDV, the FLAG-tag antibody was used to precipitate the proteins of FLAG-PABPC1-overexpressed and PEDV-infected Vero cells, and the precipitated product was analyzed by mass spectrometry. Multiple proteins, including PEDV N protein, were detected in the precipitated product, indicating that PEDV N protein could interact with PABPC1 in Vero cells (Figure 4A). Thus, we detected the interaction between PABPC1 and PEDV N protein. Firstly, the cellular locations of PABPC1 and N protein were assayed using confocal microscopy by overexpressing them with different tags in 293T cells. Results showed that PABPC1 was mainly distributed in the cytoplasm and there was a small amount of PABPC1 in the nucleus; N protein was mainly distributed in the cytoplasm (Figure 5). There were obvious co-localizations of PABPC1 with N protein in 293T cells. Subsequently, to confirm the interaction, the immunoprecipitation assays were performed by overexpressing PABPC1 and N proteins with different tags in 293T cells.
As shown in Figure 4B, N protein with HA-tag can be detected after immunoprecipitation using anti-FLAG monoclonal antibody in 293T cells, and vice versa (Figure 4C). These results confirmed the interaction between PABPC1 and PEDV N protein. Discussion Viruses are non-cellular life forms that must parasitize within cells to proliferate. During virus invasion into host cells, a series of complex interactions occurs between the virus and the host cell, including the regulation and modification of the cell by the virus and antiviral action by the cellular factors. Up to now, several antiviral factors have been explored during PEDV infection, such as bone marrow stromal cell antigen 2, transferrin receptor 1, cholesterol 25-hydroxylase, GTPase-activating protein-binding protein 1, interleukin-11, and IFN-λ [12][13][14][15][16][17][18][19]. The study of host antiviral factors can help to better understand the host-virus interaction. In this study, we explored and identified that the poly(A)-binding protein directly participated in PEDV replication via interaction with N protein. Coronavirus replication involves not only viral proteins, but also cellular proteins, which are subverted from the normal functions of the host to play roles in the viral replication cycle. Several cellular proteins have been shown to bind to the regulatory elements of mouse hepatitis virus RNA, including the 5′ and 3′ ends of the genomic RNA and the 3′ end of the negative-strand RNA [32][33][34][35][36][37]. PABP is known to interact specifically with poly(A), and the binding of PABP to the 3′-UTR of the defective-interfering (DI) RNA replicons corrects the ability of the DI RNA to replicate, suggesting that the interaction between PABP and the poly(A) tail may affect coronavirus RNA replication [34,[38][39][40][41]. However, metazoans often encode multiple cytoplasmic PABPs; which PABPs play a key role in the replication of the coronavirus genome has not yet been studied clearly. Jiao et al. have proved that PABPC4 broadly inhibits coronavirus replication by degrading N protein through selective autophagy [28]. Until now, PABP's effect on the infection of PEDV remains unclear. In our previous studies about transcription analysis of IPEC-J2 cells after PEDV infection, we found that the mRNA expression of the PABPC1 gene significantly upregulated at 12-18 hpi [41]. In this study, we identified that PABPC1 protein expression upregulated in the early stage and downregulated at the later infection stage of PEDV infection in Vero cells (Figure 1). Thus, we speculated that PABPC1 is involved in PEDV replication. Then we overexpressed and knocked-down PABPC1 to explore the role of PABPC1 in PEDV replication. After the overexpression of PABPC1, the mRNA and protein expression level of PEDV N, as well as virus titers, were significantly downregulated (Figure 2).
After the knock-down of PABPC1 expression by specific siRNA, PEDV replication was promoted with significant upregulated expression of N protein in mRNA and protein level (Figure 3). These results demonstrate that PABPC1 inhibits PEDV replication, and it is the first time the negative effects of PABP in PEDV replication have been reported. Furthermore, to identify the mechanism of PABPC1 inhibition of PEDV replication, immunoprecipitation assay and mass spectrometry were carried out. In the study of Tsai et al. about the roles of interactions among the poly(A) tail, coronavirus N protein, and PABP in the regulation of coronavirus gene expression, they conclude that N protein competes with PABP to bind to the poly(A) tail, with high affinity, and results in translation inhibition [42]. In our mass spectrometry results, N protein was detected ( Figure 4A). Protein co-location and immunoprecipitation assay confirmed the interaction of PABPC1 with N protein in the cytoplasm. As well as N protein, eIF4A, eIF3B and eIF3C proteins were also detected in mass spectrometry, which indicates that these three proteins may be involved in the interaction of PABPC1 and N protein; further study is needed to prove it. In conclusion, we demonstrated PABPC1, as an antiviral factor, inhibited PEDV replication at both the transcription and translation level by interaction with N protein. This study identified a new function of PABPC1 in PEDV infection and enriched the knowledge of interaction between PEDV and host.
Analysis of Extended Spectrum Beta Lactamase Frequency in Klebsiella spp Isolates The issue of increasing resistance to antibiotics in recent years has become an important problem all over the world. Our aim is to determine the antimicrobial resistance profile and Extended Spectrum Beta-Lactamase (ESBL) rates in Klebsiella spp isolates to prevent the gradual increase in multi-resistant isolates as a result of unconscious antibiotic use, thereby contributing to faster effective treatment of infections. A total of 100 Klebsiella spp were isolated and identified from various clinical specimens. Antibiotic susceptibility tests were performed using the Kirby-Bauer method. The presence of extended-spectrum beta-lactamases (ESBL) was detected using the Double Disc Synergy Test (DDST) and E-test methods. The rates of ESBL-producing strains were 46.1% in 6 K. oxytoca and 56.3% in 49 K. pneumoniae. These strains were found to be 38% in 38 adult patients and 17% in 17 pediatric patients, and this difference was statistically significant (p < 0.05). The ESBL rate was 31% in 31 male patients and 24% in 24 female patients, and this difference was not statistically significant (p > 0.05). This rate was found to be high in patients hospitalized in the pediatric service and intensive care unit. 67 out of 100 strains were found to be suspicious for ESBL by the Disk Diffusion Test (DDT). DDST and E-tests were applied as confirmatory tests. The sensitivity of the DDST and E tests was 100%. Screening for ESBL in Klebsiella spp and other members of Enterobacteriaceae isolates is necessary to reduce further selection and spread of these increasingly broad-spectrum antimicrobial-resistant enteric pathogens. Introduction The issue of increasing resistance to antibiotics in recent years has become an important problem all over the world [1]. Beta-lactams have become the most prescribed antibiotics today due to their superior spectrum of action, high and selective toxicity to microorganisms, applicability in almost all age groups, relatively low incidence of side effects compared to other groups, and superior distribution to all body fluids. The numerical weight of these drugs among all licensed antibiotics is close to 70%. However, the resistance of bacteria to these antibiotics has increased rapidly over the years due to unnecessary, inappropriate, and intensive use and insufficient application of infection control methods in hospitals [2]. The most important mechanism in the development of resistance to beta-lactam antibiotics in Gram-negative bacteria is beta-lactamase production [1]. Beta-lactamases are enzymes that destroy the antibacterial effect of beta-lactam antibiotics by breaking the amide bonds in the beta-lactam ring and can be synthesized by many bacterial species, especially Enterobacteriaceae members [2]. More than 500 beta-lactamase enzymes have been identified to date. The most important beta-lactamase enzyme groups are plasmid-encoded cephalosporinases, metallo-beta-lactamases, and ESBLs. About 200 beta-lactamases can be transferred between bacteria due to their plasmid properties [3,4].
Infections caused by ESBL-producing strains are frequently seen in patients who are hospitalized for a long time, undergo major surgery, or have arterial and urinary catheters, and especially in intensive care units. In recent years, however, the incidence of community-acquired infections has also increased [5]. ESBL enzymes have become an important resistance mechanism in today's hospitals because they spread easily through plasmids, can cause epidemics, and give rise to serious clinical problems such as treatment failure and increased mortality in infections caused by these strains. Good identification of these enzymes in the laboratory is therefore important for guiding treatment. Our study aimed to help prevent the gradual increase of multi-resistant strains, treat infections more rapidly, and prevent indiscriminate antibiotic use.

Material and Method

This study was carried out with the approval of the Harran University Clinical Research Ethics Committee (dated 14.12.2012, numbered 05) in the Laboratory of the Microbiology Department of Harran University Faculty of Medicine. A total of 100 Klebsiella spp. strains were isolated from various clinical samples sent to the Microbiology Laboratory of Harran University Research and Application Hospital between January 2014 and June 2015. The Klebsiella spp. strains were evaluated for extended-spectrum beta-lactamase production and their antibiotic susceptibility was determined. Repeated samples from the same patient were excluded from the study. The antimicrobial discs tested were: amoxicillin-clavulanate (AMC, 10/20 μg), imipenem (IMP, 10 μg), piperacillin-tazobactam (TZP, 10/100 μg), cefepime (FEP, 30 μg), amikacin (AK, 30 μg), ciprofloxacin (CIP, 5 μg), gentamicin (CN, 120 μg), cefotaxime (CTX, 30 μg), ceftazidime (CAZ, 30 μg), ceftriaxone (CRO, 30 μg), cefoxitin (FOX, 30 μg), and sulfamethoxazole-trimethoprim (SXT, 10 μg).

Statistical Analysis

All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) for Windows (version 15.0; SPSS, Chicago, IL). The chi-square test was applied for comparisons. A p-value of less than 0.05 was considered statistically significant.

Results

In this study the Kirby-Bauer method was used; zones of inhibition were measured and interpreted according to the Clinical and Laboratory Standards Institute (CLSI) criteria. The DDST (Figure 1) and E-test (Figure 2) methods were used to detect the presence of ESBL. Epidemiological information such as age, gender, sample type, and risk factors for Klebsiella infection was recorded.

Epidemiological Information

Of the 100 strains sent to the microbiology laboratory, 87 were identified as K. pneumoniae and 13 as K. oxytoca using biochemical tests and the API 20 E system. ESBL production was detected in 49 (56.3%) of the K. pneumoniae strains and 6 (46.1%) of the K. oxytoca strains. The distribution of ESBL presence according to the isolated Klebsiella species is shown in Table 1. Of the 100 patients included in the study, 54 were male and 46 were female. ESBL positivity was detected in 31 (57.4%) of the strains isolated from males and 24 (52.1%) of the strains isolated from females. Although the frequency of ESBL was higher in males than in females, the difference was not statistically significant (p > 0.05) (Figure 1).
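As an aside, the gender comparison just quoted can be reproduced from the reported counts (31 of 54 male and 24 of 46 female isolates ESBL-positive) with a standard chi-square test. The snippet below is only an illustration of that calculation, not the authors' original SPSS analysis.

```python
# Illustrative sketch (not the authors' SPSS analysis): chi-square test on the
# reported ESBL-by-sex counts, 31/54 male and 24/46 female isolates ESBL-positive.
from scipy.stats import chi2_contingency

#                 ESBL+   ESBL-
table = [[31, 54 - 31],   # isolates from male patients
         [24, 46 - 24]]   # isolates from female patients

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# p is well above 0.05, consistent with the reported lack of a significant
# difference between male and female patients.
```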
Antibiotic Susceptibility Test Results

The resistance of the strains to 11 different antibiotics was investigated by DDT, and the results were interpreted according to the CLSI criteria. According to the antibiotic susceptibility tests, 16% of the Klebsiella spp. strains were sensitive to all antibiotics. The highest resistance was seen against SXT, at a rate of 80%, followed by 68% for CRO, 65% for CTX, 62% for ATM, 52% for CAZ, 50% for CIP, 35% for AMC, 30% for CN, 14% for PRP, 13% for AK, and 1% for IPM. The antibiotic to which the strains were most sensitive was IPM, at 99%. The susceptibility and resistance rates of the strains are shown in Table 3.

Results of ESBL Screening and Confirmation Tests

Among the 100 Klebsiella spp. strains, ESBL production was detected in 67 (67%) with DDT, 55 (55%) with DDST, and 55 (55%) with the E-test, according to CLSI recommendations (Table 4). Samples found to be "ESBL suspicious" with CRO, CTX, CAZ, and ATM by the disk diffusion test were confirmed with DDST, taken as the reference test, and compared with the E-test. While DDT with these four antibiotics flagged 67 strains as ESBL positive, only 55 strains were positive with the confirmatory DDST and 55 with the E-test, suggesting that DDT may cause false positives. When the resistance of ESBL-positive and ESBL-negative strains (according to DDST) was examined, ESBL-positive strains were most resistant to CRO (87%), CTX (85%), and ATM (83%), and no resistance to IPM was observed. ESBL-negative strains were most resistant to SXT (80%), CRO (46%), and CAZ (41%), and least resistant to IPM (3%) (Figure 4).

The mean hospital stay of the patients included in the study was 41.97 ± 62.41 days. The mean hospitalization period was 43.89 ± 59.46 days for ESBL-positive patients and 20.79 ± 34.78 days for ESBL-negative patients. When ESBL-positive and ESBL-negative isolates were compared with respect to the recognized risk factors for ESBL, namely length of hospital stay, antibiotic use, hospitalization in the ICU, presence of a central venous catheter and/or urinary catheter, and presence of a severe underlying disease (malignancy, sepsis, and others), the difference was significant (p < 0.05).

Discussion and Conclusion

Nosocomial infections increase morbidity and mortality rates as well as hospital costs, and they constitute a serious public health problem [6]. In one 24-hour point-prevalence study conducted in 1150 centers, infection was proven in 54% of the patients in the intensive care unit, and 70% of all patients were receiving at least one antibiotic (prophylactic or therapeutic); hospital mortality was reported to be 30% in patients with proven infection [7]. In our study, the samples from which Klebsiella spp. were most frequently isolated were urine samples.
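Before turning to the literature, a brief aside on the hospital-stay figures reported in the Results above: since only means, standard deviations, and the 55/45 ESBL split are given in this excerpt, a Welch t-test computed directly from those summary statistics offers a rough cross-check of the reported significance. This is an illustration under those assumptions, not the authors' analysis (the paper used chi-square comparisons in SPSS).

```python
# Rough cross-check of the hospital-stay difference from the reported summary
# statistics. Group sizes (55 ESBL-positive, 45 ESBL-negative) are assumed from
# the DDST counts above; this is an illustration, not the authors' method.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=43.89, std1=59.46, nobs1=55,
                            mean2=20.79, std2=34.78, nobs2=45,
                            equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # p < 0.05, in line with the reported significance
```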
According to studies conducted in various centers in Turkey, the rate of nosocomial infections was found to be 9% to 11.1%, while gram-negative bacteria were reported to cause 36.8% of nosocomial infections [8,9]. The prevalence of ESBL-producing Enterobacteriaceae varies widely between hospitals; rates of less than 1% to more than 70% of ESBL producers have been reported worldwide [10]. There are significant geographical differences in the occurrence of ESBLs. In a large study, the rate of ESBL-producing Klebsiella spp. isolates varied widely between countries, from values as low as 4.2%; reported rates include Canada 4.9%, Spain 20.8%, Taiwan 28.4%, Turkey 78.6%, Algeria 20%, China 51%, and Germany 1.5% [11]. The reasons for these differences are the high socioeconomic and cultural variability of the different regions, the use of different diagnostic methods in different patient groups, and differences in which antibiotics are commonly used. A study similar to ours, conducted in England using the double disc synergy test and the E-test, found the sensitivity of these tests to be 93% [12]. Antibiotic-resistant Klebsiella spp. isolates producing ESBL and/or carbapenemase and resistant to third/fourth-generation cephalosporins and carbapenems are of great concern [12]. In one study, ESBL production in K. pneumoniae was 85.4%, and the highest resistance levels were seen for SXT (77.0%), AMC (71.6%), CRO (62.2%), FEP (60.3%), and CAZ (60.8%) [13]. In our study, it was found that Klebsiella spp. were most resistant to SXT (80%), whereas they were most susceptible to IPM (99%). Resistance rates to third-generation cephalosporins were 68% for CRO, 65% for CTX, 52% for CAZ, and 62% for ATM. Both ESBL-positive and ESBL-negative isolates were susceptible to IPM.

Resistance to carbapenems occurs by different mechanisms: changes in the active sites of penicillin-binding proteins (PBPs), decreased expression of outer membrane proteins (OMPs), efflux pumps, and production of β-lactamase enzymes. Of these four mechanisms, production of β-lactamase enzymes is the most clinically important; it may result from horizontal transfer of the β-lactamase genes responsible for producing these enzymes [13,14]. β-lactamases specifically target the β-lactam ring and break a bond in the ring, rendering the antibiotic inactive. Based on their activity profile, β-lactamases are grouped into four types: penicillinases inactivate penicillins but not cephalosporins, aztreonam, or carbapenems; cephalosporinases inactivate cephalosporins and aminopenicillins, but not other penicillins, aztreonam, or carbapenems; ESBLs inactivate all β-lactams except carbapenems; and carbapenemases inactivate carbapenems as well as other β-lactam antibiotics [15,16].
It is difficult to detect the presence of ESBL with the standard antimicrobial disk susceptibility tests routinely performed in most microbiology laboratories. Resistance or decreased susceptibility to these antibiotics in susceptibility tests may be a clue to ESBL production. However, routine susceptibility testing may not show resistance or even intermediate susceptibility, as some bacteria producing these enzymes may have low-level resistance (MIC 4-16 µg/ml). Failure to detect such a resistance mechanism by susceptibility testing results in latent resistance that can be transferred by plasmids to other bacteria and causes serious problems in treatment [17,18]. Because of the increasing prevalence of ESBL production, its frequency in clinical isolates, its easy spread through plasmids, the serious clinical problems it causes, such as epidemics, treatment failure, and increased mortality, and the difficulty of identifying it with routine susceptibility tests, Enterobacteriaceae species should be characterized correctly by special methods. Although ESBL-producing bacteria are resistant to broad-spectrum cephalosporins and aztreonam, they can appear susceptible in routine antibiotic susceptibility tests and may cause problems during treatment [19].

Many methods have been proposed to detect ESBL-producing bacteria, including ceftazidime resistance screening, DDST, the Combined Disk Test (CDT), the three-dimensional test, the E-test, the use of a higher bacterial density, disk diffusion on media containing clavulanic acid, MIC determination with a clavulanic acid combination, and automated Vitek and micro-screening methods [20,21].

Beta-lactamases act by cleaving the cyclic amide bond in the beta-lactam ring. Beta-lactamase genes are encoded on the bacterial chromosome or on genes found in plasmids or transposons. Plasmid-derived beta-lactamases such as TEM-1, TEM-2, and SHV-1 are common enzymes among members of the Enterobacteriaceae and are transferred to other bacteria via plasmids. Although ESBL enzymes are mainly derived from the TEM and SHV enzymes, new plasmid-derived ESBLs not derived from TEM and SHV, such as CTX-M, OXA-1, PER-1, and PER-2, have also been identified [22][23][24].

In our study, the specificity of DDT (with CAZ, CRO, CTX, and ATM together) was 65.3% and the sensitivity 93.1%, and CAZ (82%) was the indicator that revealed the most "ESBL suspect" isolates. The high sensitivity of the DDST and E-test in our study suggests that mostly TEM- or SHV-type enzymes were produced in our hospital strains. Even so, inhibitor combination tests alone are insufficient for diagnosing ESBL and an additional test is definitely needed, because factors such as changes in outer membrane protein profiles (such as OmpF and OmpC deficiency), the presence of beta-lactamases not inhibited by clavulanic acid, and low-level expression of chromosomal AmpC beta-lactamases also contribute to resistance [25].
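For reference, the sensitivity and specificity figures quoted above are obtained from a 2×2 comparison of the screening test (DDT) against the confirmatory test. The sketch below shows the standard calculation with placeholder counts; the study's underlying contingency table is not given in this excerpt.

```python
# Generic sensitivity/specificity calculation for a screening test judged
# against a confirmatory reference. The counts below are placeholders only;
# the study's underlying 2x2 table is not reported in this excerpt.
def screen_performance(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives found among reference-positives
    specificity = tn / (tn + fp)   # true negatives found among reference-negatives
    return sensitivity, specificity

# Example with made-up counts:
sens, spec = screen_performance(tp=50, fp=10, fn=5, tn=35)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```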
When we investigated the risk factors for ESBL colonization and infection, the incidence of ESBL was found to be significantly higher in mechanically ventilated patients. In patients with a nasogastric tube, recent surgery was found to be associated only with the E-test method, while ESBL positivity was found to be significant only with DDST. There was no significant difference between the other factors and ESBL. In one study, risk factors for ESBL (age, gender, length of hospital stay, severity of disease, presence of a urinary catheter or mechanical ventilator, and antibiotic use up to two weeks before bacteremia) were investigated, and previous treatment with third-generation cephalosporins was the only independent risk factor (p = 0.008). A similar study found the use of antibiotics containing an oxyimino ring to be a risk factor for ESBL production in K. pneumoniae strains [25,26]. According to one view, the reasons for the differences in risk factors between studies, including ours, are the retrospective character of such studies, insufficient numbers of patients, the lack of consensus in distinguishing colonization from true infection, insufficient data on patients' antibiotic use before admission or infection, and the collection of isolates from only certain wards [27].

Three-dimensional testing and dilution methods for detecting the presence of ESBL are difficult and time-consuming to apply in practice. DDST and CDT are frequently used in laboratories; they can be difficult and costly to read, because the beta-lactamase inhibitor diffuses toward the beta-lactam antibiotic side, and their sensitivities are close to each other. Even so, as phenotypic confirmatory tests, both the DDST and the CLSI-recommended combined disk test are practical and easy to apply in every laboratory. One advantage of CDT is the convenience of using only two discs. The high level of ESBL production detected in our laboratory suggests that most of the isolates in our hospital are susceptible to inhibition by clavulanic acid.

Two tests other than the disk diffusion test therefore seem to be good options for determining ESBL production in routine laboratories. However, the E-test requires meticulous application, it is more expensive than the other methods, and the diffusion of the beta-lactamase inhibitor toward the beta-lactam antibiotic side sometimes makes the result difficult to evaluate. The double disc synergy test, on the other hand, has the disadvantage that the distance between the disks affects the result. Our recommendation, based on the literature, is that CDT appears to be an excellent test when cephamycin and beta-lactamase inhibitor combinations are tested for ESBL-producing strains, fourth-generation cephalosporins for the AmpC enzyme, and ceftazidime for the K1 enzyme overproduced in K. oxytoca.
Figure 1: Distribution of ESBL presence by gender.
Figure 2: Distribution of ESBL presence by child and adult age groups.
Figure 3: Distribution of ESBL presence by ICU and clinics.
Figure 4: Antibiotic resistance rates of ESBL-positive and -negative samples by double disc synergy test.
Table 1: Distribution of ESBL presence by isolated Klebsiella species.
Table 2: Distribution of ESBL presence according to the clinical samples from which strains were isolated.
Table 3: Antibiotic susceptibility results of Klebsiella spp. strains.
Table 4: Comparison of DDT and confirmation tests.
What is the Discrete Gauge Symmetry of the MSSM? We systematically study the extension of the Supersymmetric Standard Model (SSM) by an anomaly-free discrete gauge symmetry Z_N. We extend the work of Ibáñez and Ross with N=2,3 to arbitrary values of N. As new fundamental symmetries, we find four Z_6, nine Z_9 and nine Z_18. We then place three phenomenological demands upon the low-energy effective SSM: (i) the presence of the mu-term in the superpotential, (ii) baryon-number conservation up to dimension-five operators, and (iii) the presence of the see-saw neutrino mass term LHLH. We are then left with only two anomaly-free discrete gauge symmetries: baryon-triality, B_3, and a new Z_6, which we call proton-hexality, P_6. Unlike B_3, P_6 prohibits the dimension-four lepton-number violating operators. This we propose as the discrete gauge symmetry of the Minimal SSM, instead of R-parity. Introduction The action of the Standard Model (SM) [1,2] is invariant under Poincaré transformations, as well as the gauge group G SM = SU(3) C × SU(2) W × U(1) Y . When allowing only renormalisable interactions, baryon- and lepton-number are (accidental) global symmetries of the SM. However, when considering the SM as a low-energy effective theory, G SM allows for non-renormalisable interactions, which can violate lepton- and baryon-number. The leading dimension-six operators are suppressed by two powers of an unknown mass scale M, which is unproblematic for proton decay if M ≳ 10^{16} GeV, see however [7,8]. Enlarging the Poincaré group, the action of the Supersymmetric SM (SSM) is invariant under supersymmetry, as well as G SM [9,10]. The renormalisable superpotential of the SSM is given by [11,12,13,14], where we employ the notation of Ref. [15], and SU(3) C and SU(2) W indices are suppressed. The fifth, sixth and eighth terms violate lepton-number, and the seventh term violates baryon-number. Thus in the SSM, lepton- and baryon-number are violated by renormalisable dimension-four interactions. In particular, L Q D̄ and Ū D̄ D̄ together lead to rapid proton decay. The lower experimental bound on the proton lifetime [16,17] results in the very stringent bounds [18,13,19] $\lambda'_{i1j}\,\lambda''_{11j} < 2\cdot 10^{-27}\,\big(M_{\tilde d_j}/100\,\mathrm{GeV}\big)^2$, i = 1, 2, j = 1, (1.2) and the SSM must be considered incomplete. In order to obtain a natural and viable supersymmetric model, we must extend G SM , such that at least one of the operators L Q D̄ or Ū D̄ D̄ is forbidden. The Minimal SSM (MSSM) is conventionally taken as the renormalisable SSM with the superpotential, Eq. (1.1), additionally constrained by the discrete symmetry R-parity, $R_p = (-1)^{2S+3B+L}$ [22], which acts on the components of the superfields. Here S is spin, B baryon-number and L lepton-number. Hence the superpotential of the renormalisable MSSM is given solely by the first line of Eq. (1.1), and baryon- and lepton-number are conserved. Matter-parity (M p ) [23] acts on the superfields and leads to the same superpotential as R p . Our working definition of the MSSM shall be the SSM constrained by M p . We return to this in Sect. 6. Another possibility is to extend G SM by baryon-triality (B 3 ) [24,25], leading to the R-parity violating MSSM [15].
However, due to the unification of the G SM gauge coupling constants in supersymmetry [27,28,29,30], and also the automatic inclusion of gravity in local supersymmetry [31,32], we expect the SSM, and also the MSSM, to be low-energy effective theories, embedded in a more complete theory formulated at the scale of Grand Unified Theories (M GUT ∼ 10 16 GeV) [33], or above. Within the SSM, we must therefore take into account the possible non-renormalisable operators, which are consistent with G SM , within the MSSM, those which are also consistent with M p . In particular, we are here interested in the dimension-five baryon-and/or lepton-number violating interactions. In Eq. (6.1), we list the complete set for the SSM [11,12,15,25]; a subset is also present in the MSSM. Even if suppressed by the gravitational scale M grav = 2.4 × 10 18 GeV, these operators are potentially dangerous, depending on their flavour structure [11,12,34]. Thus, even though M p provides the SSM with an excellent candidate for cold dark matter it has a serious problem with baryon-number violation. When considering the (high-energy) symmetry extension of the SSM, we take into account the effects on the dimension-four and the dimension-five operators. It is the purpose of this paper to systematically investigate discrete Z N symmetry extensions of G SM without invoking the existence of new light particles. Since a global discrete symmetry is typically violated by quantum gravity effects [35], we focus on an Abelian discrete gauge symmetry (DGS): it is a discrete remnant of a spontaneously broken U(1) gauge symmetry [35,36]. For an explicit Lagrangian see, e.g., Ref. [37]. Assuming the original gauge theory to be anomaly-free, Ibáñez and Ross (IR) determined the constraints on the remnant low-energy and family-independent DGSs [24,25]. They classified all Z N DGSs for N = 2, 3 according to their action on the baryon-and leptonnumber violating operators and then determined which are discrete gauge anomaly-free (see the end of Sect. 2). They found only two such anomaly-free DGSs which prohibited the dimension-four baryon-number violating operators and allowed the H d H u term: matter-parity (R 2 in their notation) and baryon-triality, B 3 . The latter has the advantage of also prohibiting the dangerous dimension-five operators. In this paper, we extend the work of IR to Z N symmetries with arbitrary values of N. We first determine all family-independent anomaly-free DGSs consistent with the first three terms in Eq. (1.1) (Sects. [2][3][4]. From the low-energy point of view, where heavy and possibly Z N charged particles do not play a rôle, this infinite number of anomalyfree DGSs can be rescaled to an equivalent finite set, which we denote as fundamental (Sect. 5). We are left with four Z 6 , nine Z 9 , and nine Z 18 new symmetries, beyond the five Z 2,3 symmetries of IR. Together these twenty-seven fundamental DGSs comprise a complete set. This is one of the main results of this paper. Next, we investigate their effect on the baryon-and lepton-number violating operators (Sect. 6). There is only one DGS which simultaneously allows the H d H u term, prohibits all dimension-four baryon-and lepton-number violating operators, prohibits the dimension-five baryon-number violating operators and allows the dimension-five Majorana neutrino mass term LH u LH u . This is one of the Z 6 symmetries, R 5 6 L 2 6 , in the notation of IR. We shall denote it protonhexality, P 6 . This we propose as the DGS of the MSSM. 
Every Z 6 is isomorphic to a direct product of a Z 2 and a Z 3 [38], so it is not too surprising that P 6 is isomorphic to the direct product of M p and B 3 . We then investigate the necessity of heavy fermions in theories with anomaly-free DGSs (Sect. 7), leading to a different conclusion than Ref. [39]. In Sects. 2-7 we take a bottom-up approach in determining the discrete symmetry. At the CERN LHC, we will hopefully discover supersymmetric fields and their interactions. Through the measured and thus allowed interactions we can infer the discrete symmetry. From this point-of-view, two discrete symmetries are equivalent, if they result in the same low-energy interactions. In Sect. 8, we instead investigate the top-down perspective, focussing on the distinct gauge theories leading to low-energy equivalent DGSs. For demonstrational purposes we finally present a gauged U(1) model, which, after spontaneous symmetry breaking, leads to an effective SSM with proton-hexality (Sect. 9). We briefly comment on some related work in the literature. Throughout we restrict ourselves to family-independent DGSs. For examples of family-dependent DGSs see Refs. [25,40]. We shall, however, in general, allow for the original gauge symmetry to be family-dependent. We do not consider discrete R-symmetries. For an anomaly-free gauged U(1) R-symmetry in a local supersymmetric theory see Refs. [41,42,43]. This could be broken to a discrete R-symmetry. Since R-parity is inserted ad hoc in the SSM to give the MSSM, there is an extensive literature on "gauged" R-parity, i.e. where R-parity is the remnant of a broken gauge symmetry. Martin has considered R-parity as embedded in a U(1) B−L gauge symmetry and classified the possible order parameters in extended gauge symmetries [SO (10), SU(5), SU(5) × U(1), E 6 ], which necessarily lead to R-parity [44,45]. Babu et al. [46] combine DGSs with an attempt to solve the µ-problem. Chemtob et al. [47] deal with anomaly-free DGSs of the next-to-MSSM (NMSSM). Although not in our systematic context, some of the anomaly-free DGSs we find are mentioned in the literature explicitly [46] or implicitly [48]. In particular, P 6 occurs in Ref. [46], and in Refs. [49,50] a related non-supersymmetric Z 6 is studied. The Linear Anomaly Constraints In this section, we review the work of IR [24,25] on DGSs. We focus here on constraints arising from the linear U(1) X anomalies A CCX , A W W X and A GGX , where we adopt the notation of Ref. [51]. For example, the SU(3) C -SU(3) C -U(1) X anomaly is denoted as A CCX , and G stands for "Gravity". In Sect. 4, we investigate the purely Abelian anomalies, i.e. A Y Y X , A Y XX and especially the cubic anomaly A XXX . For the high-energy gauge symmetry, we consider an in general generation-dependent U(1) X extension of G SM , with the chiral superfield charges quantised (i.e. the quotient of any two charges is rational) and normalised to be integers. We assume it is spontaneously broken by the vacuum expectation value (VEV), υ, of a scalar field Φ with U(1) X charge X Φ ≡ N > 1. The mass scale of the broken symmetry is M X = O(υ) ≫ M W . (We assume here a single field Φ, or a vector-like pair; cf. Sect. 9.) This leaves a residual, low-energy Z N symmetry, which we assume to be generation-independent 4 on the SSM chiral superfields [35,37]. In the low-energy theory, we restrict ourselves to the particle content of the SSM, allowing however for additional heavy fermions with masses O(M X ). 
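The group-theoretic fact invoked above, that any Z_6 is isomorphic to Z_2 × Z_3 (which is why P_6 decomposes into M_p and B_3), can be checked mechanically. The short sketch below is ours and purely illustrative; it verifies that the map x ↦ (x mod 2, x mod 3) is a bijective homomorphism from Z_6 to Z_2 × Z_3.

```python
# Check that x -> (x mod 2, x mod 3) is an isomorphism Z_6 -> Z_2 x Z_3
# (the Chinese-remainder fact behind P_6 ~ M_p x B_3). Illustrative only.
def phi(x):
    return (x % 2, x % 3)

images = {phi(x) for x in range(6)}
assert len(images) == 6                      # bijective: all 6 pairs are reached

for a in range(6):
    for b in range(6):
        lhs = phi((a + b) % 6)               # image of the Z_6 sum
        rhs = ((a + b) % 2, (a + b) % 3)     # componentwise Z_2 x Z_3 sum
        assert lhs == rhs                    # homomorphism property

print("Z_6 is isomorphic to Z_2 x Z_3")
```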
To avoid later confusion, we emphasise here that the U(1) X charge of Φ is not necessarily the same N, which appears in the final Z N we obtain when restricting ourselves to the so-called "fundamental" DGSs. We discuss this in more detail in Sect. 5. For the SSM fields, the Z N charges q i are related to the integer U(1) X charges, X i , via a modulo N shift Here the index i labels the SSM particle species and q i , m i are integers. Just like the U(1) X charges, the m i are in general generation-dependent, whereas the q i are assumed to be generation-independent. We also allow for Dirac and Majorana fermions which become massive at O(M X ). For the former, two fields with U(1) X charges X j D1 and X j D2 , respectively, must pair-up, resulting in a Dirac mass term after U(1) X breaking. The Majorana fields with charge X j ′ M can directly form a mass term. The Z N invariance of these mass terms requires The indices j and j ′ run over all heavy Dirac and Majorana particles, respectively. Assuming the initial U(1) X is anomaly-free, IR derive the resulting constraints on the Z N charges q i of Eq. (2.1). From the anomaly cancellation conditions The sums in Eqs. (2.4) and (2.5) run over all colour triplets and weak doublets, respectively, i.e. we restrict ourselves to only fundamental representations 5 of SU(3) C and SU(2) W . As all particles couple gravitationally, we sum over the entire chiral superfield spectrum in Eq. (2.6). Depending on the charge shifts, m i , of the low-energy fields, as well as the heavyfermion particle content, the square brackets in Eqs. (2.4)-(2.6) can take on arbitrary integer values. In the case of even N, any half-odd integer is allowed for the square bracket in Eq. (2.6). Hence, we can rewrite them symbolically as i=3,3 with η = 0, 1 for N = odd, even, respectively. From the point of view of the low-energy theory, the various s, including the two in Eq. (2.9), each represent an arbitrary and independent integer, which is fixed by the heavy-fermion content and the choice of m i . In addition to the anomaly constraints, we obtain constraints on the U(1) X charges, by requiring a minimal set of interaction terms in the SSM superpotential, which are responsible for the low-energy fermion masses, namely the first three terms in Eq. (1.1). In Sect. 6 we investigate the consequences of additionally imposing H d H u invariance. The Z N charge equations corresponding to the first three terms of Eq. (1.1) are (2.11) q Q + q Hu + qŪ = 0 mod N . (2.12) These are three equations for seven unknowns. We can thus write the family-independent Z N charges of the SSM superfields in terms of four independent integers, which we choose as m, n, p, r = 0, 1, ..., N − 1. In the following, we make use of the integer normalised hypercharges 14) The choice of integers m, n, p in Eq. (2.13) corresponds to the notation of IR. The slightly unusual coefficients for the integer r correspond to the negative normalised hypercharge given in Eq. (2.14), and were chosen for the following charge transformation: To simplify the up-coming calculations, we perform a shift of the integer Z N charges by their integer hypercharges, such that the resulting charge q Q ′ is zero, In the following, we drop the prime on the charge symbols. This shift in the Z N charges does not change the effect of Z N on the renormalisable or non-renormalisable operators of the SSM superpotential or D-terms, since these are all U(1) Y invariant. It also does not affect the anomaly-equations which we consider. 
However, it does correspond to a change in the underlying U(1) X gauge theory. The difference can lead to in principle observable effects, for example cross-sections which depend on X-charges. We return to this change in Sect. 8. The choice of charges where q Q = 0, is the basis in which IR work. They show that in this case, any Z N symmetry g N can be expressed in terms of the product of powers of the three (mutually commuting) generators R N , A N and L N [25]: The charges of the SSM chiral superfields under the three independent Z N generators are given in Table 1 of Ref. [25]. In terms of the powers m, n, p, the generation-independent Z N charges of the SSM superfields are 6 Note that the integers m, n, p here are the same as in Eq. (2.13). Inserting the charges above into Eqs. (2.7)-(2.9), and assuming the SSM light-fermion content we arrive at the conditions 7 Since all s in Eqs.(2.18)-(2.20) stand for arbitrary and independent integers, we can combine these Diophantine equations to obtain a simpler set, This differs slightly from IR in notation, as we find it more convenient to retain the arbitrary integers on the right-hand side. These three equations are the basis for our further study. DGSs satisfying all three equations will be called "anomaly-free DGSs", although these constraints are only necessary but not sufficient for complete anomalyfreedom of the high-energy theory [53,39]. Symmetries Allowed by the Linear Constraints In this section, we go beyond the work of IR and determine the solutions, (n, p, m; N), to the Eqs. (2.21)-(2.23) for general values of N, not just N = 2, 3. We separately consider the two possibilities: either N is not or is a multiple of 3. We employ the notation: We conclude that the only non-trivial anomaly-free DGSs here are The simplest case with N = 2 yields the discrete Z 2 charges: This charge assignment is, from the low-energy point of view, equivalent to standard matter-parity [23]. A reversed hypercharge shift, Eq. (2.15), back to Eq. (2.13) with r = 1 yields: (a) Focusing first on n = 0, we see that p = ℓ p N ′ , for ℓ p = 0, 1, 2. Concerning Eq. (2.23), it is again necessary to distinguish between odd and even N. Thus we find a set of anomaly-free DGSs with ℓ p , ℓ m = 0, 1, 2 and s m = 0, 1, ..., 5. Taking into account Eq. (2.23), we now find Similarly, the integer m can be treated for even or odd N. Likewise, some DGSs of Eq. (3.2) are not independent of the others. Table 1 summarises the anomaly-free DGSs classified by N and the powers n, p and m. For example, the two rows with (3 |N) correspond to the DGSs of Eq. (3.2). The last column shows the number of independent non-trivial g N . The 4 in the second row arises because there are three DGSs with ℓ p = 1 but only one with ℓ p = 0; with p = 0, the case m = 0 is trivial, whereas m = N ′ and m = 2N ′ lead to equivalent DGSs. Similarly, we get 9 DGSs instead of 12 for the third row. The Purely Abelian Anomalies So far, we have determined the constraints on DGSs arising from the three linear anomaly conditions of Eqs. (2.4)-(2.6). Next we consider the three purely Abelian anomalies A Y Y X , A Y XX and A XXX , respectively. 2 , which is in general different for each field. 8 Recall, that we have chosen the hypercharges to be integer for all SSM particles, see Eq. (2.14). Thus the left-hand side is integer. However, given this normalisation, the hypercharges of the heavy fermions need not be integer and the quantity in square brackets need not be in . 
Thus the right-hand can take on any value within . Therefore Eq. (4.1) poses no constraint. Now By considering only the Y j D1 , we see that [...] is not necessarily an integer, just as in the previous case. Thus Eq. (4.2) is of no use from the low-energy point of view. 9 3. Next, we consider the cubic anomaly A XXX . Here we do not have a mixture of known and unknown charges: We do not know any of the U(1) X charges. We obtain for the anomaly-equation If fractional X j D1 were allowed, again no extraction of a meaningful constraint is feasible, since in this case the right-hand side of Eq. (4.3) is not necessarily of the form N · . However, as outlined in Sect. 2, we only consider integer X-charges here. We shall investigate the case of fractional X-charges for the heavy fields in Sect. 5, since the difference can be meaningful in cosmology [54,55,56]. The calculation for the cubic anomaly with only integer charges is similar to the calculation in Sect. 3, i.e. it involves many case distinctions. It can be found in Appendix A. In Table 2, we have summarised the results. We show those N, as well as the powers (n, p, m), in the case of only integer X-charges, which satisfy both the linear anomaly constraints of Sect. 3 (cf. Table 1), as well as the cubic anomaly equation considered here. The main effect of the cubic anomaly constraint consists in reducing the (infinite) list of possible DGSs. Considering N = 3 for instance, there are four independent g N symmetries allowed in Table 1. However, only one of these, namely the case where (n, p, m) = (0, 1, 1), complies with Table 2. This corresponds to B 3 , i.e. baryon-triality discussed by IR. Another example is N = 6. Here we have nine linearly allowed DGSs, while only three are left after imposing the cubic anomaly constraint: R 3 6 , R 2 6 L 2 6 and R 5 6 L 2 6 . The first two are physically equivalent to M p and B 3 from the low-energy point of view. We shall denote P 6 ≡ R 5 6 L 2 6 , as proton-hexality. This is a special discrete symmetry, which we return to in Sect. 6. For N = 9 there are 4 + 9 linearly allowed g N , of which only four are also consistent with the cubic anomaly condition. N = 27 is the first case for (3|N), where the cubic anomaly does not reduce the number of allowed DGSs. Charge Rescaling So far, we have assumed that hypercharge shifted discrete symmetries, as in Eq. (2.15), are equivalent and all chiral superfields have integer U(1) X charges. However, from the low-energy point of view, this latter assumption is too restrictive [53,39]. To see this in our analysis, consider an example from Table 2, where N = 18. The powers of the elementary discrete gauge group generators, Eq. (2.16), are given by which are all multiples of the common factor F = 3. The charges of the SSM fields, q i + m i N, are given in Eq. (2.17) as linear combinations of n, p, and m, and are therefore also all multiples of F , in our example. From the low-energy point of view, with the heavy fields integrated out, such a charge assignment is indistinguishable from a scaled one with charges (q i + m i N)/F . After the breakdown of U(1) X , the residual DGS is then a Z N/F instead of a Z N . However, the Z N/F does not necessarily satisfy the cubic anomaly, with all integer charges. In our example, we have N/F = 6, which, according to Table 2, satisfies the cubic anomaly only for very special values of (n, p, m). This integer rescaling only applies to the charges of the SSM chiral superfields. 
For the heavy fermions, it is typically not possible and leads to fractional charges. From a bottom-up approach, experiments would determine the rescaled DGS group Z N/F . When searching for the possible (low-energy) anomaly-free DGSs, we therefore relax our original assumption of integer charges and instead allow fractional charges for the heavy sector, only. We then denote the DGS Z N/F with the maximally rescaled charges as the fundamental DGS, i.e. F is the largest common factor of N and all q i + m i N. In Table 3, we present the complete list of fundamental DGSs, obtained from Table 2. We see that after N n p m DGSs 3 1 (2, 5, 8) A 3 9 L 9 R 2 9 , A 3 9 L 9 R 5 9 , A 3 9 L 9 R 8 9 9 3 4 (2, 5, 8) A 3 9 L 4 9 R 2 9 , A 3 9 L 4 9 R 5 9 , A 3 9 L 4 9 R 8 9 3 7 (2, 5, 8) A 3 9 L 7 9 R 2 9 , A 3 9 L 7 9 R 5 9 , A 3 9 L 7 9 R 8 9 rescaling, the infinite number of DGSs listed in Table 2 is reduced to a finite set of 27 fundamental Z N symmetries: one with N = 2, four with N = 3, four with N = 6, nine with N = 9 and nine with N = 18. Refs. [53,39] pointed out that the cubic anomaly-constraint is in general too restrictive on low-energy anomaly-free DGSs due to possible rescalings. Comparing Table 2 with Table 3, presents a classification within the SSM of the solutions to this problem. As emphasised earlier, the cubic anomaly constraint is compatible with all five classes of linearly allowed DGSs presented in Table 1, however only for restricted values of N. Rescaling the charges and allowing for fractionally charged heavy fermions, eliminates the influence of the A XXX condition on the fundamental DGSs completely. In other words, all linearly allowed fundamental DGSs are compatible with the cubic anomaly constraint. Therefore, Eq. (4.3) contains only information about whether or not the heavy-fermion U(1) X charges are fractional or integer. Of the fundamental DGSs listed in Table 3, solely M p ≡ R 2 , B 3 ≡ R 3 L 3 and P 6 ≡ R 5 6 L 2 6 are consistent with both the linear and the cubic anomaly conditions, without including fractionally charged heavy particles. 6 Physics of the Fundamental DGSs and the MSSM Now that we have found a finite number of fundamental, anomaly-free low-energy DGSs, we would like to investigate the correspondingly allowed SSM operators. In particular, we study the effect of the 27 fundamental DGSs given in Table 3 on the crucial baryon-and/or lepton-number violating superpotential and Kähler potential operators [25,15]: (6.1) The subscripts F and D denote the F -and D-term of the corresponding product of superfields. Table 4 summarises which operators are allowed for each fundamental anomaly-free DGS. The symbol indicates that an operator is allowed. Thus, for example, matterparity (R 2 ) allows the operators [H d H u ] F , but also the dimension-five baryon-number violating operators [QQQL] F and [ŪŪDĒ] F , as well as the lepton-number violating operators [LH u LH u ] F . We have included the bilinear operators LH u (unlike IR), since, even under the most general complex field rotation [57], they can not be eliminated, when taking into account the corresponding soft-breaking terms [58]. We now demand the existence or absence of certain operators on phenomenological grounds and thus further narrow down our choice of DGSs. From a low-energy point of view we must have µ = 0, and it must be of order the weak scale [63,64]. 
There are attempts in the literature to combine the NMSSM or another dynamical mechanism to generate µ = 0 with an anomaly-free DGS, see, for example, Ref. [47] or Ref. [46] (and references therein), respectively. This is beyond the scope of this paper. If we explicitly require the [µH d H u ] F -operator in our theory, then as can be seen from Table 4, all fundamental Z 9 and Z 18 symmetries are excluded. • Now consider neutrino masses. Without right-handed neutrinos, we can generate masses at tree-level through the terms LH u LH u and LH u (via mixing with the neutralinos), or via loop diagrams involving LLĒ or LQD [26,66,67,68]. Hence, the DGSs R 2 (M p ), R 3 L 3 (B 3 ) and R 5 6 L 2 6 (P 6 ) can incorporate neutrino masses without right-handed neutrinos. 10 However, right-handed neutrinos can easily be included as heavy Majorana fermions obeying Eq. (2.3). If the corresponding U(1) X charges allow Dirac neutrino mass terms, we obtain massive light neutrinos via the see-saw mechanism [69,70,71,72]. But in this case, LH u LH u must be allowed by the Z N symmetry as well: invariance of the Dirac mass terms for neutrinos as well as the Majorana mass terms implies a Z N invariant LH u LH u term. Table 4: Physical consequences of the 27 fundamental DGSs. The Higgs Yukawa couplings LH dĒ , QH dD , and QH uŪ are allowed for every DGS we consider by construction. The symbol denotes that the corresponding operator is possible for a given DGS. All anomaly-free fundamental Z 9 and Z 18 symmetries forbid the operators listed in the left column. If we combine these phenomenological requirements, we are left with only two DGSs: baryon-triality B 3 , and proton-hexality P 6 . It is remarkable that these discrete symmetries also survived in Sect. 5, i.e. they are discrete gauge anomaly-free with integer heavy-fermion charges. However, we would like to go a step further. In Sect. 1, we defined the MSSM as the SSM restricted by M p . When considering the MSSM as a lowenergy effective theory, the dangerous operators QQQL andŪŪDĒ are allowed. This is a highly unpleasant feature of the MSSM. IR already pointed this out as an advantage of the R-parity violating MSSM with B 3 , which does not suffer this problem. Here we propose a different solution: We define the MSSM as the SSM which is restricted by proton-hexality, P 6 . The only phenomenological difference to the conventional MSSM with M p is with respect to baryon-number violation. However, given the stringent bounds on proton decay, we find this new definition of the MSSM significantly better motivated. Note that in the language of IR, P 6 is a generalised matter-parity (GMP). We conclude this section with some observations: 1. It is interesting to note that, of the nine fundamental DGSs which allow the H d H u term, those with N = 6 are each equivalent to the requirement of imposing R 2 (i.e. matter-parity) along with one of the four fundamental Z 3 symmetries. Explic-itly one has In the first line we have given the corresponding isomorphism in terms of matterparity, baryon-triality and proton-hexality. The reason for this is that the Cartesian product of the cyclic groups Z 2 and Z 3 is isomorphic to Z 6 , i.e. Z 2 × Z 3 ∼ = Z 6 [38]. This becomes evident by giving both possible isomorphisms Z 2 × Z 3 → Z 6 . 2. In Ref. [51], a U(1) X gauge extended SSM was investigated, where all renormalisable MSSM superpotential terms have a total X-charge which is an integer multiple of N [cf. Eq. (8.7)]. 
Then the conditions on the U(1) X charges were derived, in order to have a low-energy M p discrete symmetry. In Ref. [73], we derive the corresponding conditions for B 3 and P 6 : 3. Next, we consider domain walls, which pose a severe cosmological problem if they occur [74]. It is commonly held that a spontaneously broken discrete symmetry leads to domain walls. These two equations can be combined to (6.13) The second equation defines the required gauge transformation. We can simplify the first equation, using the hypercharge relation This can only be fulfilled if the Z N -charges of the two Higgs, just like their hypercharges, are the inverse of each other (in the sense of a mod N calculation). 11 This is equivalent to the requirement that the µ-term is allowed by Z N . This is e.g. the case for M p canonically, as the Higgs fields are uncharged: (q H d , q Hu ) = (0, 0), R 2 (1, 1), B 3 (2, 1) and P 6 (1, 5). We stress that this argument does not rely on U(1) X being non-anomalous (cf. Sect. 8). The Heavy-Fermion Sector An interesting question to ask is as follows: Given a DGS in Table 3, do I necessarily need heavy fermions in order to cancel the anomalies? In the case of matter-parity, R 2 , we can answer the question by considering Eq. (2.23). Here, the left-hand side equals 3, while the right-hand side is 2 · + η · . Recalling that the η-term originates from heavy Majorana fermions [cf. Eq. (2.6)], we find that the symmetry R 2 is only possible if we include a heavy-fermion sector, e.g. one right-handed neutrino for each generation. In the case of the other fundamental DGSs of Table 3, let us assume the absence of heavy fermions in what follows. Under this assumption, the anomaly cancellation conditions cannot be satisfied. Inserting the discrete charges of Eq. (2.17) into Eq. (2.6), we obtain 13n + 3p − 3m = N · 2m H d + 2m Hu + k (6m Q k + 3mŪ k + 3mD k + 2m L k + mĒ k ) , (7.1) 11 If the two Higgs do not have opposite Z N -charges, the µ-term is forbidden. This then possibly enables PQ-invariance, which allows one to repeat the argument above with α(x) · Y H d,u replaced by where k is a generation index. For even N, the right-hand side in Eq. (7.1) is even. However, the left-hand side is odd for the Z 2 , Z 6 and Z 18 DGSs. Therefore heavy fermions are necessary in these cases. For the remaining 4 + 9 Z 3 and Z 9 symmetries, the right-hand side (RHS) of Eq. (7.1) can be both, even or odd. We thus employ the cubic anomaly constraint of Eq. (4.3). For the Z 9 symmetries the RHS of Eq. (4.3) is always a multiple of 27. The left-hand side (LHS) of the cubic anomaly condition, given in Eq. (A.7), is −122 · 3 + 27 · , which is not a multiple of 27. Thus the fundamental Z 9 symmetries also require heavy fermions. For the four Z 3 symmetries the RHS of Eq. (4.3) is always a multiple of 9. Eq. (A.5) shows that the LHS of Eq. (4.3) is a multiple of 9 only in the case of the R 3 L 3 symmetry. Hence the other three fundamental Z 3 symmetries require heavy fermions. But also R 3 L 3 cannot satisfy the anomaly constraints without a heavy-fermion sector: 12 Although R 3 L 3 is neither ruled out by A GGX = 0 nor A XXX = 0 alone, it is in conflict when combining the two conditions; the LHS of Eq. where i runs over all chiral superfields. The last two terms within the parentheses are multiples of 27, which is not true for the first one. However, evaluating the sum and applying our knowledge of the q i , we find where k denotes a generation index. 
The numerical coefficients inside the brackets are the product of the squared discrete charges and the multiplicity of the particle species. For example, we have 3 colours of quark fieldsŪ k with qŪ k = −1, thus 3 · qŪ k 2 = 3. We can now adopt the gravity-gravity-U(1) X anomaly constraint of Eq. (7.1) to rewrite Eq. (7.3). Recalling that N = 3, n = 0 and m = p = 1 for R 3 L 3 , we get also a multiple of 27. This completes our proof. In conclusion: The 27 fundamental DGSs we have found are only anomaly-free with a U(1) X -charged heavy-fermion sector. A Top-Down Approach As outlined in Sect. 1, we have so far discussed a bottom-up approach to DGSs. However, by definition, a DGS is inherently connected to the anomaly structure of the underlying U(1) X gauge theory. Here, we consider the DGSs from the latter perspective. We investigate two topics in detail: (i) the definition of the DGSs via the transformation of the superfields (superfield-wise) vs. the definition via the transformation of the G SM invariant operators (operator-wise); (ii) the hypercharge shifts of Eq. (2.15). At high energies, we start from a G SM × U(1) X invariant Lagrangian, with the Xcharges scaled to be integers of minimal absolute value. We leave it open at the moment whether U(1) X is anomalous or not. Below M X , U(1) X is assumed to be broken by a single left-chiral flavon superfield Φ (or by two left-chiral superfields Φ, Φ ′ with opposite X-charges, see Sect. 9), which is uncharged under G SM . If in our model e.g. the operator L i L jĒk is not U(1) X -invariant, then the non-renormalisable superpotential 13 operator is. However, due to the cluster decomposition principle (CDP) [76], the Lagrangian exhibits only non-negative integer exponents of the fields [77,78]. Therefore the above term is forbidden if is fractional. After U(1) X -breaking, the operator L i L jĒk is not generated, since its non-renormalisable "parent term" is non-existent. Therefore the constraints of the CDP persist. Whether an operator is allowed or not in the low-energy Lagrangian boils down to whether its overall X-charge is an integer multiple of X Φ . Thus at low energy, we decompose the X-charges as in Eq. (2.1) and the remaining DGS under which the superfields transform is a Z |X Φ | . Next consider the operators in the superpotential. Analogous to Eq. (2.1), the overall X-charge, X total , of any G SM -invariant product of MSSM chiral superfields satisfies If a certain operator is forbidden by the CDP, then the |X Φ | th power of this term has q total = 0 mod(|X Φ |). However, the superpotential operators are further restricted by G SM . Therefore the Z |X Φ | -charges are possibly such that a power smaller than |X Φ | suffices to get q total = 0 mod(|X Φ |), for all superpotential operator. As an example, suppose |X Φ | = 24 and the superfields obey a Z 24 . Due to G SM , it may very well be that for all operators q total is even. Operator-wise we then have a Z 12 instead of a Z 24 . Furthermore, we can integrate out the heavy particles below their mass scale. When considering only the superfields of the SSM their respective q's could e.g. be only multiples of 3. The SSM superfields alone then obey a Z 24/3 = Z 8 symmetry (cf. Sect. 5) and the SSM superfield-wise Z 8 constitutes an SSM-operator-wise Z 4 . We now consider a generation-independent U(1) X extension of the SSM, which is the high-energy origin of the DGS. We include right-handed neutrinos,N i . 
We demand that for the U(1) X charge assignments: (i) the Yukawa mass terms QH dD , QH uŪ , LH dĒ , and LH uN are invariant, and (ii) the anomalies and A XXX all vanish. We can then express the X-charges in terms of two unknowns Furthermore, we obtain the well known result that U(1) X is necessarily a linear combination of U(1) Y , i.e. hypercharge, and U(1) B−L (see for example Ref. [79,80,81]) where C 1,2 are free real parameters, such that the X-charges are integers, as was required earlier. Eq. (8.3) can then be reexpressed in terms of C 1,2 (8.5) 13 The following arguments in this Sect. proceed analogously for the Kähler potential. For 2C 1 = −5C 2 , we obtain a theory with SU(5) invariant X-charges. For C 1 = 0 the right-handed neutrinos are charged and the see-saw mass termN iNj is forbidden. And of course for C 2 = 0 we obtain U(1) B−L . At low-energy, we performed the hypercharge shift of the DGS, Eq. (2.15). As we argued, this hypercharge shift is irrelevant for the structure of the low-energy superpotentials. From the top-down approach, however, a different choice of C 2 corresponds to a hypercharge shift of the SSM X-charges, which in turn corresponds to a hypercharge shift of the corresponding Z N . How does this change the high-energy theory? The gauge boson and fermionic kinetic terms in the Lagrangian are Here F 2 X,Y are the squared field strength tensors, and A X,Y µ are the corresponding gauge potentials. We see that a simultaneous orthogonal rotation in the fields (A X µ , A Y µ ) and the charges (g X X k , g Y Y k ) leaves the Lagrangian unchanged. But different choices of C 2 in Eq. (8.4), which correspond to hypercharge shifted (not rotated) theories, lead to distinct gauge theories in Eq. (8.6). They differ in their X-charges and thus in their scattering cross sections. They are therefore, in principle, experimentally distinguishable at energies √ s = O(M X ). However, at the LHC, we can only determine the low-energy DGS. We can not determine C 2 of Eq. (8.4). When attempting to interpret the LHC results in terms of an underlying unified theory it is important to keep this ambiguity in mind. Let us now focus on the Φ+SSM-sector, i.e. including the flavon field(s). Using the methods of Refs. [51,73], we can compute the total X-charge of any G SM -invariant superpotential term and obtain where again denote arbitrary and independent integers. Using Eq. (2.1), this gives We have seen that a hypercharge shift of the X-charges leads to a new U(1) X gauge theory. Such a shift is however only possible for an originally anomaly-free model (see e.g. the completely fixed X-charges in Ref. [51]) and yields an alternate anomaly-free model. Plugging the X-charges of Eq. (8.4) into Eq. (8.7), we find of course independent of C 2 and thus of hypercharge. So all the results on the operatorwise DGS coming from U(1) X are solely determined by C 1 and |X Φ |. This characteristic, which we demonstrated for a simple example, also holds for all non-anomalous models. This is why we could shift away r in Sect. 2. For C 1 = −C 2 , i.e. X Q = 0, the field-wise and operator-wise definition of the DGS coincide. Equipped with the X-charges in Eq. (8.4), we now demonstrate in two examples the emergence of distinct operator-and superfield-wise DGSs from the U(1) X . To have e.g. LLĒ generated after U(1) X breaking would require √ Φ · LLĒ , which is not allowed due to the CDP. With Eq. (2.1) we get a superfield-wise Z 6 , with q Q = 1, qD = qŪ = 5, q L = qĒ = qN = 3, q H d = q Hu = 0. 
Plugging these into Eq. (8.8), one finds that any superpotential term has an overall q-charge which is an integer multiple of either 3 or 6. Thus the actual DGS of the operators is a Z 6 3 = Z 2 symmetry. This is matter-parity, in fact. Another example, more elaborate and flavour-dependent, is the fourth model in Table 2 in Ref. [82]. It does not cause any DGS after U(1) X breaking, as our second example. The prefactors of the free parameter q (their notation!) are nothing but the usual hypercharges. The argument that a superfield-wise Z |X Φ | causes an operator-wise Z |X Φ |/N is independent of whether the U(1) X has anomalies which are cancelled via Green-Schwarz [83] or whether the U(1) X is non-anomalous. The anomalous X-charges given in Table 7, Ref. [51], display a SSM superfield-wise Z 300 symmetry, but operator-wise constitute a Z 2 , as can be seen by plugging the corresponding discrete charges into Eq. (8.8). A priori it is hence not clear whether, e.g., a superfield-wise Z 300 gives rise to an operator-wise Z 300 , Z 150 , Z 100 ,..., Z 2 or even Z 1 (trivial). In summary, from a top-down point of view hypercharge shifted theories are not equivalent. They are, in principle, experimentally distinguishable by high-energy scattering experiments. If they are anomaly-free, they lead to equivalent low-energy discrete gauge theories and are not distinguishable at the LHC. But even a non-anomalous and an anomalous set of X-charges are equivalent from the low-energy point of view if they lead to the same operator-wise DGS. A Gauged P 6 Model In this section, we explicitly present a generation-dependent U(1) X gauge model, constructed in collaboration with C. A. Savoy and S. Lavignac. U(1) X is spontaneously broken to proton-hexality, P 6 . We consider this a demonstration of existence, not necessarily an optimised model. Concerning the origin of the needed non-renormalisable interaction terms, there are several sources imaginable (see, e.g., [84]): Either the terms occur near the string scale or they are generated by integrating out heavy vector-like pairs of G SM charged states (the so-called Froggatt-Nielsen mechanism [85]). Here we adopt the first viewpoint and thus use a simple operator analysis. We assume the U(1) X breaking superfields to be suppressed by M grav , e.g. We first list in Table 5 the U(1) X charges of all the chiral superfields in our model. The G SM singlets Φ ± constitute the vector-like pair of U(1) X breaking superfields with equal VEVs. The A ... are G SM singlets as well but do not aquire VEVs, we introduce them solely for the sake to cancel A GGX and A XXX . All the other (mixed) anomalies vanish within the particle content of the SSM. The breaking of U(1) X generates the MSSM Yukawa coupling constants with textures that produce the observed fermionic mass spectrum as well as acceptable mixing matrices. Furthermore, U(1) X leaves a Z 12 symmetry as a remnant which, after integrating out the A ... , yields P 6 : To get the µ term and the neutrino masses of the correct order of magnitude, we rely on the existence of intermediate mass scales: M µ ∼ 10 8 GeV (which's necessity has been already anticipated by Refs. [82,86] for anomaly-free Froggatt-Nielsen models without heavy G SM charged matter) and M ν ∼ 10 12 GeV. After diagonalisation one gets for the masses of the electrically charged SM fermions m u : m c : m t ∼ ǫ 8 : ǫ 4 : 1, m d : m s : m b ∼ ǫ 4 : ǫ 2 : 1, m e : m µ : m τ ∼ ǫ 4 : ǫ 2 : 1, m τ : m b : m t ∼ ǫ 2 : ǫ 2 : 1. 
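The superfield-wise versus operator-wise distinction in this example can be verified directly from the Z_6 charges quoted above (q_Q = 1, q_D̄ = q_Ū = 5, q_L = q_Ē = q_N̄ = 3, q_Hd = q_Hu = 0). The sketch below, with an operator sample chosen by us rather than taken from the paper, confirms that every total charge is a multiple of 3, so the operators only realise a Z_2, i.e. matter-parity, as stated.

```python
# Verify the example above: with these Z_6 superfield charges, every sample
# G_SM-invariant operator has total charge divisible by 3, so operator-wise the
# symmetry is only a Z_{6/3} = Z_2 (matter-parity). The operator list is a
# sample chosen by us, not an exhaustive list from the paper.
q = {"Q": 1, "Dbar": 5, "Ubar": 5, "L": 3, "Ebar": 3, "Nbar": 3, "Hd": 0, "Hu": 0}

operators = {
    "Q Hd Dbar":      ["Q", "Hd", "Dbar"],
    "Q Hu Ubar":      ["Q", "Hu", "Ubar"],
    "L Hd Ebar":      ["L", "Hd", "Ebar"],
    "L Hu Nbar":      ["L", "Hu", "Nbar"],
    "L L Ebar":       ["L", "L", "Ebar"],
    "L Q Dbar":       ["L", "Q", "Dbar"],
    "Ubar Dbar Dbar": ["Ubar", "Dbar", "Dbar"],
    "L Hu":           ["L", "Hu"],
    "Hd Hu":          ["Hd", "Hu"],
    "Q Q Q L":        ["Q", "Q", "Q", "L"],
    "L Hu L Hu":      ["L", "Hu", "L", "Hu"],
}

for name, fields in operators.items():
    total = sum(q[f] for f in fields) % 6
    assert total % 3 == 0                  # every total charge is 0 or 3 mod 6
    allowed = (total == 0)                 # invariant only if it vanishes mod 6
    print(f"{name:16s} total charge mod 6 = {total}  allowed: {allowed}")
```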
For the mixing matrices we get an anarchical MNS matrix, which is compatible with experiment (see e.g. Refs. [87,88,89]), as well as a CKM matrix whose texture requires some moderate fine-tuning among the unknown O(1) coefficients to be entirely satisfactory. Furthermore, we obtain mass terms for the heavy fields. After U(1)_X breaking we are left with an overall Z_12 DGS, since |X_Φ±| = 6, all SSM particles' X-charges are integers, and the A_...'s X-charges are half-odd integers. But as can be seen above, the A_... are quite heavy, so that they can all be integrated out at around ǫ⁶ M_grav ∼ 10¹⁴ GeV, leaving the fundamental (in the sense of Section 5) DGS P_6.

Summary

In summary, we have systematically investigated discrete gauge symmetries Z_N for arbitrary values of N. We have classified the anomaly-free theories, depending on whether the necessary (see Sect. 7) heavy fermions are restricted to integer X-charges or not. Through a rescaling of the X-charges, we have, from a low-energy point of view, reduced this infinite set to a finite fundamental set: all theories related by rescaling lead to the same low-energy superpotential. For this fundamental set we have investigated the phenomenological properties in detail. We have found two outstanding DGSs, the second of them being beyond IR: (i) baryon-triality, B_3, which allows for low-energy lepton-number violation but no dimension-five or lower proton decay operators, and (ii) proton-hexality, P_6. The latter has a renormalisable superpotential which conserves lepton- and baryon-number and prohibits the non-renormalisable dimension-five proton decay operators. This is one of the main results of this paper, and we propose P_6 as the new discrete gauge symmetry of the MSSM, instead of matter-parity. Both baryon-triality and proton-hexality are free of domain walls.

The RHS of Eq. (4.3) can only take on certain values. We shall denote it as RHS ≡ RHS_1 + RHS_2 + RHS_3, with a term for each line in Eq. (4.3). We now investigate these terms individually.

(a) RHS_2: Factoring out N, we see that the term RHS_2 contributes a multiple of N to the RHS. However, it cannot necessarily take on every possible multiple of N, regardless of the choice of heavy particles. For (3|N), we can again write N = 3N′ (N′ ∈ ℕ) and rewrite the last term as p_j³ N³ = 3 p_j³ N² N′. We can thus factor out 3N, and therefore the term RHS_2 can take on at most values ∈ 3N·ℤ. By adding appropriate sets of heavy Dirac particles with simple charges, it is straightforward to show that any multiple of 3N can be obtained. For DGSs with ¬(3|N), any element ∈ N·ℤ can be obtained.

(b) RHS_3: For odd N, p′_j′ has to be even [see Eq. (2.3)], so that the term RHS_3 is an element of N³·ℤ. For even N, RHS_3 can take on all values ∈ (N³/2)·ℤ.

(c) RHS_1: The first two terms in RHS_1 are multiples of 3N, which is included in (a) above. Similarly, the third term is a multiple of N³ and therefore already included in (b).

Summarising the behaviour of the RHS: for the categories of Table 1 with ¬(3|N), the cubic anomaly results in no new constraint.

(3|N): We consider the remaining four categories of Table 1 in turn.

(i) (3|N), N odd: Eq. (A.3) shows that the RHS must be a multiple of 9N′. Therefore the LHS must also be a multiple of 9N′. From the corresponding row in Table 1, we see that in this case n = 0, p = ℓ_p N′ and m = ℓ_m N′. Inserting this into the LHS as given in Eq. (A.1) yields LHS = (−3ℓ_p³ + 9ℓ_p²ℓ_m + 9ℓ_pℓ_m² + 3ℓ_m³)·N′³.
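The divisibility requirement on this LHS can also be explored by brute force. The following sketch is only an exploratory aid (it is not part of the original derivation): it evaluates the expression just quoted for small ℓ_p, ℓ_m and odd N′ and reports which combinations make the LHS a multiple of 9N′.

```python
# LHS = (-3*lp^3 + 9*lp^2*lm + 9*lp*lm^2 + 3*lm^3) * Np^3, with N = 3*Np odd.
def lhs(lp, lm, Np):
    return (-3 * lp**3 + 9 * lp**2 * lm + 9 * lp * lm**2 + 3 * lm**3) * Np**3

for Np in (1, 3, 5, 7):            # N' must be odd so that N = 3N' is odd
    good = [(lp, lm)
            for lp in range(4) for lm in range(4)
            if lhs(lp, lm, Np) % (9 * Np) == 0]
    print(f"N' = {Np}: (l_p, l_m) with LHS divisible by 9N': {good}")
```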
Combining the results for ¬(12|N) and (12|N), we find that for each N ∈ 6·ℕ there are three allowed non-trivial DGSs. Taking N ∈ 18·ℕ, any DGS satisfying the linear constraints is compatible with the cubic constraint.

(iii) (9|N), N odd: From Table 1

(iv) (9|N), N even: From Table 1 1291 is not a multiple of 9 (it is actually a prime), whereas the remaining coefficients in square brackets are multiples of 9. Therefore the LHS is not a multiple of 9N′² in the case of ¬(12|N), respectively 9N′ in the case of (12|N) [cf. Eq. (A.3)], unless (9|N′′²). Thus the cubic anomaly constraint requires N ∈ 54·ℕ in this category. All linearly allowed DGSs are possible for these values of N.
Emerging Therapeutic Strategies of Different Immunotherapy Approaches Combined with PD-1/PD-L1 Blockade in Cervical Cancer Abstract Currently, therapeutic methods for advanced and recurrent cervical cancer patients are limited and unsatisfactory. Immunotherapy is a promising approach for cancer treatment. However, its investigation and application in cervical cancer remain slow. Although pembrolizumab is a remarkable milestone as the first anti-PD-1 mAb approved by the FDA for treating cervical cancer, it shows relatively low response rate. It is noticed that multiple novel immune checkpoints have emerged in recent years, such as CTLA-4, TIGIT, LAG-3, TIM-3, and A2AR. Accumulated studies have suggested that strategies combining the PD-1/PD-L1 inhibitors and different immunotherapies or biotherapies could enhance the antitumor efficacy in human cancers. In this review article, we provide an overview of anti-PD-1/PD-L1-based immunotherapy in cervical cancer treatment. We further summarize the developmental strategies of different immunotherapies or biotherapies combined with PD-1/PD-L1 blockade for treating cervical cancer. We also discuss how these new combined therapies increase the therapeutic benefit gained from experimental evidence in cervical cancer. Introduction Cervical cancer (CC) ranks fourth for both incidence and mortality among cancers in women globally, 1 indicating that this cancer remains a heavy burden worldwide despite the development and application of prophylactic HPV vaccines and effective screening and early detection methods. Traditional treatments for cervical cancer are surgery, radiation therapy, and chemotherapy. 2 For early-stage cervical cancers, radical surgery and radiation therapy can achieve good prognosis. However, for advanced and recurrent cervical cancers, the efficacy of the current treatment modalities is unsatisfactory, resulting in poor survival outcomes. 3 Therefore, searching for novel treatment strategies for advanced and recurrent cervical cancers is necessary. Treatments targeting epidermal growth factor receptor (EGFR) or vascular endothelial growth factor (VEGF) are improvements made in cervical cancer therapy in recent years. 4,5 But their efficacy is not very satisfying. Chemotherapy combined with the anti-VEGF antibody bevacizumab for cervical cancer showed progression-free survival of only 8.2 months in the phase III randomized trial, GOG 240. 6 Immunotherapy is emerging and rapidly developing in recent years. It is expected to harness the host's immune system to attack tumor cells by stimulating the adaptive immune system through the administration of anti-cancer vaccines, which is known as active immunotherapy, or by using immune compounds such as adoptive cellular transfer (ACT), immune checkpoint inhibitors (ICIs) and cytokines to enhance antitumor immunity, ie passive immunotherapy. 7,8 In addition, several other agents such as IDO (indoleamine 2.3-dioxygenase) inhibitors are classified as "immunomodulation", and they also kynurenine produced by tryptophan catabolism and TGFβ, induces suppression of antitumor immunity. 22 Decreased levels of major histocompatibility complex class I (MHC I) on the surface of HPV-infected cells caused by HPV oncoprotein E5 may affect antigen presentation in HPV-related cervical cancer. 22,27 These mechanisms, as well as others that will be mentioned and elaborated on in the following text, provide possible avenues for further improving antitumor immunity. 
Importantly, tumors with high mutational burdens are theoretically highly responsive to PD-1/PD-L1 blockade therapy but practically show low levels of response because of these immunoevasive and immunosuppressive mechanisms. 28 Therefore, combining PD-1/PD-L1 blockade with other immunotherapies may enhance the proportion of cervical cancer patients responsive to PD-1/PD-L1 blockade therapy. Secondly, as the main cause of cervical carcinogenesis, human papilloma virus was found to exert influence on host immunity. HPV oncoproteins activate multiple signaling pathways such as PI3K/AKT, MAPK and STAT3/NF-kB. 29 This may contribute to dysregulation of the tumor immune microenvironment as these pathways are closely related to immune response. In addition, HPV infection leads to recruitment of suppressive immune cells to the infected sites, impairment of NK cell activity and CTL responses as well as hindrance of antigen presentation machinery. 30 Therefore, this review highlights the potential and promise of the combinatorial strategies of different immunotherapy approaches based on PD-1/PD-L1 blockade to further enhance the host immune response against tumors and increase the proportion of cervical cancer patients benefiting from immunotherapy (Figure 1), as well as the clinical development (Table 1). Figure 1 Diagram of PD-1/PD-L1 blockade based combinational cervical cancer therapy. PD-1/PD-L1 blockade based combinational cancer therapy mainly includes five immune checkpoint blockades, CTLA-4 blockade, TIGIT blockade, LAG-3 blockade, TIM-3 blockade and A2AR blockade, which contribute to overcome immunosuppressive factors of the tumor microenvironment to improve antitumor immunity. PD-1/PD-L1 blockade also combines with other immunotherapy approaches including adoptive cell therapy and therapeutic vaccines. In addition, PD-1/PD-L1 blockade has the potential to combine with TLR9 agonists, TGF-β inhibition and IDO inhibition, which can maximize the immunotherapeutic benefit for cervical cancer patients. Combination of Anti-PD-1/PD-L1 Therapy with Other Immune Checkpoint Inhibitors CTLA-4 Blockade CTLA-4 is an important negative regulator of immune responses and its ligands are CD80 and CD86 ( Figure 2). Anti-CTLA-4 antibodies, the start of immune checkpoint blockade therapy, have shown impressive results in a range of cancers such as melanoma, renal cell carcinoma and colorectal carcinoma, either alone or in combination. 31 Blocking CTLA-4 activates both CD4 + and CD8 + effector cells and selectively depletes regulatory T cells in tumors, thereby enhancing antitumor immunity and promoting tumor rejection. 32,33 Certain types of CTLA-4 genetic polymorphism have been proven to be relevant to an increased risk of cervical cancer. 34,35 CTLA-4 is expressed on more than 50% of invasive cervical cancer cells and is associated with the clinical stage of the tumor and lymph node metastasis. 36 A clinical trial has found that ipilimumab (anti-CTLA-4) following chemoradiation therapy in locally advanced cervical cancer patients induced an expansion of central and effector memory T cells. 37 Accordingly, inhibition of CTLA-4 has the potential to help patients fight cervical cancer. Blocking PD-1 and CTLA-4 has shown remarkable antitumor activity in a wide range of tumors. 38 Several studies have explored the effects of PD-1/PD-L1 and CTLA-4 checkpoint co-inhibitory pathways on immunomodulatory functions and their potential tumor immunotherapeutic effects. 
Blockade of CTLA-4 may upregulate expression of PD-L1 on tumor cells and immune cells through IFN-γ produced by T cells. 38 In a study evaluating the safety and antitumor activity of an anti-CTLA-4 mAb, ipilimumab, in recurrent cervical cancer, PD-1 expression was found to be upregulated in peripheral CD4 + and CD8 + T cells following the use of ipilimumab. 39 In a preclinical study, concurrent blockade of PD-1 and CTLA-4 showed synergistic 3059 antitumor activity in mouse colorectal tumor models. 40 The potential mechanism may be that combination therapy leads to a favorable ratio of effector and regulatory T-cell, increased secretion of pro-inflammatory cytokines, and activation of tumorspecific T cells. 40 Another study, using an HPV + oral tumor model, showed an increased survival benefit under the condition of combining anti-PD-1 therapy with anti-CTLA-4 therapy. 41 The addition of CTLA-4 blockade therapy to PD-1 blockade therapy induces an expansive frequency of effector CD4 + T cells that is not sufficient for PD-1 inhibitory monotherapy, which is observed in murine colon carcinoma models. 9,42 However, dual blockade may not induce the expansion of phenotypically exhausted CD8 + T cells, which occurs with anti-PD-1 monotherapy, and may lead to an increase in activated terminally differentiated effector CD8 + T cells. 42 Therefore, dual blockade of PD-1 and CTLA-4 induces an improved frequency of effector CD8 + and CD4 + T cells in the tumor, resulting in an increased ratio of CD8 + T cell to Treg (regulatory T cell) and CD4 + T cell to Treg ratio, which correlates with high antitumor activity. 40,43,44 Collectively, these findings support the potential of dual blockade of PD-1 and CTLA-4 in cervical cancer. Clinical trials using this combination regimen in cervical cancer are currently underway (NCT02488759, NCT03894215, NCT05033132). In addition, a PD-1/CTLA-4 bispecific antibody (AK104) is undergoing clinical trials for recurrent or metastatic cervical cancer (NCT04380805, NCT04868708). Furthermore, several other clinical trials have added chemotherapy or radiation therapy to the combination of PD-1 and CTLA-4 blockade (NCT03518606, NCT03452332, NCT03277482). Hopefully, these clinical trials will provide evidence for the novel therapy for treating cervical cancer patients. Immune-related adverse events of the combination should not be neglected. It was reported that for advanced melanoma patients who received ipilimumab plus nivolumab, immune-related adverse drug reactions occurred more frequently and earlier with higher severity, compared with those who received ipilimumab or nivolumab alone. 45 Figure 2). It attenuates the functions of these immune cells by binding to its ligand PVR located in antigen-presenting cells or tumor cells. 46,47 Dual blockade of TIGIT and PD-1 has the potential to recover and augment the activity of immune cells in cervical cancer. Firstly, a significant increase in various subsets of peripheral blood NK cells and T cells expressing both PD-1 and TIGIT has been reported in patients with cervical cancer. 48 Moreover, diverse inhibitory checkpoints including PD-1 and TIGIT tend to co-express in HPV+ head and neck cancer samples. 49 Co-blockade of TIGIT and PD-L1 synergistically and specifically enhanced the effector function of CD8(+) T cells, leading to significant clearance of tumors and viruses in murine models of colorectal carcinoma and chronic viral infections. 
49,50 Therefore, concurrent blockade of TIGIT and PD-1 may significantly reverse the exhausted state and enhance the function of these PD-1 + TIGIT + immune cells, which may not be fully achieved by blocking only a single checkpoint receptor. Secondly, it was found that in mouse models of colon cancer, breast cancer, melanoma and fibrosarcoma, NK cells highly expressed TIGIT, while PD-1 expression was relatively low, resulting in limited efficacy of mono blockade of PD-1 on NK cells. 51 Therefore, dual blockade of TIGIT and PD-1 will not only enhance the function of T cells highly expressing PD-1 and TIGIT, but also enhance the function of NK cells by blocking TIGIT. 51 This means that this combination can sufficiently reactivate different types of antitumor immune cells through their highly expressed checkpoint receptors. A bispecific nanobody targeting both PD-1 and TIGIT has been shown to have the ability to enhance T cell activity in vitro. 52 Combination therapy of anti-PD-1 with anti-TIGIT has shown efficacy in various tumor models, such as colon carcinoma models and glioblastoma models. 50,53 In addition, CD96, another member in the TIGIT axis expressed on T cells and NK cells, was recently found to attenuate the function of CD8+ tumor-infiltrating lymphocytes in cooperation with PD-1 in cervical cancer. 54 As a result, dual blockade of CD96 and PD-1 further enhanced the function of CD8+ tumor-infiltrating lymphocytes and inhibited tumor growth in cervical cancer murine models. 54 In terms of clinical application, the anti-TIGIT mAb tiragolumab in combination with the anti-PD-L1 mAb atezolizumab has been approved by FDA for treating the metastatic non-small cell lung cancer (NSCLC). A phase II trial involving NSCLC patients reported that combination treatment of tiragolumab and atezolizumab achieved an overall response rate (ORR) of 37% and a median progression-free survival (PFS) of 5.6 months compared with 21% and 3.9 months for the atezolizumab treatment alone. 46 3060 69% of the patients had immune-related adverse events (most frequently rash and infusion) compared with 47% in the atezolizumab group. 46 Although there have been only a few preclinical and clinical investigations involving this combination in cervical cancer, encouraging results can be expected. LAG-3 Blockade LAG-3 is another immune checkpoint receptor whose ligands are MHC-II on APCs and LSECtin on tumor cells ( Figure 2). 55,56 LAG-3 is highly expressed in a variety of HPV-related malignancies, especially in cervical cancer with an expression rate as high as 75%. 53,55,57 LAG-3 blockade has been reported to exhibit excellent performance in enhancing immune activity. 57 Specifically, the blockade of LAG-3 enhanced the proliferation of CD4 + and CD8 + T cells and secretion of IFN-γ and TNF-α more conspicuously than the PD-1 blockade in vitro. 57 Furthermore, LAG-3 blockade increased markedly WT1 tumor antigen-specific T cells, whereas PD-1 blockade only slightly increased. 57 This significant difference between the two blockades is probably due to WT1 being expressed in various solid tumor cells, including cervical cancer cells, to play an oncogenic role during carcinogenesis. 58,59 Therefore, adding LAG-3 blockade to PD-1 blockade therapy may further activate antitumor immunity through the advantages of LAG-3 blockade over PD-1 blockade and significantly increase treatment efficacy. 
In addition, LAG-3 blockade inhibits Treg cells that perform inhibitory functions, and its influences on Treg cells were observed in mouse models with loss of both LAG-3 and PD-1. 60,61 Several studies have revealed that dual blockade of LAG-3 and PD-1 resulted in augmented T cell proliferation, an enhanced proportion of effector T cells, and improved T cell killing capacity compared with PD-1 blockade alone, thereby suppressing tumor growth. 57,61 In addition, dual blockade of LAG-3 and PD-1 resulted in upregulation of IFN-γ expression, and blockade of the two checkpoints achieved this through diffirent types of T cells-LAG3 blockade through naïve T cells and central memory T cells while PD-1 blockade through effector memory T cells. 57,62 Increased production of TNF-α may also be obtained under conditions of dual blockade. 62 Recently, a phase 2-3 clinical trial reported that the combination of relatlimab (a LAG-3-blocking antibody) and nivolumab achieved a median progression-free survival of 10.1 months compared with 4.6 months for nivolumab monotherapy in untreated advanced melanoma. 63 Another clinical trial has shown antitumor activity of the combination of ieramilimab (anti-LAG-3) and spartalizumab (anti-PD-1) in solid tumors, with 3 (2%) complete responses and 10 (8%) partial responses, compared with no complete response and partial response in ieramilimab monotherapy. 64 MGD013, a bispecific DART ® protein that binds to PD-1 and LAG-3, is going through a clinical trial for patients with a variety of tumors, including cervical cancer (NCT03219268). TIM-3 Blockade T cell immunoglobulin mucin 3 (TIM-3) plays an important role in immunosuppression following binding to its ligand Gal-9 ( Figure 2). 65 High expression of TIM-3 and its ligand was found in cervical and vulvar squamous neoplasia. 65 Overexpression of TIM-3 is associated with HPV-positive status and may be related to poor tumor differentiation and shorter survival time in cervical cancer. 66,67 Co-expression of TIM-3/Gal-9 and PD-1/PD-L1 occurs frequently in cervical cancer. 48,65 Tim-3 + PD-1 + TILs (tumor infiltrating cells) exhibit an exhausted phenotype characterized by decreased secretion of IL-2, TNF, and IFN-γ. 68 Therefore, concurrent blockade of TIM-3 and PD-1 may result in a reversal of immune cell exhaustion and an increase in cytokines, thereby enhancing the antitumor immune response against cervical cancer. This combination regimen has shown potential benefit in several advanced solid tumors. 69,70 One clinical trial reported that a combination of LY3300054 (anti-PD-L1) and LY3321367 (anti-TIM-3) showed antitumor activity against PD-1/PD-L1 inhibitor-naïve MSI-H/dMMR solid tumors. 69 Another clinical trial also showed preliminary efficacy of sabatolimab(anti-TIM-3) plus spartalizumab(anti-PD-1) for advanced solid tumors. 70 However, studies involving cervical cancer are sparse. Thus, the efficacy of the dual blockade approach in cervical cancer needs to be explored. A2AR Blockade A2A receptor (A2AR) is one of four subtypes of adenosine receptor and belongs to the G-protein coupled receptors (Figure 2). 71 A2AR is highly expressed on the immune cells and is involved in the regulation of immune functions after activation. 72 The adenosine pathway involves CD39/CD73/A2AR and was found to impede NK cell maturation and enhance the immunosuppressive function of regulatory T cells. 72 3061 which was found in colon cancer mouse models. 
74 Dual blockade therapy with A2AR and PD-1 contributed to tumor regression and prolonged survival in colon cancer models. 74 A phase I clinical trial has shown antitumor activity of the combination of ciforadenant (a small-molecule A2AR antagonist) and atezolizumab (anti-PD-L1) in renal cell cancer. 75 In addition, CD73 expression was found to be associated with limited efficacy of anti-PD-1 therapy, 76,77 and CD73 inhibitors were shown to enhance antitumor activity with PD-1 blockade in mouse tumor models of breast cancer and colon cancer. 78,79 A clinical trial involving cervical cancer patients is underway to evaluate the therapeutic efficacy of the combination of a CD73 monoclonal antibody with an oral A2AR antagonist or an anti-PD1 antibody (NCT03454451). Combination of Anti-PD-1/PD-L1 Therapy with Biotherapies With Adoptive Cell Therapy Adoptive cell therapy (ACT) is an immunotherapeutic approach that involves isolating autologous immune cells from a patient, manipulating them specifically ex vivo, and then infusing them back into the patient with the expectation that these manipulated cells will attack and eliminate tumor cells. 80,81 In chimeric antigen receptor and T-cell receptor (CAR-T/TCR-T) immunotherapy, which is the main modality of ACT, T cells are genetically modified to express a special chimeric antigen receptor (CAR) or T-cell receptor (TCR) to achieve specific and precise recognition of tumor cells. 80,81 There have been several clinical trials assessing the safety and efficacy of TCR-T cell therapy in cervical cancer, with significant regression of the tumors. 82,83 Although to date CAR-T cell therapy for cervical cancer has not been clinically evaluated, preclinical studies have shown preliminary efficacy and more preclinical assessments are underway. 84,85 In another strategy of ACT, tumor-infiltrating T cells (TIL), T cells are isolated from tumor samples, selected, amplified, and then reinfused. 80 Clinical studies have demonstrated efficacy of TIL in HPV-associated cancers. 86,87 The rationale for combining the use of adoptive cell therapy with PD-1 blockade can be illustrated from two perspectives. Firstly, monotherapy with PD-1 blockade has shown unsatisfactory efficacy in certain poorly immunogenic cancer types, as exhaustion-reversed effector T cells still cannot recognize these cancer cells well. 17,88 Therefore, the addition of manipulated T cells with good tumor recognition ability may enhance antitumor responses and treatment efficacy of these cancer types. 80 Secondly, CAR-T therapy is less effective in solid tumors than in hematological tumors, possibly due to the negative function of inhibitory checkpoints on CAR-T cells. [89][90][91][92] Therefore, applying PD-1 blockade to CAR-T therapy will augment the activity of CAR-T cells and improve the efficacy of this therapeutic approach in solid tumors, including cervical cancer. There are several strategies for blocking PD-1 function on CAR-T cells ( Table 2). One strategy is to engineer CAR-T cells to secrete PD-1-blocking antibodies that then act on the CAR-T cells themselves, which has been shown to enhance antitumor efficacy. 90,93,94 Besides PD-1-blocking antibodies, CAR-T cells modified to secrete soluble PD-1 (sPD-1) were also shown to improve antitumor efficacy through blockade of PD-L1 present on cancer cells. 
95,96 Another strategy is to genetically modify CAR-T cells to overexpress a PD-1 dominant negative receptor that competes with normal PD-1 to bind to PD-L1 on tumor cells but does not show inhibitory effects, thereby attenuating the effects of the PD-1 signaling pathway. 97 In addition, the use of CRISPR/Cas9 gene-editing approaches could disrupt PD-1 on CAR-T cells, and stronger antitumor immunity has been observed in vitro and in vivo. 98,99 Tang et al constructed a chimeric activated receptor named PD1-CAR, which consists of the extracellular domain of PD1 and the transmembrane and intracellular domains of the positive costimulatory molecules CD28 and 4-1BB. T cells expressing PD1-CAR retained the capacity to bind to PDL1 and were activated to specifically target PD-L1 + tumor cells. 100 CD8+ T cells transfected with PD1-CAR (CAR-T-PD1 cells) showed higher antitumor activity against cervical cancer in a mouse model, while CAR-T-PD1 cells activated by HPV16mE7-pulsed and SOCS1-silenced DCs showed even more significant increases in cytokine secretion, cytotoxic activity and survival rate. 89 In addition to intrinsic PD-1 blockade of adoptively transferred T cells, which license the cells with checkpoint blockade without extra antibody administration, one study used TIL therapy in combination with the PD-1 monoclonal antibody nivolumab to treat patients with metastatic cervical cancer with low microsatellite instability and low PD-L1 expression, and observed an improved prognosis. 101 Besides T cells, NK cell therapy also has the potential to cooperate with anti-PD-1 therapy to improve antitumor efficacy. Jeffrey et al developed a manufacturing system for production of NK cells derived from induced pluripotent stem cells (iPSC-derived NK cells), which can recruit and activate T cells to tumors and make them responsive to PD-1 blockade due to their potential to overcome checkpoint blockade resistance, thereby enhancing cytokine production and tumor elimination in ovarian cancer models. 102 These results suggest that the use of a combination of adoptive cell therapy and PD-1 blockade therapy in solid tumors, including cervical cancer, is promising, although more studies are needed to further validate this. There have been clinical trials demonstrating the antitumor efficacy of the combination of CAR-T cell therapy and PD-1 blockade in solid tumors. 103,104 A clinical trial involving patients with recurrent, metastatic and persistent cervical cancer is underway, with a cohort receiving TIL therapy and pembrolizumab (NCT03108495). In addition, an ongoing clinical trial is evaluating the efficacy of HPV-E6-specific TCR-T cells with anti-PD-1 autocrine elements in the treatment of HPV-positive head and neck carcinoma and cervical cancer (NCT03578406). With Therapeutic Vaccines Therapeutic vaccines are designed to activate antigen-specific immunity and then kill tumor cells through a variety of vaccine platform technologies. The therapeutic vaccines have included live vector vaccines, peptide/protein-based vaccines, DNA vaccines, cell-based vaccines, and combinatorial strategies such as prime-boost regimen. 105,106 For treating HPV-associated cervical cancer, the E6 and E7 oncoproteins of HPV are the ideal vaccine targets because they are constitutively expressed on malignant cells following HPV infection and play a pivotal role in the carcinogenesis and maintenance of malignant phenotype in cervical cancer. 
[105][106][107][108] The desired effect of HPV-targeted vaccines is to elicit a robust immune response produced primarily by Th1 cells and cytotoxic lymphocytes, which may be the key elements to clear HPV-induced cervical cancer. 105,106,109 Although clinical studies have demonstrated the efficacy of the therapeutic HPV vaccine in cervical intraepithelial neoplasia (CIN), 110,111 efficacy is unsatisfactory in invasive cervical cancers and no therapeutic vaccines have been approved in clinical practice yet. The efficacy is limited due to negative regulatory role of various factors in the tumor immunosuppressive environment, including the negative function exerted by the PD-1 axis on vaccine-activated immune cells, resulting in the exhaustion of these cells that are expected to play a role in the eradication of cancer cells. [112][113][114] Therefore, combining PD-1 blockade with therapeutic vaccines may maintain the immune activity of vaccine-activated cells, thus allowing them to efficiently kill tumor cells. Lee et al developed a non-oncogenic HPV 16 E6/E7 vaccine called Ad5 [E1-, E2b-]-E6/E7 immunizations. 115 They have demonstrated that this therapeutic vaccine induced HPV-E6/E7 specific cell-mediated immune responses. In a mouse model of HPV-E6/E7 TC-1 tumors, co-administration of anti-PD-1 antibody with Ad5 [E1-, E2b-]-E6/E7 showed tumor regression rates of up to 57% compared to 29% in mice treated with Ad5 [E1-, E2b-]-E6/E7 alone. 116 In addition, they observed that the expression of PD-1 and LAG-3 on TILs and PD-L1 on tumor cells was reduced when Ad5 [E1-, E2b-]-E6/E7 was bound to anti-PD-1 antibodies, implying a reduction in the exhaustive phenotype of effector T cells. 116 Hung et al developed a pBI-11 DNA vaccine targeting E6/E7 of HPV16 and HPV18, and found that combination of the pBI-11 DNA vaccination boosted by TA-HPV (tissue-antigen HPV vaccine) with PD-1 antibody blockade induced E7-specific CD8 + T cell immune responses and higher antitumor effects as well as better survival, whereas treatment with anti-PD-1 antibody alone without a prior immune response did not show significant antitumor effects when treating mice bearing HPV-E6/E7 TC-1 tumors. 117 This suggests that the efficacy of PD-1 blockade might be improved with the combination of therapeutic vaccines. A phase II clinical trial showed that nivolumab in combination with ISA 101, a synthetic long-peptide HPV-16 vaccine, was superior to PD-1 blockade alone in treating patients with incurable HPV-16-positive cancers, having 33% of ORR and 17.5 months of median overall survival (NCT02426892). 118 A single-arm, phase II trial that has a combination of GX-188E therapeutic DNA vaccine and pembrolizumab for treating recurrent and advanced cervical cancer is ongoing. 119 Interim analysis declared preliminary anti-tumor activity, with 42% of patients showing an overall response at 24 weeks. 119 In addition, some other clinical trials focusing on this combination regimen involving cervical cancer patients are underway (NCT04800978, NCT03946358). Combination of Anti-PD-1/PD-L1 Therapy with Other Immunotherapies With TLR9 Agonists TLR9, a member of Toll-like receptors, is a pattern recognition receptor expressed on the innate immune cells, including dendritic cells, macrophages, and natural killer cells. 120,121 TLR9 agonists elicit the secretion of cytokines such as interferon (IFN) that contribute to antigen presentation to naive T cells and induce antigen-specific adaptive immune responses. 
[120][121][122] The combination of TLR9 agonists and anti-PD-1 therapy has been observed to have superior antitumor effects at both injection sites and distant non-injected sites, revealing systemic antitumor immunity. [122][123][124] CMP-001, a virus-like particle (VLP) encapsulating an immunostimulatory CpG-A oligodeoxynucleotide (ODN) TLR9 agonist, combined with PD-1 blockade elicited durable regression of injected and distant tumors and prolonged survival of the HPV + tumor mouse model, compared to anti-PD-1 alone. 122 The mechanism underlying the therapeutic effects was elucidated by increased recruitment of activated T cells to the draining lymph nodes and enhanced circulating TNFα and IL-6 levels when the combination treatment was administered. 122 Another study, by Torrejon et al, revealed that TLR9 agonist can overcome anti-PD-1 resistance caused by JAK1/2 loss of function mutations and subsequent lack of IFN signaling by inducing a potent type I IFN systemic response. 124 Similarly, after intratumoral injection of TLR9 agonist SD-101 combined with anti-PD-1 in mice with JAK1 and JAK2 knockout tumors, antitumor effects were observed at both the injection site and the contralateral noninjection site, and survival rates were higher. 124 In addition, TLR9 agonists have been reported to improve the response rate to anti-PD-1 therapy because they upregulate PD-L1 expression in hepatocellular carcinoma cells through STAT3 (Tyr705) phosphorylation. 125 Therefore, the addition of TLR9 agonists to PD-1 blockade has the potential to enhance antitumor effects via multiple mechanisms and achieve systemic antitumor responses, indicating promising clinical applications. Results of a phase Ib clinical trial of the TLR9 agonist DV281 plus nivolumab showed potential efficacy in NSCLC. 126 With TGF-β Inhibition TGF-β promotes tumor progression, invasiveness and metastasis in the late stages of tumor (mainly TGF-β1 and TGF-β2, while TGF-β3 may have a protective function). 127,128 The activation of TGF-β pathway is associated with the reduced chemo-sensitivity in gynecologic cancer. 129 Combined inhibition of PD-L1 and TGF-β may enhance antitumor activity due to independent and complementary immunosuppressive effects on the two pathways. 130 In a study by Strauss et al, a fusion protein composed of a mAb against PD-L1 fused to a TGF-β "trap" was used to treat the heavily pretreated patients with advanced solid tumors, including cervical cancer. 130 Results showed that one cervical cancer patient was ongoing confirmed to have a complete response, and one other was in near partial response. With IDO Inhibition Indoleamine 2.3-dioxygenase (IDO) is a crucial enzyme in the catabolism of tryptophan to kynurenine. 131 Tryptophan depletion induces an increase in T cell apoptosis and a decrease in T cell proliferation through suppression of mTORC1 and eIF-2 phosphorylation, respectively, while kynurenine accumulation leads to a promotion of Treg differentiation, resulting in a diminished antitumor immune response. 131 Thus, IDO inhibition enhances antitumor immunity and promotes cancer elimination to generate encouraging results of suppressing various types of tumor in animal models and clinical trials. 3064 Considering the remarkable effects of IDO inhibition, combining it with PD-1 blockade may produce high treatment efficacy in cervical cancer, as evidence has shown. As detected in cervical squamous neoplasia, expression of tumoral IDO was up to 75%. 
132 In addition, the co-expression of IDO and PD-L1 on cervical cancer cells was 63%. 132 These data provide evidence to support the immunotherapy of concurrent inhibition of IDO and PD-1 in the majority of cervical cancer patients. Furthermore, it was observed that IDO mRNA expression increased after the treatment of anti-PD-1 in a murine melanoma model, which may be part of resistance mechanisms of anti-PD-1 therapy and thus can be overcome with the combination regimen. 133 Spranger et al found that dual blockade of IDO and PD-1/PD-L1 induced tumor rejection by restoring IL-2 production and proliferation of tumor-infiltrating CD8(+) T cells in the tumor microenvironment without attracting new T cells from secondary lymphoid structures. 133 Given these studies revealing the possible potential mechanisms of improving combination therapy efficacy, the undergoing clinical trial will be promising. A phase I clinical study has shown the efficacy of a combination of the IDO inhibitor navoximod and the anti-PD-L1 mAb atezolizumab in multiple types of cancer, including cervical cancer. 134 Another clinical trial is ongoing to evaluate the efficacy and safety of pembrolizumab plus epacadostat, an orally available IDO1 inhibitor, in recurrent and metastatic head and neck squamous cell carcinoma (HNSCC), an HPV-related cancer type (NCT03358472). Conclusions and Perspectives The combination of different immunotherapeutic approaches is rational and promising, with an increased ability to mobilize the host immune system to recognize, fight and ultimately destroy malignant cells. As mentioned above, some combination strategies have shown higher efficacy in cancers other than in cervical cancer. The limited preliminary efficacy in cervical cancer shown in the published studies is still far from clinical application. In addition, some other immunotherapeutic strategies involving some immune-related enzymes and chemokines as well as co-stimulatory molecules (such as ICOS) are not listed here, as their combinations with PD-1/PD-L1 blockade are rarely investigated in cervical cancer and need more attention and research. In spite of this, combinatorial immunotherapies hold great potential in cervical cancer treatment due to unique features in the tumor microenvironment of HPV-induced cancers. Therefore, preclinical and clinical studies are greatly required for evaluating the antitumor efficacy and safety of these combinatorial immunotherapy approaches in cervical cancer. Despite the potential advantages of a combinatorial strategy, there are challenges and problems. Firstly, concurrent administration of two immunotherapy approaches may increase the frequency of immune-related adverse events and even lead to special ones that never occur in monotherapy. In fact, higher frequency and greater severity as well as earlier onsets of adverse drug reactions have been observed when combining two immune checkpoint inhibitors, and even, in some severe cases, the drugs had to be terminated to control the immune toxicities. 45 Secondly, it remains a challenge how to identify patients who are suitable for a certain combination regimen and have a high possibility of gaining benefits. Mismatch repair deficiency, elevated tumor mutation burden, high microsatellite instability, increased intratumoral plasma cells and elevated Notch signaling have shown the potential to predict clinical benefit of PD-1 blockade therapies. 
[135][136][137][138][139] In addition, vaginal and gut microbiota also influence immune checkpoint protein profiles. 140,141 PD-L1 and LAG-3 in the cervicovaginal microenvironment are negatively associated with abundance of Lactobacillus while positively correlated with dysbiotic bacteria in the vagina. 140 Bifidobacterium in the gut are positively correlated with response to anti-PD-L1 treatment in cancer patients. 141 Therefore, to select appropriate patients for combinatorial immunotherapy, it may be valuable to test gene expression profiling, molecular profiling, immune profiling or microbiota as the bases for selection. 142 Thirdly, when is the best time to use combinatorial immunotherapies and how they can be added to the existing treatment algorithms need to be investigated. Accordingly, future studies should focus on resolving these issues, which are important for the entry of combination therapies into clinical practice, in addition to further validating their efficacy, durability and safety in patients with cervical cancer. Disclosure The authors report no conflicts of interest in this work.
Case study of the convergent evolution in the color patterns in the freshwater bivalves The class Bivalvia (phylum Mollusca) is one of the most successful at survival groups of animals with diverse color patterns on their shells, and they are occasionally preserved in the fossil record as residual color patterns. However, the fossil record of the residual color patterns in freshwater bivalves could be traced only to the Miocene, greatly limiting color pattern evolution knowledge. We present the color patterns of the Cretaceous freshwater bivalves belonging to three extinct families of the order Trigoniida (hereinafter the Kitadani Freshwater Bivalves) from Japan, which is the oldest and the second fossil record of freshwater molluscan color patterns. The Kitadani Freshwater Bivalves consists of two types of color patterns: stripes along the growth lines and radial rays tapered toward the umbo, which resemble that of the colored bands of extant freshwater bivalves. This resemblance of the color patterns between the Kitadani Freshwater Bivalves and the extant species indicates that the color patterns of the freshwater bivalves represent the convergent evolution between Trigoniida and Unionida. To explain this convergent evolution, we advocate three conceivable factors: the phylogenetic constraints, monotonous habitats typical of freshwater ecosystems, and the predation pressure by visual predators in freshwater sediments. www.nature.com/scientificreports/ Data Figs. 17a,c, 18a,c, 19a,c). The radial stripes reside on the interspace of the plicated ribs on two-thirds of the posteroventral side of the shells and extend from the ventral shell margin to the median shell. In †P. naktongensis having elongated elliptical shells with radial plicated ribs and growth lines (Extended Data Figs. 11, 12b,d, 13b,d, 14b,d, 15b, see Supplementary Information S1), there are more bands running along the growth lines than those in †M. matsumotoi and sinuous near the posterior plicated ribs (Fig. 1c,f, Extended Data Figs. 12a,c, 13a,c, 14a,c, 15a). Additionally, †P. naktongensis bears colored axial segments that are 2-3 mm wide and arranged radially on the anteroventral portion. In †T. tetoriensis featured by subtrigonal shells with V-shaped ribs (Extended Data Figs. 7, 8b,d, 9b,d, 10b,d, see Supplementary Information S1), five to seven 1-5 mm wide dark stripes, appear along the growth lines, whereas radial stripes are absent, unlike †M. matsumotoi and †P. naktongensis (Fig. 1a, Color patterns of extant freshwater bivalves. Among extant freshwater bivalves, similar color patterns were observed in the Order Unionida (Figs. 2,3,4,5). Patterns can be classified into two types. One type bears four to five, dark green to greenish-brown colored bands that are 2-3 mm wide along the growth lines. The other exhibits bands with various widths and five to twenty dark green to greenish-brown colored rays from the umbo, part of which is bundled to form an approximately 10 mm wide color band. Some unionids are equipped 2,5,6). In all these taxa, juveniles tended to exhibit brighter and more distinct color patterns than adults. Discussion Remarks on the residual color patterns in the Kitadani Freshwater Bivalves. Residual color patterns in the form of visible pigmentation on fossil molluscan shells are generally uncommon 2,3 . 
In the Paleozoic to Mesozoic fossil records, the color patterns were limited to marine species 3 , which are preserved as black to dark-colored bands running on the shell surface as melanin pigments 20,21 . The black to dark-colored stripes on the shells of the Kitadani Freshwater Bivalves resemble the color patterns in some extant freshwater bivalves, suggesting that the dark bands are residual color patterns remaining as melanin pigments. Consequently, the Kitadani Freshwater Bivalves represents the oldest and second fossil record of residual color patterns among fossil freshwater bivalves. The residual color patterns of the Kitadani Freshwater Bivalves resemble the color patterns of extant freshwater bivalves in terms of width, number, and distribution of the colored bands. Both the Kitadani Freshwater Bivalves and extant freshwater bivalves examined in this study consist of two types of color patterns: stripes along the growth lines and radial rays tapered toward the umbo. Notably, the former pattern is similar among all the species examined, as it forms in the peripheries of prominent growth lines occurring periodically. In the latter pattern, however, the morphology and distribution of the bands are slightly different between the Kitadani Freshwater Bivalves and the extant species. The Kitadani Freshwater Bivalves exhibits relatively distinct and wide radial rays running roughly parallel to the lengths of the sculpture elements (radial plications and/or wrinkles), while the extant species bear obscure and fine radial rays running diagonally to the lengths of the sculpture elements. Nonetheless, the taxa with V-shaped sculpture elements (wrinkles, ribs or arranged nodules) lack or www.nature.com/scientificreports/ bear ambiguous radial rays, whether extant (e.g., Triplodon spp., Indochinella spp. and Tritogonia spp.) 13,15,22 or extinct ( †Trigonioides tetoriensis). Hypothesis I: phylogenetic constraints. The resemblance of the color patterns between the Kitadani Freshwater Bivalves and the extant unionids possibly resulted from the phylogenetic constrains. Each of the three species of the Kitadani Freshwater Bivalves belongs to a separate family ( †Trigonioides tetoriensis: †Trigonioididae, †Plicatounio naktongensis: †Plicatounionidae, and †Matsuomtoina matsumotoi: †Pseudohyriidae) in the order Trigoniida 19 . Trigoniida in turn, forms the subclass Palaeoheterodonta with Unionida 23 . This raises a possibility that the color patterns observed in the Kitadani Freshwater Bivalves and the extant unionids is inherited from their most recent common ancestor. In other words, these color patterns, stripes along the growth lines and radial rays tapered toward the umbo, may be the apomorphy for Palaeoheterodonta. In fact, some extant trigoniid species belonging to Neotrigonia exhibit color pattern similar to those in the Kitadani Freshwater Bivalves and extant unionids in this study (e.g. Neotrigonia margaritacea) 1 . Interestingly, the coloration of color patterns is quite different between unioniids (green to blue colorings) and trigoniids (red to yellow colorings), and the oldest known color patterns of the Palaeoheterodonta (Myophorella nodulosa, a marine species of Trigoniida from the Oxfordian of the Early Jurassic) appears different (concentric rows of patches) 10 from those of the Kitadani Freshwater Bivalves or the extant unioniids. 
These observations suggest that colorations evolved independently, in contrast to the color patterns, between Trigoniida and Unionida, and that Trigoniida more diverse color patterns than Unionida did in the Palaeoheterodont evolutionary history. Although further examination of the fossil record for the residual colors and color patterns in Palaeoheterodonta is essential, it is plausible that the habitat differences may have caused such discrepancy in the colorations and color patterns between Trigoniida (mainly marine) and Unionida (freshwater) in spite of the phylogenetic constrains. Hypothesis II: convergent evolution. The other possible interpretation of the color pattern similarity between the Kitadani Freshwater Bivalves and extant Unionida, is the convergent evolution. One potential factor that may have caused this convergent evolution of the color patterns is an adaptation to their habitats. In general, much of the convergent evolution in animals occurs through the morphological evolution in response to their Considering marine mollusks, the shell colors and their patterns have great diversity due to varying habitat environments, especially in coral reeves that exhibit various colors and complex ecosystem 2,6 . Conversely, in the freshwater ecosystem, the environmental colors are relatively monotonous with rocks, sand, mud, and green algae 8 , and such habitat conditions are likely indifferent between the Mesozoic and Cenozoic. As a result, the freshwater bivalves retained simple and monotonous color patterns for adapting to such environments throughout their evolution. Another conceivable factor to explain the convergent evolution in the color patterns of the studied freshwater bivalves is the selection pressure by visual predators. In general, the shell colors and their patterns in bivalves act as camouflages against the predators 2,7,8,[26][27][28] . Previous studies have demonstrated that extant freshwater bivalves are preyed upon by crayfish, fish, birds, reptiles, and mammals 29,30 . Because shell colors in freshwater bivalves tend to be greenish, such colors may be an adaptation against visual predators for blending into the freshwater sediments on which abundant greenish phytoplanktons occur 2,8 . Therefore, the evolutionary conservatism in color patterns of freshwater bivalves may result from camouflages into freshwater microenvironments, which has been advantageous against visual predators since the late Early Cretaceous. The above discussion assumes that the visual predators of freshwater bivalves remained similar for at least 120 million years. Which animals could have been potential threads to the Kitadani Freshwater Bivalves, and, in turn, the Early Cretaceous freshwater bivalves? Among the extant visual predators of the freshwater bivalves, those whose lineages were present in the Early Cretaceous include crustaceans (especially brachyuran decapoda 31 ), fish, lizards, turtles, crocodiles, birds, and mammals. Among them, the fossil record of durophagous lizards and mammals can be traced back only to the Late Cretaceous 32,33 . Conversely, lines of fossil evidence suggest that some fish 34,35 , turtles 36 , and crocodiles 35 fed on molluscan invertebrates during the Early Cretaceous, and the Kitadani Freshwater Bivalves indeed occurs with abundant lepisosteiform scales, testudinate shells and crocodile teeth. 
Additionally, at least one Early Cretaceous avian species with crustacean gut contents can be attributed to the durophagous diet 37 , and the Kitadani Formation has yielded avialan skeletal remains 38 , and footprints 39,40 . Therefore, fish, turtles, crocodiles, and birds are likely candidates for visual predators of the Early Cretaceous freshwater bivalves, and have remained so until present. Additionally, while crustaceans have not been identified in the Kitadani Formation, they flourished in the Early Cretaceous and their remains occur with the fossil freshwater bivalves of the time elsewhere 31 . Thus, crustaceans may have also played a role as visual predators of the freshwater bivalves since the Early Cretaceous. In addition to the crustaceans, fishes, turtles, crocodiles and birds, the visual predators of the Early Cretaceous freshwater bivalves likely include extinct lineages. For example, some pliosauroid plesiosaurs are suggested as being durophagous 34 , although the freshwater members of the group are considered endemic 41 and less likely to be a major thread to the Early Cretaceous freshwater bivalves. Another extinct candidate is non-avian dinosaurs. Ornithischians are suggested to have possessed a dietary flexibility including the durophagy. For instance, wellpreserved hadrosaurid coprolites from the Late Cretaceous of Montana, U.S.A. include sizeable crustaceans and mollusks, possibly suggesting that the Cretaceous freshwater mollusks were consumed by these herbivorous dinosaurs 42 . In addition, some basal ceratopsian psittacosaurids are hypothesized for the durophagy based on the predicted large bite force in the caudal portion of the toothrow 43 . Among saurischians, some oviraptorosaurian theropods are indicated to consume mollusks with hard shells based on their mandibular features 44 . While hadrosaurids, psittacosaurids, and oviraptorosaurians have not been identified in the Kitadani Formation, psittacosaurids, and oviraptorosaurians are common elsewhere in the Early Cretaceous of East Asia 45,46 , and hadrosauroid Koshisaurus is present in the formation 47 . Because dinosaurs occupied a niche of large terrestrial predators throughout the Mesozoic, they may have acted as one of major mollusk consumers in absence of large lizards and mammals in the Early Cretaceous ecosystem. Thus, the predation pressure by visual predators to the freshwater bivalves in the Early Cretaceous is likely similar to that in the present. Consequently, one of evolutionary adaptations of the freshwater bivalves against such pressure has remained to camouflage in the phytoplankton-rich sediments, leading to the long-term evolutionary conservatism of their color patterns. Conclusions Our study provides evidence for potential phylogenetic constraints in the shell color patterns in the freshwater bivalves, namely Trigoniida and Unionida. Alternatively, our study exemplifies possible convergent evolution that occurred at least 120 million years apart in the evolutionary history of these taxa. The convergence may be promoted by monotonous habitats typical of freshwater ecosystems. Another possible explanation to this convergent evolution is the predation pressure by visual predators like crustaceans, fishes, turtles, crocodiles and dinosaurs (replaced by birds and mammals today), and the evolutionary adaptation against such pressure to camouflage in the freshwater sediments. 
To further test our hypotheses about the evolution of the color patterns in freshwater bivalves, it is essential to accurately evaluate the selective pressures that drive the adaptation of the color patterns in modern taxa. Nonetheless, our study provides an opportunity to explore the mechanisms that determine color patterns of freshwater mollusks and represents a milestone toward resolving their adaptive evolution in color patterns.

Methods

The studied specimens of extant freshwater bivalves are deposited at the Fukui Prefectural Dinosaur Museum (FPDM) (Extended Data). Fossil freshwater bivalves were collected from the Kitadani Dinosaur Quarry, Katsuyama, Fukui, central Japan, where the Lower Cretaceous Kitadani Formation (Aptian) of the Tetori Group crops out. Among approximately 6000 bivalve individuals collected from the quarry, we selected the best preserved individuals for analyzing color patterns, resulting in 17 specimens (Extended Data Table 2). The specimens were mechanically prepared using powerful flying pneumatic scribes, including an HW-65 with a pointed 3 mm tip and an HW-322 with a 1.3 mm needle (German Engineered Precision Tools, Tethys, 1-73-5 Beppu, Mizuho, Gifu, Japan). Thin sediments and diagenetic minerals on the shell surface were removed using a sand blasting tool, KRANTZ sandblaster 70-250 µm W1625, with reduced iron powder #150 (75-150 µm in diameter with a new Mohs hardness of 4.5; Fuji Manufacturing) adjusted to 0.7-0.8 MPa. After blasting, apricot powder #150 (75-150 µm in diameter with a new Mohs hardness of 3.5; Fuji Manufacturing) adjusted to 0.7-0.8 MPa was used to remove fine sediments and minerals without damaging the shell. After preparation, the fossil specimens were photographed using a Canon EOS Kiss X10 with a SP AF60mm F/2 Di II LD [IF] MACRO 1:1 lens using two methods: whitening for shell ornamentation and water-immersion for residual color patterns. Whitening photography was conducted for Extended Data Figs. 7, 8b,d, 9b,d, 10b,d, 11, 12b,d, 13b,d, 14b,d, 15b, 16, 17b,d, 18b,d, 19b,d by coating the shell surface with ammonium chloride and lighting from the northwest to enhance contrast. Residual color patterns of the fossil specimens were imaged by immersing the specimens in water and photographing them with lighting from the northwest, adjusted so that the light directions were identical between the water-immersion and whitening photographs. Transmitted, whitening, and water-immersed images were post-processed using Adobe Photoshop 2020, first applying the 'sharpen more' and 'sharpen' functions, followed by background removal. Minor adjustments were occasionally made to the exposure. The high-resolution images were down-sampled using Adobe Illustrator 2021 to lower-resolution TIFF files for use in the plates. Reconstruction drawings of residual color patterns in fossil freshwater bivalves (Fig. 1d-f) were prepared using Adobe Illustrator 2021 based on the high-resolution images. The drawings use the CMYK color model.

Data availability

All data generated or analyzed during this study are included in this published article [and its supplementary information files S1].
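The image post-processing described in the Methods is a short, repeatable sequence: sharpening, an occasional exposure adjustment, down-sampling, and TIFF export. Purely as an illustration (the authors used Adobe Photoshop and Illustrator rather than a script, and the file names and parameter values below are placeholders), an equivalent pipeline could be expressed with Pillow:

```python
from PIL import Image, ImageEnhance, ImageFilter

def postprocess(src_path, dst_path, exposure=1.0, scale=0.5):
    """Sharpen, optionally adjust exposure, then down-sample and save as TIFF.

    Background removal (done manually in the original workflow) is omitted.
    """
    img = Image.open(src_path)
    img = img.filter(ImageFilter.SHARPEN)   # analogue of 'sharpen more'
    img = img.filter(ImageFilter.SHARPEN)   # analogue of 'sharpen'
    if exposure != 1.0:
        img = ImageEnhance.Brightness(img).enhance(exposure)
    new_size = (int(img.width * scale), int(img.height * scale))
    img = img.resize(new_size, Image.LANCZOS)  # down-sample for the plates
    img.save(dst_path, format="TIFF")

# Hypothetical usage with placeholder file names:
# postprocess("specimen_whitened.tif", "specimen_plate.tif", exposure=1.05)
```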
2022-07-15T06:16:39.550Z
2022-07-13T00:00:00.000
{ "year": 2022, "sha1": "5132251b30f5ec4ed09b86932e713c22206a7ea1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0540f84ff1dd78a25bef6142dd27587ec7185d27", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231773502
pes2o/s2orc
v3-fos-license
Phosphatidylethanolamine binding protein 1 enhances sensitivity of gastric cancer cell to 5-fluorouracil via inhibition of cell proliferation, migration and invasion Purpose: To determine the association between phosphatidylethanolamine binding protein 1, which is an Raf kinase inhibitor protein (RKIP), and 5-fluorouracil (5-FU) via analysis of the association between RKIP and clinical responses in individuals treated using fluorouracil-based chemotherapy. Methods: Human gastric cancer cell lines MGC-803 and SGC-7901 were used in this study. Cell viability was measured using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Apoptosis and migration were determined with flow cytometry and Transwell chamber assays, respectively. The mRNA and protein expressions of apoptosis-related factors were assayed using realtime polymerase chain reaction (RT-PCR) and Western blotting, respectively, while the expression of RKIP was determined by immunohistochemical staining. Results: Chemotherapeutic drug (5-FU) treatment induced low RKIP expression levels in tumorigenic GC cells, thereby sensitizing the cells to apoptosis (8.57 vs 1.25 %, p < 0.01). The highest RKIP level correlated well with initiation of apoptosis (4.20 vs 1.25 %, p < 0.01). Following in vitro downregulation of RKIP, there was increase in the viability and proliferation of RKIP-inhibited cells over time, and these changes were linked to alterations in cell cycle phases and increased optical density in MTT proliferation assay (1.55 vs 1.18, p < 0.01). In vitro Transwell assay measurement revealed an association between RKIP downregulation and enhancement of cell migration potential (652 vs 436, p < 0.01). Ectopic RKIP expression restored the apoptotic sensitivity of resistant cells (14.30 vs 1.36 %, p < 0.01). This sensitization was annulled by upregulation of survival routes. Reduction of RKIP by expression of antisense and siRNA conferred resistance on cancer cells sensitive to 5-FU-mediated apoptosis (6.88 vs 2.13 %, p < 0.01). Conclusion: Thus, RKIP is a promising therapeutic strategy for improving the efficacy of clinically relevant chemotherapeutic drugs for GC. INTRODUCTION Gastric cancer (GC) is the second most prevalent cancer all over the globe. China is among several Asian nations with a high incidence of GC and high level of mortality from the disease. Although the incidence and mortality associated with GC have been declining steadily, prognosis in several cases is bad due to late diagnosis and metastasis. The Raf kinase inhibitor protein (RKIP) is a globular protein with molecular weight of 20-25 kDa, and it belongs to the PEBP family made up of over 400 members [3]. It (RKIP) is phosphatidylethanolamine-binding protein in bovine brain. Studies have shown that RKIP usually binds to Raf-1 and blocks Raf-1-induced MEK phosphorylation [4,5]. In addition, RKIP regulates signaling routes and influences several processes in cells [6]. Moreover, RKIP exerts anti-angiogenic, anti-intravasating, antiextravasating and anti-metastatic effects on tumors [7,8]. However, not much is known about the molecular mechanisms involved in the RKIPinduced inhibition of tumor metastasis. Several signaling pathways are negatively modulated by RKIP. However, the precise pathways or effectors involved have not yet been identified. 
Thus, the identification of the signaling pathways and elucidation of the effector genes regulated by RKIP will not only enhance knowledge of the mechanism of suppression of metastasis, but will also be beneficial for inhibition of metastasis in the clinics. In this study, based on previous findings, the effect of RKIP on malignancy of GC and sensitivity to chemotherapy were investigated. Therefore, the expressions of RKIP protein in GC cells and normal cells were assayed, and the effects of RKIP suppression on the malignancy of GC and sensitivity to chemotherapy were determined. Specifically, the study was designed to determine if there is an association between RKIP expression and clinical response in GC cases subjected to fluorouracil-based chemotherapy. EXPERIMENTAL Cell lines, culturing and transfection Human GC cell lines MGC-803 and SGC-7901 were obtained from Cell Bank of Chinese Academy of Sciences. They were maintained in RPMI-1640 having 10 % FBS and 1 % penicillinstreptomycin at 37 o C in a 5 % CO2 humidified incubator. The culture-related chemicals were bought from Hyclone, while 5-FU was bought from Sigma-Aldrich. Overexpression plasmid and shRNA The RKIP shRNA targeting open reading frame of RKIP was 5′-CGAGCAGCTGTCTGGGAA GTA-3′. An unrelated 19-nt sequence (5′-TTCTCCGAACGTGTCACGT-3′) was used as shRNA-negative control. The coding sequence of RKIP was cloned into pcDNA3.1 plasmid by Clonetech. The negative control used was pcDNA3.1. In the overexpression process, 2×10 6 cells from each cell line were plated separately and cultured overnight. The serum-free medium was changed, followed by addition of siRNA or plasmid with lipofectamine 3000 (Invitrogen). After incubation for 6 h, the medium was replaced with 10 % FBS. After culturing for 48 h, the efficiency of inhibition or overexpression was measured with RT-PCR and Western blotting. MTT assay In cell proliferation assay, each cell line was plated at a density of 2000 cells/well in triplicates in 96-well plates and were subjected to incubation for 2 days at 37°C in a 5 % CO2 humidified chamber, followed by addition of 10 μL of MTT (10mg/mL) to each well, and incubation for 2 h at 37°C. Thereafter, the MTT was discarded, and 100 uL of dimethyl sulphoxide (Sigma) was added to every well, so as to solubilize the formazan crystals formed. The absorbance of formazan solution was read at 595 nm in a microplate reader (Thermo Fisher). Protein extraction and Western blot assay Cells and tissues were lysed in 50 mM radioimmunoprecipitation assay buffer containing 150 mM sodium chloride, 1% NP-40, protease inhibitor, 0.5 % Na deoxycholic acid and 1 mM PMSF. The protein content of the lysate was measured using BCA protein assay kit. Equal protein levels were separated via SDS-PAGE, and trans-blotted onto PVDF membrane using semi-dry transfer. The membrane was blocked by incubation using 5 % skimmed milk for 60 min, and was thereafter incubated overnight with the following primary antibodies: RKIP (ab76582, Abcam); GAPDH (ab8245, Abcam); Bax (ab77566, Abcam); caspase-3 (ab32042, Abcam); Bcl-2 (ab32124, Abcam), and RKIP (ab76582, Abcam). Following rinsing, the membrane was treated with the 2 o antibodies goat anti-mouse IgG HRP (m21001) and goat anti-rabbit IgG HRP (m21002) at room temperature for 60 min. Thereafter, ECL and Western blot detection system (GE Lifescience) were used to measure bound antibodies. RNA extraction and RT-PCR Total RNA extraction was done using TRIzol reagent. 
First-strand complementary miRNA was produced from RNA with PrimeScript RT master Mix Perfect Real Time. The RT-PCR was done using SYBR green (Takara, Dalian, China) on Applied Biosystem Stepone Plus RT-PCR system, with GAPDH as loading control. Table 1 shows the sequences of the primers used. Flow cytometry Cell apoptosis was measured flow cytometrically. Samples for cell cycle were harvested via tryptic digestion and rinsed two times in PBS. After addition of Annexin V-FITC and PI in the dark, the samples were allowed to stand at laboratory temperature for 15 min, followed with washing twice with binding buffer. Assay of cell migration This was carried out using a Transwell chamber membrane with 8.0 μm pores in 24-well plates (Millipore). 2 × 10 4 cells in 100 uL serum-deficit medium were seeded onto the upper part of the Transwell chamber, while the lower chamber contained 1 mL 10 % FBS medium. After culturing for 8 h, the medium was discarded, and the non-migrating cells were wiped off with cotton bud, while migrative cells on the lower chamber were stained with 0.1 % crystal violet. Statistical analysis The results are presented as mean ± SD. (SPSS 17.0). Statistical analysis was done using t-test with SPSS 17.0. Values of p ˂ 0.05 were assumed as indicative of significant differences. Table 1 shows that there was strong positive expression of RKIP in 85.0 % (68/80) of paratumor tissue specimens, while the corresponding value in GC tissue was only 18.75 % (15/80) (p < 0.001). Figure 1 A presents the results of staining for GC and non-GC tissues. The original data in TCGA revealed higher expression of RKIP in GC than in non-cancerous tissues (Figure 1 B and C). These findings suggest that RKIP might be associated with the progression of GC. RKIP inhibited cell proliferation and enhanced chemosensitivity to 5-FU To elucidate the role of RKIP in GC, MGC-803 and SGC-7901 cell lines were used to conduct further functional studies. In the first step, RKIP, sh-RKIP or NC was transfected into GC cells in order to unravel the gain-or loss-of-function effect on GC cell proliferation. Western blot and RT-PCR assays indicated that the transfection efficiency of RKIP increased 3 to 4 times, while the transfection efficiency of sh-RKIP decreased 2 to 3 times, when compared to control cells ( Figure 2 A -F). To determine the influence of RKIP on the malignant behavior of the GC cell lines, in vitro motility assays were carried out. Results from MTT assay indicated that combined treatment with RKIP+5-FU markedly suppressed the proliferation of the two cell lines, when compared with the control. This situation was reversed by shRKIP+5-FU where FU markedly suppressed the proliferative potential of the GC cells, when compared with the RKIP knockdown plasmid (Figure 3 A -D). Thus, RKIP blocked the proliferation of the GC cells and enhanced their sensitivity to 5-FU. A combination of RKIP and 5-FU promote apoptosis of GC cells Based on the results of MTT assay, flow cytometry was used to assess the apoptotic influence of RKIP on GC cell lines. Double staining of infected MGC-803 cells with Annexin V-FITC and PI showed obvious increases in apoptosis in RKIP-overexpressing cells, when compared with control cells, while transfection with RKIP inhibitor decreased the population of apoptotic MGC-803 cells. When 5-FU was added, there was an enhancement in proportion apoptotic cells, relative to non-chemotherapy drug group. 
Thus, RKIP promoted apoptosis GC cell line MGC-803, and enhanced the sensitivity of the cells to 5-FU (Figure 4 A and B). Similar results were obtained with SGC-7901 cell line ( Figure 5 A and B). RKIP promoted apoptosis via regulation of apoptosis-related factors in GC Data showed that RKIP exerted tumorsuppressing effect by regulating cell apoptosis of the GC cell lines. The protein expressions of caspase-3 and Bax were downregulated in GC cells when transfected with RKIP plasmid. On the other hand, Bcl-2 expression was increased in RKIP-overexpressing cells, and suppressed in RKIP-inhibited cells. Moreover, 5-FU consistently enhanced the protein expressions of Bax and caspase-3, while inhibiting that of Bcl-2 ( Figure 6 and Figure 7). Quantitative RT-PCR (qRT-PCR) was also used to determine the mRNA profiles of Bcl-2, caspase-3 and Bax. It was found that overexpression of RKIP inhibited Bcl-2 expression, while it promoted the protein expressions of Bax and cleaved-caspase-3 in MGC-803 and SGC-7901 cells. The chemotherapy drug 5-FU promoted mRNA expressions of Bax and caspase-3, while it inhibited mRNA expression of Bcl-2 ( Figure 8 and Figure 9). Figure 11). Thus, the observed anti-metastatic capacity may be due to targeted control of malignant behavior in the GC cells. DISCUSSION One vital factor in the prognosis of GC, a disease which accounts for most tumor-associated deaths, is metastatic change in the lymph node. Studies have established that RKIP inhibits metastasis in many types of cancer [12][13][14]. In this study, the levels of RKIP were directly correlated with cellular tumorigenicity and susceptibility to apoptosis. The MGC-803 and SGC-7901 cells exert tumorigenicity in nude mice, and they express limited amounts of RKIP, but this may be markedly upregulated with chemotherapy. It has been suggested that sensitization may result from enhancement of death signal pathway [15][16][17]. On the other hand, RKIP may render tumor cells susceptible to apoptotic changes via suppression of their proliferation, migration and invasion. It is not clear if the sensitization effect of RKIP on proliferation or apoptosis is specific to GC cells. The results from studies on GC cell lines MGC-803 and SGC-7901 revealed low RKIP concentrations which were not significantly affected when the cells were exposed to DNA impairment. Consistent with the apoptosisinducing effect of RKIP, there was low level of apoptosis following 2 days of exposure to DNAimpairing drugs at doses that induced aggravated apoptotic changes in MGC-803 cells. However, the anti-tumor agent 5-FU induced apoptosis in MGC-803 and SGC-7901. Therefore, these results are in agreement with the view that RKIP is involved in apoptosis, and they suggest that the expression of RKIP is probably controlled via multiple routes following exposure to apoptotic agents. Moreover, it was shown that normalization of RKIP concentrations in GC cell lines triggered cell proliferation, migration and apoptosis. This study has demonstrated that RKIP inhibited cell colony formation and invasion of GC cells. These results suggest that downregulation of RKIP might promote the conversion of a normal cell to a tumor cell. Consistent with results from in vitro studies on the GC cell lines, it was also shown that the expressions of RKIP were downregulated in the GC cells, when compared with normal cells. 
These findings are consistent with those obtained recently in a study which identified RKIP as a new and medically-important inhibitor of metastasis in prostate carcinoma. The results of the present study suggest that cancer cell metastasis may be suppressed using druginduced expression of RKIP, leading to enhancement of apoptosis. In the present study, it has been demonstrated that chemotherapy-induced rapid upregulation of RKIP-triggered apoptosis in human gastric cancer cells. However, in tumor cells insensitive to DNA-damaging drugs, exposure to 5-FU did not upregulate RKIP expression. In contrast, ectopic expression of RKIP sensitized these cells to apoptotic changes, while RKIP downregulation conferred insensitivity to 5-FU by relieving its suppressive effect on two main survival routes in tumors. These results indicate that RKIP is a new indicator of apoptosis in cancers. CONCLUSION These results suggest that RKIP suppresses cell proliferative as well as cell migratory and invasive capacity in GC cell lines. Thus, it may be reasonably hypothesized that RKIP may serve as an inhibitor gene in human GC. Thus, it is a new index of prognosis, and a therapeutic target for gastric cancer.
2021-01-30T06:41:15.721Z
2021-03-15T00:00:00.000
{ "year": 2021, "sha1": "7b627d973e3c8f252268c3b6af1852af46fac85b", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/tjpr/article/download/204683/193004", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7b627d973e3c8f252268c3b6af1852af46fac85b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
14866434
pes2o/s2orc
v3-fos-license
Epigenetic regulatory functions of DNA modifications: 5-methylcytosine and beyond The chemical modification of DNA bases plays a key role in epigenetic gene regulation. While much attention has been focused on the classical epigenetic mark, 5-methylcytosine, the field garnered increased interest through the recent discovery of additional modifications. In this review, we focus on the epigenetic regulatory roles of DNA modifications in animals. We present the symmetric modification of 5-methylcytosine on CpG dinucleotide as a key feature, because it permits the inheritance of methylation patterns through DNA replication. However, the distribution patterns of cytosine methylation are not conserved in animals and independent molecular functions will likely be identified. Furthermore, the discovery of enzymes that catalyse the hydroxylation of 5-methylcytosine to 5-hydroxymethylcytosine not only identified an active demethylation pathway, but also a candidate for a new epigenetic mark associated with activated transcription. Most recently, N6-methyladenine was described as an additional eukaryotic DNA modification with epigenetic regulatory potential. Interestingly, this modification is also present in genomes that lack canonical cytosine methylation patterns, suggesting independent functions. This newfound diversity of DNA modifications and their potential for combinatorial interactions indicates that the epigenetic DNA code is substantially more complex than previously thought. Background To establish and maintain cellular identity during development, specific memory mechanisms have evolved that regulate gene expression patterns epigenetically. Once determined, these lineage-specific expression profiles have to be maintained through cell divisions. Active or inactive states of gene expression are defined by specific epigenetic modification patterns that are either accessible to transcription factors and activators, or result in a closed chromatin structure that prevents activated transcription [1][2][3]. Central to this is the concept of epigenetic marks, specific DNA or chromatin modifications that can be inherited through cell divisions. These marks maintain the epigenetic information and serve as interaction sites for specific binder or reader proteins, which include epigenetic modifier enzymes, repressors, chromatin remodeling complexes and the transcription machinery. The most prominent of these marks is the methylation of the carbon-5 of cytosine (5mC), which is traditionally considered incompatible with activated transcription when present near gene regulatory regions. At these regions, 5mC can modulate the binding of transcription factors [4,5] or induce the binding of specific 5mC-binding proteins that can lead to the recruitment of co-repressor complexes to methylated target promoters [6]. While there is an enormous number of published studies on epigenetic modifications, most of them are correlative in nature. This is exemplified by the increasing use of powerful genome-wide mapping technologies that have revealed numerous associations between changes in epigenetic modification patterns and cell fate transitions [7][8][9]. However, functional insight remains relatively limited. Furthermore, the field has broadened significantly through the discovery of two additional DNA modifications with epigenetic regulatory functions, 5-hydroxymethylcytosine (5hmC) and N6-methyladenine (6mA), as well as the identification of the corresponding modifying enzymes ( Figure 1). 
Figure 1 DNA modifications with epigenetic regulatory functions and their interdependencies. Cytosine (C) is methylated to 5-methylcytosine (5mC) by DNA methyltransferases (DNMT) and then further oxidised to 5hmC, 5fC and 5caC by Tet dioxygenases. 5-Hydroxymethyluracil (5hmU) is produced by Tet-catalysed oxidation of thymine (T). N6-methyladenine (6mA) is likely catalysed by DNA N6 adenine methyltransferases (DAMT-1 in C. elegans), even though the biochemical activity of these enzymes remains to be characterized. The Tet-like ALKB enzymes NMAD (N6-methyl adenine demethylase 1) and DMAD (DNA 6mA demethylase) have been shown to be involved in 6mA demethylation in C. elegans and in Drosophila, respectively, possibly by using a conserved dioxygenase mechanism.
Our review aims to illustrate the epigenetic regulatory functions of these DNA modifications, with a predominant focus on animal models. Epigenetic regulation in plants has recently been reviewed elsewhere [10][11][12]. 5-Methylcytosine has been termed the "fifth base" of the human genome. This reflects the relatively high abundance of this modification, as about 4% of the cytosine residues in the human genome have been found to be methylated. However, cytosine methylation levels can differ greatly among animal genomes (see below), and it would therefore be misleading to define the significance of 5mC by its abundance. Rather, the key feature of cytosine methylation is its enrichment or even specificity for "symmetric" CpG dinucleotides [13]. Symmetric methylation means that methylation marks are present on both strands of DNA and that methylation patterns can be faithfully propagated through DNA replication by copying from the parental strand to the unmethylated newly synthesized strand. This methylation maintenance is carried out by the Dnmt1 DNA methyltransferase, which has a strong preference for hemimethylated DNA and provides a key paradigm for the stability and heritability of epigenetic information [14]. Dnmt1 is complemented by the Dnmt3 DNA methyltransferases, which do not show any selectivity for hemimethylated DNA and have therefore been termed "de novo methyltransferases" [14]. Together, both enzymes catalyze the establishment and maintenance of cytosine DNA methylation patterns during animal development and cell fate specification. 5-Methylcytosine: the fifth base While the overall specificity of animal methylation patterns for CpG dinucleotides has been confirmed in numerous studies, several notable exceptions have also been described. A prominent example is non-CpG methylation in mouse embryonic stem cells (ESCs), which was verified in the first genome-wide methylation analysis of ESCs [15]. While levels of non-CpG methylation are very low in most somatic tissues, extensive postnatal accumulation of this modification has been observed in the mouse and human brain [16][17][18]. Targeted depletion of Dnmt3a in specific brain regions resulted in significant reduction of non-CpG methylation [18,19]. In contrast to ESCs, where non-CpG methylation seems to correlate with gene expression [15], the modification exhibited an inverse correlation with transcription in neurons, which could partly be explained through the recruitment of the methyl-CpG binding protein 2 (MeCP2) [18,19]. Context-dependent non-CpG methylation might therefore have an impact on specific readers of DNA methylation, thus influencing tissue-specific gene expression. Beyond mammalian methylomes, the comparative analysis of single-base resolution methylation maps has shown a substantial degree of variation between animal species [15,20,21]. The available information can be used to define three major categories ( Figure 2): the first group is defined by mammalian methylomes and is characterized by pervasive methylation. In the human genome, more than 80% of the CpG dinucleotides are methylated, creating a landscape of ubiquitous methylation, but with local gaps that are often found at active regulatory elements, such as promoters and enhancers ( Figure 2). It seems plausible to assume that the default state of these methylomes is "methylated" and that active mechanisms (see below) are required to keep specific regions free of methylation. The second group is exemplified by the honeybee methylome, which can be defined by only 60,000 CpG-specific methylation marks that are highly enriched in exons [22]. In this case, the default state of the genome appears to be "unmethylated" and the selective targeting of DNA methyltransferases to specific CpGs would be a key step for shaping the methylation landscape ( Figure 2). Such sporadic methylation patterns have been described in several animals, particularly in insects. However, the functional significance of sparse methylation remains to be fully understood, which is largely due to the limited potential of the corresponding organisms for genetic manipulation. Importantly, it has been shown that queen-like phenotypes can be enhanced in honeybees following siRNA-mediated knockdown of the Dnmt3 orthologue [23]. While the mechanisms underlying this phenomenon remain to be elucidated, these results strongly suggest a functional role of this enzyme in caste specification, possibly through the modulation of caste-specific methylation patterns. Finally, several animal genomes have failed to reveal canonical cytosine methylation patterns (Figure 2), which implies that 5mC is not essential for development and cell fate specification of well-known laboratory models such as S. cerevisiae, S. pombe, C. elegans and D. melanogaster [24]. The absence of conserved cytosine methylation patterns in these organisms was instrumental for the identification and characterization of other epigenetic mechanisms, including covalent histone modifications and small noncoding RNAs [25][26][27]. Moreover, it also played an important role in the recent discovery of N6-methyladenine as an epigenetic DNA modification in eukaryotes (see below). The functional analysis of cytosine methylation has proven to be surprisingly complex and difficult, even in well-characterized mammalian organisms. While knockout models demonstrated a role of Dnmt1 and Dnmt3 in mouse development [28,29] and in general epigenetic phenomena, such as genomic imprinting [30], X-chromosome inactivation [31] and transposon control [32], the specific function of cytosine methylation in epigenetic gene regulation remains to be fully understood. However, recent integrative studies that combine the targeted disruption of Dnmt genes with genome-wide mapping approaches have provided interesting insight into the functional specificities of individual Dnmts. For example, Dnmt3a-mediated gene body methylation at transcriptionally active genes was shown to be prevalent in postnatal neuronal stem cells and is required for postnatal neurogenesis [33]. In addition, other Dnmts were found to interact with actively transcribed gene bodies, suggesting that gene body methylation promotes transcription [34].
Most recently, Dnmt3b-mediated gene body methylation in mouse ESCs was shown to depend on the presence of histone H3 lysine 36 methylation in the same regions [35]. This represents a novel and unexpected feature of de novo methyltransferases, as it suggests the recruitment of cytosine methyltransferases by the co-transcriptional modification of histones. In another study, it was shown that human embryonic stem cells lacking both DNMT3A and DNMT3B progressively lose cytosine methylation marks, thus illustrating an imperfect maintenance activity of DNMT1 and a supporting role of DNMT3 enzymes in maintenance methylation [36]. Similar results were obtained with Dnmt-deficient mouse ESCs, which also revealed differential specificities of Dnmt1 and Dnmt3a/b for distinct subclasses of retrotransposons [37]. Further analyses of human ESCs revealed a novel role of DNMT3A in the hypermethylation of genes associated with endoderm differentiation and a rapid, replication-dependent loss of global DNA methylation in DNMT1-deficient cells [36]. It will be important to use similar approaches for the characterization of additional cell types and model systems in order to fully understand the epigenetic regulatory function of 5mC. 5-Hydroxymethylcytosine: oxidation creates a new modification With the discovery of the catalytic dioxygenase activity of Ten eleven translocation (Tet) proteins, novel epigenetic DNA modifications started to emerge [38,39]. 5-Hydroxymethylcytosine (5hmC, Figure 1) was originally discovered in mammalian DNA in 1972 [40], but its biological significance was investigated only almost 40 years later [41]. Cytosine hydroxymethylation levels are often around 0.1% in mammalian tissues, but can vary greatly [42], with highest values in the brain, where up to 1% of the cytosines can be hydroxymethylated [41]. The three mammalian Tet homologues generate 5hmC from existing 5mC, which they can further process to 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC, Figure 1) [43,44]. About 30,000 molecules of 5mC, 1,300 of 5hmC, 20 of 5fC, and 3 of 5caC were found per million Cs in mouse embryonic stem cells [44,45], indicating a very low abundance of 5fC and 5caC. As both modifications are targeted by base excision repair mechanisms mediated by thymine-DNA-glycosylases, they are mainly interpreted as intermediates of an active demethylation pathway via Tet-dependent 5mC oxidation [43,44]. We are only beginning to understand the functional significance of 5hmC as an epigenetic mark and the specific roles of the three Tet enzymes. Tet1 and Tet2 are highly expressed in mouse ESCs, but their single depletion does not affect pluripotency or development [46][47][48][49]. Tet3 homozygous mutant mice develop properly, but die at birth [50], suggesting that Tet3 is also dispensable for embryonic development. ESCs deficient for both Tet1 and Tet2 show insignificant levels of 5hmC, but retain pluripotency. However, the majority of mice lacking both proteins showed developmental defects, which was found to be associated with ectopic hypermethylation [51]. Combined deficiency of all three Tet proteins in ESCs depleted 5hmC completely, but did not affect ESC viability and pluripotency [52][53][54]. Nevertheless, triple knockout ESCs and embryoid bodies showed impaired differentiation potential, promoter hypermethylation and correlated deregulation of genes implicated in embryonic development and differentiation [52]. 
In agreement, severe defects in somatic cell reprogramming and mesenchymal-epithelial transition have been described in double and triple Tet knockout mouse embryonic fibroblasts [53]. These data point to a major role of Tet-mediated oxidation in DNA demethylation, most likely by keeping regulatory genomic regions free of 5mC. Particularly important are enhancers, that have been shown to be hypermethylated in Tet-deficient mouse ESCs, resulting in a reduced activity of associated differentiation genes [54,55]. Tet-dependent oxidation of 5mC as a first step of active demethylation is therefore an early event of enhancer activation [54][55][56], but might also more generally allow functional interactions with regulatory DNA elements and counteract aberrant spreading of DNA methylation into CpG islands [57]. Nevertheless, 5hmC was also found as a relative stable base at a subset of mammalian promoters, at gene bodies of actively transcribed genes and at poised and active enhancers [58,59]. 5fC was also mapped to a subset of these 5hmC-marked regions [60][61][62], suggesting a role as an independent epigenetic mark. Indeed, several "reader" proteins for oxidised 5mC-derivatives have been identified, which might mediate epigenetic regulation [63,64]. Among these were, in addition to DNA damage-and repair-related proteins, chromatin modifiers and transcriptional regulators like e.g. MBD3, MeCP2, UHRF2 and FOX transcription factors [64][65][66]. While the functional relevance and specificity of the interactions remains to be fully understood (e.g. many 5hmC interacting proteins also have significant affinities for 5mC) these readers might recruit chromatin regulatory complexes to their targets and support activated transcription. A role of 5hmC as active mark is supported by mass spectrometric analyses of isotope labelled DNA form mammalian cell culture and mice showing that 5hmC is mostly a stable modification and not a transient intermediate [67]. The high abundance in post-mitotic brain tissues [41,42] also suggests a direct epigenetic function of 5hmC. Indeed, 5hmC levels increase during neuronal differentiation and a very stable intragenic enrichment of 5hmC was observed at many active neuron-specific genes [66,[68][69][70]. These findings suggest that 5hmC functions as epigenetic mark in mammalian neuronal development. This is further supported by the observations that the activated human HOXA cluster becomes stably enriched in 5hmC upon retinoic acid stimulated neuronal differentiation [71] and that increased 5hmC levels at neuronal marker genes in Sirtuin-6-deficient mice induce skewed differentiation versus neuroectoderm [72]. While there is evidence for a direct epigenetic function for 5hmC at least in some tissues, a similar role for its oxidation derivatives appears less likely. The levels of 5fC and 5caC have been found to increase at 5fC sites in thymine-DNA-glycosylase-deficient mouse ESCs, suggesting that 5caC sites primarily represent sites of active demethylation [60][61][62]. It remains possible that, due to the chemical differences between the oxidised 5mCderivatives, each modification might attract specific readers. However, considering the relatively strong DNAdamage response triggered by 5fC and 5caC (in contrast to 5hmC) and their very low abundances, it seems more likely that these modifications transiently accumulate at the regions of the hydroxymethylome that undergoes demethylation. 
In contrast, a subset of 5hmC sites appears to be stable and might act as an independent epigenetic mark. Very recently, it has been shown that Tet proteins can also oxidize thymine to 5-hydroxymethyluracil (5hmU, Figure 1) [73]. Tet-dependent 5hmU is present at levels similar to 5caC in mESCs, increases during early ESC differentiation and recruits specific interacting proteins [73], suggesting an epigenetic function for Tetdependent 5hmU. Nevertheless, 5hmU paired with adenine is a target for the Smug1 DNA glycosylase [74] and might therefore trigger base excision repair mechanisms. Indeed knock down of Smug1 in mESCs led to increased 5hmU levels [73], indicating that 5hmU might also serve to promote active demethylation by recruiting repair factors to Tet targets. N6-methyladenine: revival of an old acquaintance In bacterial genomes 5mC is outshined by a second base modification, N6-methyladenine (6mA, Figure 1). Adenine methylation has been shown to be essential for the viability of several bacteria, as methylation of GATC sequences by the Dam methylase creates specific marks that are important for DNA replication, chromosome segregation, mismatch repair and the regulation of gene expression [75,76]. However, several older studies also suggested the presence of 6mA in eukaryotic genomes, even though detection was often indirect and modification levels appeared close to the detection limit [76]. Several unicellular eukaryotes, including the green alga Chlamydomonas reinhardtii, had consistently shown comparably high levels of DNA adenine methylation [76], which established this organism as an attractive model to investigate 6mA further. Over the past few years, several powerful technologies were developed to analyze 6mA in RNA, where this modification plays an important regulatory role. When these methods were adapted to characterize the distribution of 6mA in the Chlamydomonas genome, some key characteristics of this modification could be defined [77]. For example, the results showed that the algal adenine methylome consists of about 85,000 fully methylated 6mA sites, corresponding to a global adenine methylation level of approximately 0.4%. Methylation was often found in symmetric ApT target sequences, but there was no evidence for symmetric 6mA methylation. The modification was enriched at promoter regions, and particularly in linker regions between adjacent nucleosomes. The authors propose a model in which the DNA 6mA modification either restricts or marks the positions of nucleosomes near transcriptional start sites in Chlamydomonas. As such, the presence of 6mA may position nucleosomes to facilitate initiation of transcription. While these findings are highly interesting, they are difficult to generalize because of a highly specific periodic pattern of nucleosome occupancy around transcriptional start sites in Chlamydomonas. Furthermore, the Chlamydomonas genome has an unusual pattern of 5mC: it is characterised by low levels of CpG methylation but also contains CHG and CHH methylation in gene bodies, which corresponds to known plant methylation patterns [20]. A parallel study also revealed novel details of adenine DNA methylation in Caenorhabditis elegans [78]. Similar to Chlamydomonas, adenine methylation was found to be variable, and maximum levels were rather low (0.3%). Mapping of 6mA residues by SMRT sequencing revealed that methylation was targeted to GAGG and AGAA consensus sequences, indicating strand-specific adenine methylation. 
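The consensus-sequence observation above lends itself to a simple computational check. The following is a minimal, illustrative Python sketch, not code or data from the cited study: the toy sequence, the candidate positions and the motif_hits helper are invented here purely to show how one might count candidate 6mA calls falling within GAGG or AGAA motifs on the forward strand.

```python
# Illustrative sketch (hypothetical data): count candidate 6mA positions
# (0-based indices of adenines) that lie inside GAGG or AGAA consensus motifs.
MOTIFS = ("GAGG", "AGAA")

def motif_hits(sequence, positions, motifs=MOTIFS):
    """Return how many candidate positions fall within any motif occurrence."""
    sequence = sequence.upper()
    covered = set()
    for motif in motifs:
        start = sequence.find(motif)
        while start != -1:
            covered.update(range(start, start + len(motif)))
            start = sequence.find(motif, start + 1)
    return sum(1 for p in positions if p in covered)

# Toy example: two of the three hypothetical 6mA calls lie in consensus motifs.
toy_seq = "TTGAGGCATAGAACCTAAG"
toy_positions = [3, 11, 17]
print(motif_hits(toy_seq, toy_positions))  # -> 2
```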
Interestingly, 6mA accumulated in worms deficient for spr-5 (coding for a H3K4me2 demethylase), an important paradigm of trans-generational epigenetic inheritance [78]. Further work led to the identification of a C. elegans DNA adenine demethylase (Nmad-1), belonging to the ALKB family of dioxygenases that also contains the Tet proteins. In addition, the authors identified a candidate DNA adenine methyltransferase (Damt-1) related to bacterial 6mA DNA methyltransferases. This enzyme belongs to a highly conserved family of proteins that is characterized by a C-terminal circularly permuted methyltransferase domain fused to a distinctive N-terminal domain [79]. While the biochemical activity of the enzyme remains to be characterized, deletion of Damt-1 suppressed the trans-generational phenotypes of spr-5 mutant worms, suggesting that 6mA might be a transgenerationally inheritable epigenetic mark. Additional insight into the function of adenine methylation came from a recent analysis in Drosophila. Flies represent a particularly interesting model for DNA modifications, because of the longstanding controversial discussions surrounding the cytosine methylation status of the Drosophila genome. In addition, the fly genome encodes an unusual DNA methylation machinery, with no canonical Dnmt1/3 homologue, but with a clear Tet homologue. The former is consistent with the reported absence of Dnmt-dependent cytosine methylation patterns in Drosophila [24,80], but the latter seemed to indicate that methylation may have been overlooked so far. By using highly sensitive mass spectrometry approaches, Zhang et al. have now demonstrated the presence of low (0.07%) but significant levels of adenine methylation during the earliest stages of Drosophila embryogenesis [81]. Most interestingly, the authors showed 6mA demethylation by the Drosophila Tet homologue DMAD in vitro and a specific increase of 6mA levels in the genomic DNA of DMAD mutants suggesting that DMAD is a 6mA-specific enzyme [81]. Furthermore, both deletion and overexpression of DMAD resulted in lethality, thus demonstrating an important developmental function of 6mA in Drosophila. One such function could be the regulation of transposons, as 6mA appeared enriched in transposon regions and transposons marked with 6mA were derepressed in DMAD mutants. Taken together, if 6mA will also be found in significant quantities in the genome of other eukaryotes, it might turn out to be an important carrier of epigenetic information, involved in the regulation of gene expression and possibly playing a complementary role to 5mC at certain loci or during specific stages of development. Conclusions Epigenetic DNA modifications generally affect the accessibility of genomic regions for regulatory proteins or protein complexes, for example by preventing interactions or by recruiting specific readers. Consequently, this can influence the chromatin structure and/or directly regulate enhancer and promoter activity or transcriptional processivity. Cytosine methylation is so far the only known symmetric modification with an established maintenance mechanism, which represents a unique feature that currently distinguishes 5mC from all other epigenetic modifications. 
5mC has mostly been related to gene repression, in particular at enhancer and promoter regions of genes ( Figure 3), but might also play an important role in positively influencing transcription, either by recruiting methylation-specific transcription factors [82,83] or by a yet to be understood mechanism when present in the body of active genes [35]. Dynamic epigenetic processes also require the active removal of a mark. With the discovery of the enzymatic functions of the Tet proteins, the main enzymes for the removal of DNA methylation were identified. 5hmC and its Tet-dependent oxidation products are demethylation intermediates, but might also have significant roles as independent epigenetic marks ( Figure 3). Specific readers for 5hmC, 5fC and 5caC have been identified that function in transcription regulation and chromatin remodeling, mostly promoting the active state. In addition, 5fC, 5caC and 5hmU might primarily function in the recruitment of DNA repair-associated complexes and thus enhance demethylation ( Figure 3).
Figure 3 (legend excerpt): 5mC (5-methylcytosine) is a repressive mark at enhancers and promoters, is enriched in active gene bodies, and recruits specific binders.
Finally, these marks might also directly contribute to gene regulation by triggering "scheduled" DNA repair, which has been suggested to be coupled with activated transcription [84]. The discovery of 6mA in eukaryotes recently identified an additional methylation mark (Figure 3). With C. elegans and D. melanogaster, two species with negligible 5mC/5hmC levels were shown to contain low, but significant genomic 6mA levels. In both species, this novel modification can be cautiously interpreted as an active epigenetic mark, as data from C. elegans suggest a functional interplay with an established active histone mark (H3K4me2) [78], whereas in Drosophila mutations in the 6mA-demethylase DMAD (a Tet homologue) caused increased transposon expression [81]. In both organisms, mutations in the 6mA-specific enzymes resulted in significant phenotypes (developmental defects, infertility), suggesting important roles in development. Also in Chlamydomonas, 6mA marks actively transcribed genes near the transcriptional start site (TSS). Future research needs to address the conservation of 6mA and the enzymes that can set and remove this modification. Interestingly, the candidate C. elegans 6mA methyltransferase Damt-1 belongs to a widely conserved family of enzymes [78] that also includes a human homologue (METTL4). Nevertheless, reports on 6mA in higher eukaryotes have been sparse and the results were often inconclusive [76]. Highly sensitive mass spectrometry detected less than one molecule of 6mA per million nucleotides in DNA from selected mouse tissues [85], suggesting that 6mA is not a constitutive modification, or is rapidly turned over by demethylation processes. It might be possible to enrich 6mA by depleting the 6mA-demethylase, as shown for Drosophila [81]. Furthermore, additional enzymes potentially involved in adenine methylation and demethylation in mammals can be identified using genome editing tools. Finally, the observation that 6mA demethylation in Drosophila can be mediated by a Tet-like enzyme [81] raises the fascinating possibility that cytosine and adenine (de)methylation are coordinated. It will be most interesting to investigate the potential interplay between specific DNA modifications and to explore the full complexity of this epigenetic code.
2017-08-03T02:27:32.973Z
2015-07-21T00:00:00.000
{ "year": 2015, "sha1": "47d8bb3f2dd822b16304af88ab2480c3b079d354", "oa_license": "CCBY", "oa_url": "https://epigeneticsandchromatin.biomedcentral.com/track/pdf/10.1186/s13072-015-0016-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a43e2893c109f778fdc9ebbde30a62da37d7962", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
94659457
pes2o/s2orc
v3-fos-license
An odorless and efficient synthesis of symmetrical thioethers using organic halides and thiourea in Triton X10 aqueous micelles Abstract The synthesis of symmetrical thioethers using organic halides and thiourea in Triton X10 aqueous micelles under basic conditions has been described. Primary alkyl, allyl and benzyl halides can be converted efficiently into symmetrical thioethers in high yields. The entire route is an almost odorless process, and the protocol is applicable to large-scale operation without any problem. Introduction The development of new, efficient and environmentally benign synthetic protocols for the formation of C-S bonds is an important target in modern organic synthesis (1-4). As a result, many recent reports focus on using organic halides and thiols to form C-S bonds (5-8); however, the use of highly volatile and foul-smelling thiols leads to serious environmental and safety problems. Recently, it has been found that thiols can be replaced with odorless, non-toxic thiourea as a sulfur source for the formation of C-S bonds (9-11). Water is generally considered an ideal, "green" solvent for organic transformations based on cost, safety, and environmental impacts. As a result, designing organic reactions in water has become one of the most attractive areas in Green Chemistry (12,13). The poor solubility of many reactants in water, however, is the main obstacle in the use of water as a reaction medium, as it may inhibit reactions due to phase separation and inefficient mixing of reactants (14). To solve this problem, surfactants are added to form aqueous micelles, which can absorb reactants into a microheterogeneous system (15). This can change the reactants' physical properties, quantum efficiencies and reactivities (16), thus accelerating many organic reactions that would be relatively slower in aqueous medium. This is referred to as "micellar catalysis" (15-18). As a part of our interest in using aqueous micelles as a reaction medium, the synthesis of symmetrical thioethers, which in industry is often performed in hot alcohol using organic halides and toxic alkali metal sulfides, has been carried out in Triton X10 (TX10) aqueous micelles using organic halides and thiourea under basic conditions. To the best of our knowledge, there is no report about the synthesis of symmetrical thioethers using organic halides and thiourea in water. Results and discussion Initially, we wanted to perform the reaction of benzyl chloride and thiourea to produce benzyl disulfide in 10 wt% TX10/H2O using MnO2 as an oxidant. Surprisingly, the reaction failed to provide the desired product and instead yielded benzyl thioether as the final product. Benzyl thioether was also obtained in the absence of oxidant (Scheme 1). The reaction of benzyl chloride and thiourea was therefore selected as a model reaction to optimize the reaction conditions for the synthesis of benzyl thioether. As shown in Table 1, use of NaOH could provide a much better yield than use of Na2CO3. The strongly basic aqueous medium could enhance the reaction rate by promoting the formation of the corresponding S-alkylisothiouronium salt (Table 1, Entries 1-3) (9). After screening different surfactants, sodium dodecyl sulfate (SDS) provided a lower yield than TX10 and CTAB (Table 1, Entries 3-5).
Presumably, the abstraction of an electron from OH- by S-alkylimidothiocarbamate to form the S-alkylisothiouronium salt was impeded by the anionic micelle-forming agent SDS (9,15). TX10 is a cheap, non-toxic surfactant that is widely used in industry as an emulsifier and detergent, so further studies of this reaction were continued with TX10 aqueous micelles. To our surprise, the reaction yield of benzyl chloride with thiourea in water was quite high (63%) (Table 1, Entry 6). Interestingly, when n-heptyl iodide instead of benzyl chloride was employed under similar conditions, some differences were found. An excellent yield of 90% was obtained in 10 wt% TX10 aqueous micelles, while only a trace amount of the desired product was obtained when the reaction was run "on water" (Table 1, Entries 7 and 8). These results suggested that the success of these reactions run in the absence of a surfactant is dependent on the substrates. This could be explained by the higher reactivity of benzyl chloride compared with n-heptyl iodide, so that the reaction of thiourea and benzyl chloride could occur even "on water". The influence of TX10 concentration was also investigated (Figure 1). The reaction yields increased with TX10 concentration due to the enlargement of the interfacial area and lower mass transfer resistance (15,19). A satisfactory yield (87%) was obtained with 10 wt% TX10 aqueous micelles, and no significant change in yield was observed at higher surfactant concentration. Due to the huge interfacial area in micelles, organic halides could efficiently come into contact with thiourea, and the micelle droplets formed by TX10 were hydrophobic enough to exclude water molecules (5), making it easier to form the S-alkylisothiouronium salt. Thus, the reaction occurred easily within a micelle, which functions as a micro- or nano-reactor. With the optimized conditions in hand, the reactions of various organic halides with thiourea were performed to ascertain the generality and scope of the protocol. Benzyl halides and allyl halides were easily transformed into the corresponding thioethers with good to excellent yields at 30 °C, but 4-nitrobenzyl chloride needed a longer reaction time and a higher reaction temperature ( halides were more reactive substrates compared with those containing long chains (Table 2, Entries 8-14), because the reactions of primary alkyl halides and thiourea were bimolecular nucleophilic substitutions (SN2), in which less steric hindrance was beneficial to the reaction rate. However, secondary alkyl halides and aryl halides failed to react with thiourea to produce S-alkylisothiouronium salts, owing to their steric hindrance effects and electronic effects, respectively (Table 2, Entries 15-19). In order to show the possibility for large-scale operations, we also scaled up the model reaction to 50 mmol, and the reaction proceeded well with an 89% yield of the desired product (Table 2, Entry 20). In addition, we also focused on investigating the amount of NaOH to further optimize the reaction conditions, making the protocol more environmentally friendly. Taking the reaction of benzyl chloride (2 mmol) and thiourea (2 mmol) as an example, 3.0, 1.0, 0.5, and 0.1 mmol of NaOH were used in the reaction, respectively. The corresponding yields were 87, 89, 84, and 63%, indicating that the amount of NaOH could be reduced to 0.5 mmol without significant change in yield.
Initial attempts were also made to synthesize an unsymmetrical thioether using iodobenzene, benzyl chloride, and thiourea catalyzed by CuI (20 mol%) in TX10 aqueous micelles. Two thioether products (1:2048:52) were obtained, indicating the high reactivity of thiourea with benzyl chloride in aqueous micelles (Scheme 2). Materials Triton X10 (polyoxyethylene (10) octylphenyl ether, TX10) was purchased from the petrochemical plant of Jiangsu Haian. Alkyl bromides were obtained from the corresponding alcohols. All other chemicals (AR grade) were commercially available and used without further purification. Typical procedure for the reaction of benzyl chloride and thiourea in TX10 aqueous micelles To a solution of 10 wt% TX10 aqueous micelles (5 mL) were added benzyl chloride (2 mmol), thiourea (2 mmol), and NaOH (3 mmol) at 30 °C. After consumption of benzyl chloride, which was monitored by gas chromatography (GC), the reaction mixture was extracted with petroleum ether (5 mL × 3). The organic layer was collected, dried, and concentrated under reduced pressure to yield the crude product, which was further purified by flash column chromatography on silica gel (petroleum ether). Other symmetrical thioethers were synthesized using a similar procedure. Characterization 1H-NMR spectra were recorded on a Bruker DRX500 (300 MHz). GC analyses were performed on an HP4890 gas chromatograph equipped with a flame ionization detector (FID) using an Agilent Technologies HP-5 column (15 m × 0.530 mm) and a timed program beginning with 1 min at 70 °C, followed by a 20 °C min-1 ramp to 260 °C, then 20 min at this temperature. Gas chromatography-mass spectrometry (GC-MS) data were obtained by using a Saturn2000 GC/MS series. The HP 5890 GC was equipped with a CPSTIL-8CB mass selective detector. Mass spectra in the electron impact mode were generated at 70 eV in scan mode over the range of 40-250 amu. Elemental analyses were performed on a Yanagimoto MT3CHN recorder. All products were known compounds identified by comparing their mass spectra with those contained in the mass spectrometer data system library and previously published literature. The selected data for benzyl thioether ( Conclusion In conclusion, we have described a pronounced catalytic effect of TX10 aqueous micelles for the synthesis of symmetrical thioethers using organic halides and thiourea under basic conditions. The protocol can be applied to large-scale operations easily due to its simple work-up procedure and the elimination of metal. In addition, the use of non-toxic, odorless thiourea instead of thiols or alkali metal sulfides makes the protocol more eco-friendly.
2019-04-04T13:13:44.025Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "f2786d29d2edd1756dd233390eec76015fd7f1b4", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17518253.2012.668221?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "601ca2eefa6b40981c4cc9de8651adedb61744e1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
264604934
pes2o/s2orc
v3-fos-license
Assessing quality of life in a randomized clinical trial: Correcting for missing data Background Health-related quality of life is a topic of current interest. This paper considers a randomized phase III study of radiation therapy with concurrent chemotherapy (docetaxel) versus radiation therapy alone in non-small cell lung cancer, stage III A/B. Longitudinal data on quality of life have been obtained through repeated administration of a multi-item questionnaire (EORTC QLQ-C30) developed by the European Organisation for Research and Treatment of Cancer. Missingness in the data is owing to patients having failed to complete the questionnaire at some of the scheduled filling-in times. Methods We have analysed a monotone (in terms of missingness) subset of the data as regards estimation of the mean score of a summary measure of self-reported quality of life in a hypothetical drop-out-free population at different points in time. Missingness is a difficult issue of great importance. We have therefore chosen to compare three different methods that are relatively easy to implement: the linear-increments method, the inverse-probability-weighting method and the Markov-process method. Single imputation has been applied in a supplementary analysis to fill in for all the non-consecutive missing score values prior to the execution of the estimation procedure. Results For the response in focus, the observed mean score at a certain time is larger than the estimated mean scores, which implies that the true mean score is easily overestimated unless the missingness is appropriately adjusted for. Comparison of the treatment arms shows a significant difference in mean score at the end of treatment. Conclusion Use of proper methodology developed for analysing data subject to missingness is necessary to reduce potential estimation bias. The quality of life of patients receiving radiation therapy with concurrent chemotherapy (docetaxel) appears somewhat worse than that of patients receiving radiation therapy alone in the period during which treatment is given. The conclusions are robust for the choice of statistical methods. Background Quality of life (QoL) is a rather complex multi-dimensional concept that can be defined as the degree of wellbeing felt by an individual [1]. It is commonly divided into two different components: a physical component and a psychological component. The former includes diet, health, etc., while the latter involves different emotional states such as worry, fear, sorrow and happiness. In health care it is very important to consider QoL in the course of a treatment evaluation. Since QoL is based on subjective assessments, it is not easily quantifiable, as opposed to more concrete measures like e.g. weight and blood pressure. Health-related QoL has been an area of research over the past 20 years, and several international validated selfreport questionnaires have been developed in this regard and used in longitudinal studies. A longitudinal study involves time-discrete observation of time-continuous processes, where measurements of the variables of interest are taken at consecutive points in time. These times are often represented by so-called study waves; wave 1 represents the time at which the first set of measurements is taken, wave 2 represents the time at which the second set of measurements is taken, and so on. A problem arises when study participants die, are lost to follow-up or for other reasons fail to contribute all of the planned sets of measurements. 
This resulting incompleteness of data is a challenge to the analyst, and it may lead to biased results if it is not taken into account in the statistical analysis and adjusted for in an appropriate way. The missingness is said to be of a monotone kind if a subject that fails to contribute measurements at a certain study wave, also fails to contribute measurements at all of the subsequent waves. Otherwise, the missingness is said to be of a non-monotone kind. In this paper we consider a monotone (in terms of missingness) subset of longitudinal measurements of QoL. The data are obtained from a randomized phase III study of radiation therapy with concurrent chemotherapy versus radiation therapy alone in non-small cell lung cancer (NSCLC), stage III A/B. Location of the randomization centre for this international multi-centre study was at The Norwegian Radium Hospital in Oslo, Norway. The clinical trial was approved by the Hospital Review Board, the Regional Ethics Committee and the Norwegian Medicines Agency. A total of 261 patients diagnosed with NSCLC, stage III A (inoperable) or stage III B, were included in the study between April 2000 and June 2006. Twelve of the initially included patients were later excluded from the study for not fulfilling the inclusion criteria. The final study sample thus consisted of 249 patients (157 men and 92 women) from Denmark, Finland, Norway and Swe-den. The study medication administration was divided into two different treatment arms: arm A (study arm) and arm B (standard arm). The former involved six weeks of radiation therapy, given five days a week, combined with weekly infusion of the cytotoxic drug docetaxel (Taxotere ® ), whereas the latter involved solely six weeks of radiation therapy. Upon inclusion, the patients were independently randomized to one of the two treatment arms; 119 (48%) of the patients were randomized to arm A, and 130 (52%) of the patients were randomized to arm B. Also, prior to inclusion of its first patient, each involved centre had to decide whether two courses of induction chemotherapy would be given before start of treatment, in which case the same regimen would be used for all patients included by that particular centre. Induction chemotherapy involves initial treatment by giving the patient standard chemotherapy before the start of radiation therapy with the intention to reduce the volume of the tumour (downstaging) in such a way that the radiation area is reduced. The primary objective of the study was to compare the survival time of radiation therapy combined with docetaxel versus radiation therapy alone, and the secondary objective was to compare the time to progression and QoL in the two treatment groups. Validated self-report, multiitem questionnaires have been developed by the European Organisation for Research and Treatment of Cancer (EORTC) in order to assess the QoL of cancer patients participating in clinical trials. Translated versions of the EORTC QLQ-C30 [2], supplemented by a lung cancer module, were administrated to the patients at a pre-specified set of times during follow-up: immediately before start of treatment (control week 0), at the end of treatment (control week 6), six weeks after end of treatment (control week 12), and then every 12 weeks until death, drop-out or closure of the study in January 2009. 
The EORTC QLQ-C30 includes 30 items in the form of questions regarding a patient's symptoms, health and competency to perform various daily life tasks, and in that way it covers and reflects different generic aspects of QoL. Each item is answered by circling the number corresponding to the pre-coded response option that best applies. Nineteen of the patients (9 in arm A and 10 in arm B) started induction therapy at the time of randomization. The timing of the questionnaires for these patients differed from protocol, and hence, their answers have been discarded. We have focused on item 30 in the EORTC QLQ-C30, which is given by the following question: "How would you rate your overall quality of life during the past week?". This can be regarded as a summary measure of QoL, taking integer score values in the range from 1 to 7, where scores of 1 and 7 correspond to 'very poor' and 'excellent', respectively. That is, the higher the score value, the higher the QoL as measured by this particular item. Our aim has been to estimate the mean score of item 30 in a hypothetical drop-out-free population in which every subject contributes all planned sets of measurements. Ignoring missingness present in the data might lead to biased mean score estimates, and so we have made use of different adjusting techniques. It is not obvious whether one should adjust for all missing observations, including those due to death, or whether one should only consider surviving patients. The former corresponds to analysing an immortal cohort, while the latter corresponds to analysing a mortal cohort [3]. On the surface, the mortal cohort analysis seems more reasonable, but in reality one may get a false impression of the relationship between treatments. For instance, it may be the case that one treatment improves survival, but at the cost of QoL. Hence, the treatment that is better in terms of survival may, precisely because of this advantage, come out worse in terms of QoL. Therefore, the immortal cohort analysis may be worth considering. The procedure of correcting for all missing observations, without regard to cause, can be quite sensible in many circumstances and give a more fair comparison of treatments. This will be our main approach since we indeed wish to compare arm A and arm B as regards QoL. One further note should be made regarding adjusting for mortality. In survival studies there is usually an amount of censoring due to subjects entering the study late and thus being under follow-up for just a short period of time. In these cases one will not know when death takes place, and so distinguishing between death and missingness due to other causes may not be feasible. Hence, adjusting for all missing observations may be the most clear-cut approach. However, for the disease studied here, mortality is high, and most patients have been followed until death. Therefore, we have also performed a mortal cohort analysis, where patients are removed from the study at their known death times, and we have compared this with the other analysis. The employed methodology includes three methods that rest on different assumptions. Merely using one method could then result in wrong conclusions if the relevant assumptions were not to be true. By using two or three methods, the conclusions will be more certain and robust when the respective results agree. The methodology has been implemented using the programming language Matlab ® [4]. 
Methods

In this section we introduce the statistical framework used for analysing longitudinal data subject to monotone missingness with regard to estimation of the mean of a time-continuous, discrete-valued response variable.

Notation

Consider a longitudinal study of a time-continuous response process Ỹ, taking only discrete values, and some time-continuous covariate processes X̃, which can take both discrete and continuous values. In accordance with Diggle et al. [5] and Gunnes et al. [6], we refer to the variable Ỹ(t) as the hypothetical response at time t, that is, the response that would have been recorded had the subject, possibly contrary to fact, contributed a measurement at this time. In the same way, we let X̃(t) be the hypothetical covariates at time t. Measurements of the response and covariates are scheduled for a pre-specified set of ordered times t_1, ..., t_K, where K is the total number of measurement occasions. We assume that the data are subject to monotone missingness, and the predictable time-continuous response indicator process is denoted by R. The term 'predictable' means that the value of R(t) is known at time t-, i.e. right before t. We set R(t) equal to 1 if the subject has contributed all planned measurements of the response and covariates up to, and including, time t. Otherwise, we set R(t) equal to 0. Further, we write Y(t_1), ..., Y(t_T) for the observed responses, where T ≤ K is the total number of measurement sets the subject gives rise to. Correspondingly, we write X(t_1), ..., X(t_T) for the observed covariates. The specification of the missingness and censoring schemes presented below is based on the history of the observed and unobserved processes. Following the notation of Gunnes et al. [6], we work with the past history and the strict past history of the hypothetical time-continuous response and covariate processes at time t, and, in the same way, with the past history and strict past history of the time-continuous response indicator process R at time t; since R is predictable, its past history and strict past history at time t coincide. Restricting these histories to the scheduled measurement times gives the corresponding time-discrete histories, and the past history and strict past history of the time-discrete observed response and covariate processes are defined analogously.

Missingness and censoring schemes

The methodology that we have made use of in our work is based on some assumptions regarding the response indicator process R. The missingness completely at random (MCAR) condition [7, chapter 1.3] states that the response indicator process R is independent of the hypothetical response process Ỹ and the covariate processes X̃. In other words, knowledge of all realizations of the response and covariate variables does not influence the drop-out probability. When the missingness at random (MAR) condition [8] is fulfilled, the response indicator process only depends on the observed data. This means that the probability of dropping out is unaffected by response and covariate values that are not observed. MAR is guaranteed by insisting that the response indicator process depends solely on previously observed responses and covariates. On the other hand, if the response indicator process depends on unobserved data, we have missingness not at random (MNAR). The continuous-time independent censoring (CTIC) condition [9] states, loosely speaking, that conditioning on the subject still being under observation at time t does not alter the expected infinitesimal increment of the hypothetical response, given its strict past.
A sufficient, but not necessary, condition for CTIC is that, for every time t, R(t) depends only on the strict past of the hypothetical response and covariate processes; this allows R(t) to depend on any aspect of their past except the current infinitesimal increments. A stronger condition than CTIC is the discrete-time independent censoring (DTIC) condition, which recognizes that longitudinal data are measured in discrete time: given the history up to time t_{k-1}, the expected increment of the hypothetical response over the interval (t_{k-1}, t_k) is the same whether or not we additionally condition on the subject still being under observation at time t_k. Thus, it places constraints on the expected value of the increment of the hypothetical response. A sufficient condition for DTIC is that, for each time t_k, R(t_k) depends only on information available up to and including time t_{k-1}. This implies that R(t_k) may only depend on the response and covariate processes until time t_{k-1}, and not on the interval (t_{k-1}, t_k) [6]. The DTIC condition may seem somewhat unrealistic, but it corresponds to what can actually be observed. Clearly, we cannot correct for the unobserved development within an interval.

The linear-increments method

The linear-increments (LI) method postulates linear models for the increments of the hypothetical response process at different times. This was first proposed by Diggle et al. [5] for continuous-valued response variables. Gunnes et al. [6] discuss the LI technique for discrete-valued response variables, for which the model at time t_k takes the form

Ỹ(t_k) - Ỹ(t_{k-1}) = β(t_k)ᵀ Z̃(t_k) + ε(t_k),   (5)

where the predictors Z̃(t_k) are functions of the strict past and ε(t_k) is a mean-zero error term. The regression functions for the observed data are assumed to be the same as for the hypothetical data, and they are estimated for each time t_k using ordinary least squares regression. For every subject, the mean hypothetical response at time t_k is estimated by replacing the regression functions with the ordinary least squares estimates and then, recursively, inserting previously obtained estimates into Equation (5) and calculating the cumulative sum. Finally, an estimate of the population mean of the hypothetical response at time t_k is given by the arithmetic average of all individual estimated mean hypothetical responses. The detailed procedure is given by Gunnes et al. [6].

The inverse-probability-weighting method

As the name suggests, the inverse-probability-weighting (IPW) method involves weighting the observed responses at a certain time by the inverse of the respective probabilities of measurements being taken, and thus creating a pseudo-population where no data are missing. Following Gunnes et al. [6], we let π(t_k) = Pr{R(t_k) = 1} be the probability that the subject contributes measurements of the variables of interest at time t_k, and we set π(t_1) ≡ 1 for all subjects. Further, we let p(t_k) = Pr{R(t_k) = 1 | R(t_{k-1}) = 1} be the conditional probability that the subject contributes a set of measurements at time t_k, given that a set of measurements was contributed at t_{k-1}. Under the assumption of monotone missingness, the probability that the subject contributes a set of measurements at time t_k ≥ t_2 is given by the product π(t_k) = p(t_2) p(t_3) ··· p(t_k). If the MAR condition is fulfilled, the unknown conditional probabilities can be estimated in a preliminary pooled logistic regression analysis [3,6]:

logit p(t_k) = γᵀ Z(t_k).   (8)

Here, the predictors Z(t_k) are functions of the time t_k and the previously observed responses and covariates, and γ are the corresponding time-independent regression coefficients. Subject-specific weights w(t_k) are found by taking the inverse of the respective estimated measurement probabilities. We have used "stabilized" weights [10, page 562] to reduce the variability of the estimates; the stabilized weight replaces the numerator 1 with the estimated probability that a set of measurements is taken at time t_k, calculated by including only baseline covariates in the logistic model given in Equation (8).
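To make the weighting scheme concrete, the following is a minimal sketch of how the stabilized IPW weights and the resulting weighted mean (the weighted average defined in the next paragraph) could be computed. The long-format data layout and column names are hypothetical, pandas and scikit-learn are assumed available, and a large C is used so that the logistic fits are effectively unpenalised, approximating an ordinary pooled logistic regression.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_mean_scores(df: pd.DataFrame, denom_cols: list, num_cols: list) -> pd.Series:
    """Stabilized-IPW estimates of the mean score per wave.

    `df` has one row per subject and scheduled wave (wave numbering starts at 1),
    with columns 'id', 'wave', 'observed' (the indicator R, 0/1), 'score',
    the denominator predictors (e.g. previous score, time, sex, arm, induction)
    and the numerator (baseline-only) predictors. Monotone missingness is assumed.
    """
    df = df.sort_values(["id", "wave"]).copy()
    df["prev_observed"] = df.groupby("id")["observed"].shift(fill_value=1)
    # Risk set for the conditional drop-out models: waves after the first,
    # among subjects still observed at the previous wave.
    risk = df[(df["wave"] > 1) & (df["prev_observed"] == 1)]

    denom = LogisticRegression(C=1e6, max_iter=1000).fit(risk[denom_cols], risk["observed"])
    num = LogisticRegression(C=1e6, max_iter=1000).fit(risk[num_cols], risk["observed"])

    # Conditional probabilities p(t_k); the first wave has probability 1 by definition.
    df["p_denom"] = 1.0
    df["p_num"] = 1.0
    df.loc[risk.index, "p_denom"] = denom.predict_proba(risk[denom_cols])[:, 1]
    df.loc[risk.index, "p_num"] = num.predict_proba(risk[num_cols])[:, 1]

    # Cumulative products give pi(t_k); stabilized weight = pi_num / pi_denom.
    df["pi_denom"] = df.groupby("id")["p_denom"].cumprod()
    df["pi_num"] = df.groupby("id")["p_num"].cumprod()
    df["w"] = df["pi_num"] / df["pi_denom"]

    obs = df[df["observed"] == 1]
    return obs.groupby("wave").apply(lambda g: np.average(g["score"], weights=g["w"]))
```

The final step is the weighted arithmetic average over subjects observed at each wave, which is exactly the estimator stated below.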
Finally, the population mean of the hypothetical response at time t k is estimated by a weighted arithmetic average of all observed responses: where Y i (t k ) denotes the observed response of subject i at time t k , with corresponding weight w i (t k ), and I(t k ) is the set of subjects of which measurements are taken at t k [6]. The Markov-process method The Markov-process (MP) method [6] is based on an assumption that the hypothetical response process is a If the DTIC condition is fulfilled, the discrete analogue of the time-continuous Aalen-Johansen estimator [11] of the transition probability matrix at time t k  t 2 is given by where , and equals the Udimensional identity matrix [6]. The estimated occupation probability of state v at time t k is ,..., , n t n t Here, is the empirical proportion of n subjects occupying state u at time t 1 . Finally, the population mean of the hypothetical response at time t k is given by a weighted sum of the estimated state occupation probabilities: where c u denotes the value of the hypothetical response corresponding to occupation of state u [6]. Single imputation Subjects participating in longitudinal studies occasionally fail to contribute measurements of the variables of interest while under follow-up. This can result in a considerable loss of information, especially when the employed methodology is developed for analysing monotone (in terms of missingness) subsets of the data. In order to be able to utilize more of the available data, a feasible approach is to use single imputation to fill in for all non-consecutive, i.e. isolated, missing values that are directly preceded and succeeded by observed values. Thus, a new "artificial" and more complete monotone (in terms of missingness) subset of the data is created. (Multiple imputation has not been used here since the added complexity was not deemed necessary.) In a supplementary analysis we have chosen to impute a non-consecutive missing value at time t k by the arithmetic average of the two corresponding adjacent observed values at times t k-1 and t k+1 . That is, for instance, if a subject contributes a measurement of value 4 at a certain time, fails to contribute a measurement at the following time and then contributes a measurement of value 6 at the next time, the missing value in between the two observed ones is imputed by (4 + 6)/2 = 5. The MP method is currently developed only for integervalued responses or responses that can be cast in this form. Since the non-consecutive missing values in some cases may be imputed by decimal numbers, i.e. non-integers, we have not calculated the MP estimates when single imputation has been applied prior to the data analysis. During treatment and the first couple of weeks following end of treatment, the scores reported by the patients randomized to arm A changed considerably, and so, imputation of missing values in this period using the technique described above would be inappropriate and might lead to biased mean score estimates. Therefore, missing values at the first three scheduled filling-in times of the EORTC QLQ-C30, that is, control weeks 0, 6 and 12, have not been imputed for either of the treatment arms. Results As previously mentioned, item 30 in the EORTC QLQ-C30 has been the response in focus. This item deals with the overall QoL of a patient during the past week. The observation of the response process is discrete (in time), corresponding to the filling in of the questionnaire. 
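As an aside, the single-imputation rule described in the Methods above can be sketched as follows. The data layout (one vector of scheduled scores per subject, with NaN for missing values) is a hypothetical illustration, and the first three occasions (control weeks 0, 6 and 12) are left untouched, as in the supplementary analysis.

```python
import numpy as np

def impute_isolated(scores, protected_waves=(0, 1, 2)):
    """Fill isolated missing values by the mean of the two adjacent observations.

    `scores` holds one subject's scheduled scores in wave order, with np.nan for
    missing values. Waves listed in `protected_waves` (here the first three
    occasions) are never imputed, and only gaps of length one whose two
    neighbours are observed are filled; consecutive gaps are left as missing.
    """
    out = np.asarray(scores, dtype=float).copy()
    for k in range(1, len(out) - 1):
        if k in protected_waves:
            continue
        if np.isnan(out[k]) and not np.isnan(out[k - 1]) and not np.isnan(out[k + 1]):
            out[k] = (out[k - 1] + out[k + 1]) / 2.0
    return out

# Example mirroring the text: an isolated gap between 4 and 6 becomes (4 + 6) / 2 = 5,
# while the two consecutive missing values near the end stay missing.
print(impute_isolated([3, 2, 4, 4, np.nan, 6, np.nan, np.nan, 5]))
```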
It is reasonable to believe that the expected increment of a discrete-valued response at time t k will depend on its previous value at time t k-1 , as will the probability of contributing a response measurement at time t k . In addition, we assume that sex, treatment arm and whether or not induction therapy was given will affect the response process as well as the response indicator process. In consequence, the following covariates have been included in the linear regression model of the LI method: the previous score, indicator for being a woman, indicator for being randomized to arm A and indicator for having received induction chemotherapy. Further, the following covariates have been included in the pooled logistic regression model of the IPW method: indicators for the possible values of the previous score, time, indicator for being a woman, indicator for being randomized to arm A and indicator for having received induction chemotherapy. (Note that in the analysis where single imputation has been applied, the previous score, instead of indicators for the possible values of the previous score, has been included in the pooled logistic regression model of the IPW method. The reason for this is that then the previous score value may actually be a decimal number and not an integer in the range 1-7.) Two corresponding immortal cohort analyses have been performed using the three estimation methods. Single imputation was not applied in the first analysis, whereas in the second analysis it was applied. For comparison, a mortal cohort analysis, without applying single imputation, has also been performed using the LI method. Because of the assumption of monotone missingness, only a selection of the score values in the original data set are considered to be observed in a specific analysis, and the remaining score values are thus regarded as missing. All our analyses are restricted to 198 patients (98 in arm A and 100 in arm B) whose respective score values at control week 0, that is, immediately before start of treatment, are available. Keep in mind that in the analysis where single imputation has been applied, some of the observed score values, with respect to monotone missingness, are actually missing values that have been imputed. Table 1 presents the numbers of observed score values, with respect to monotone missingness, for both treatment arms at different control weeks. The corresponding numbers of missing score values are presented in Table 2. Obviously, the numbers of observed score values decrease over time as the patients fail to answer the current question. In the same way, the numbers of missing score values increase over time. Figure 1 displays the mean score estimates, plotted against time, for both treatment arms when considering an immortal cohort. In the plot corresponding to arm A, we notice a rapid decline in the curves right after start of treatment. At control week 6, they reach a low before increasing. This sudden dip at the end of treatment is most likely due to some of the adverse effects of chemotherapy, such as nausea and discomfort, which generally lead to low score values. The curves fluctuate somewhat after control week 24. In contrast, the curves in the plot corresponding to arm B fall gradually. They begin to rise again at control week 84. Figure 2 displays the LI estimates of the mean score, plotted against time, for both treatment arms when considering a mortal cohort. 
We observe no important differences between the immortal cohort analysis and the mortal cohort analysis as regards estimation of the mean score using the LI method. Figure 3 displays the empirical standard errors of the mean score estimates (based on 1000 bootstrap samples), plotted against time, for both treatment arms when considering an immortal cohort. As expected, the empirical standard errors increase over time. The variability does not seem to differ much between the three estimation methods. Figure 4 displays the differences in the mean score estimates between arm A and arm B, plotted against time, when considering an immortal cohort, together with the corresponding 95% percentile intervals (based on 1000 bootstrap samples). In the plot corresponding to the IPW method, the lower percentile limit lies just barely on the positive side of the zero line at control week 72, which indicates a possible higher mean score in arm A. However, this is not supported by the results obtained from the other two estimation methods. Table 3 presents the numbers of observed score values, with respect to monotone missingness, for both treatment arms at different control weeks. The corresponding numbers of missing score values are presented in Table 4. By comparing the numbers in Table 1 and Table 3, we see that we get up to 4 and 6 more observed score values at a given control week in arm A and arm B, respectively, when single imputation is applied. Only a few of the score values that are gained have been imputed. The rest of them are available score values that were considered to be missing in the first two analyses where single imputation was not applied, but that now are regarded as observed because of the filling in of non-consecutive missing values preceding them. Figure 5 displays the mean score estimates, plotted against time, for both treatment arms when considering an immortal cohort. By comparing the curves in Figure 1 and Figure 5, we see that the application of single imputation prior to the data analysis has not changed the observed and estimated mean scores very much. Figure 6 displays the empirical standard errors of the mean score estimates (based on 1000 bootstrap samples), plotted against time, for both treatment arms when considering an immortal cohort. It is evident that single imputation reduces the variability of the estimates. Figure 7 displays the differences in the mean score estimates between arm A and arm B, plotted against time, when considering an immortal cohort. The corresponding 95% percentile intervals (based on 1000 bootstrap samples) are also shown. The curve patterns resemble the ones displayed in Figure 4.

Discussion

Results from the data analyses suggest that the true mean score might be overestimated by using the observed mean score, which equals the arithmetic average of the observed score values at a given control week.
The most likely reason for this is that the worst patients, that is, the patients with the lowest score values, fail to complete the questionnaire. Thus, higher score values tend to predominate in the data. The initial and sudden drop in the curves of the mean score estimates in the plots corresponding to arm A is in accordance with what might have been expected; the patients in arm A, who received both radiation therapy and chemotherapy, experienced an immediate reduction in mean score, as opposed to the patients in arm B, who received only radiation therapy. However, the difference between the two treatment arms with respect to the mean score seems to diminish over time. The application of single imputation did not alter the mean score estimates considerably, but the numbers of extra observed score values were indeed quite low. It did, however, lower the empirical standard errors of the mean score estimates. In other words, we gain precision from using single imputation, and this makes our estimates more reliable. The MP method is certainly the easiest one to implement among the three estimation methods. However, this method, unlike the other two, is limited to handling only discrete-valued responses. Further, the IPW method may give more variable estimates and thus less precision [12]. Therefore, we recommend using the LI method in practice when appropriate. This is a good method that is relatively easy to implement. The Matlab® code for the implementation of the methodology considered in this paper is available and can be obtained by contacting the corresponding author.

Conclusion

Health-related QoL is an important research field of current interest. In medical settings we believe that it is crucial to consider QoL when treatments are being evaluated. The obtained results from the data analyses corresponding to the three estimation methods agree with one another. Within each treatment arm, the estimated mean scores of self-reported QoL are adjusted downwards compared to the observed mean score. There are significant differences in the estimated mean scores of self-reported QoL between arm A and arm B at the end of treatment.

Figure 6. Empirical standard errors of the estimated mean scores for an immortal cohort (with single imputation). The figure displays the empirical standard errors of the estimated mean scores (based on 1000 bootstrap samples) for arm A (upper panel) and arm B (lower panel) when considering an immortal cohort. Single imputation has been applied. The blue dotted-line curve corresponds to the IPW method, and the green dash-dotted-line curve corresponds to the LI method.

Differences in the observed and estimated mean scores for an immortal cohort (with single imputation).
2016-05-04T20:20:58.661Z
2009-04-30T00:00:00.000
{ "year": 2009, "sha1": "1e4b041410670ce5bb5d3ff921beb0b506ed0b81", "oa_license": "CCBY", "oa_url": "https://bmcmedresmethodol.biomedcentral.com/counter/pdf/10.1186/1471-2288-9-28", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16d30c29bab7b517e77d0da12f67a156ccd7a41d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267059599
pes2o/s2orc
v3-fos-license
Thoracentesis to alleviate pleural effusion in acute heart failure: study protocol for the multicentre, open-label, randomised controlled TAP-IT trial Introduction Pleural effusion is present in half of the patients hospitalised with acute heart failure. The condition is treated with diuretics and/or therapeutic thoracentesis for larger effusions. No evidence from randomised trials or guidelines supports thoracentesis to alleviate pleural effusion due to acute heart failure. The Thoracentesis to Alleviate cardiac Pleural effusion Interventional Trial (TAP-IT) will investigate if a strategy of referring patients with acute heart failure and pleural effusion to up-front thoracentesis by pleural pigtail catheter insertion in addition to pharmacological therapy compared with pharmacological therapy alone can increase the number of days the participants are alive and not hospitalised during the 90 days following randomisation. Methods and analysis TAP-IT is a pragmatic, multicentre, open-label, randomised controlled trial aiming to include 126 adult patients with left ventricular ejection fraction ≤45% and a non-negligible pleural effusion due to heart failure. Participants will be randomised 1:1, stratified according to site and anticoagulant treatment, and assigned to referral to up-front ultrasound-guided pleural pigtail catheter thoracentesis in addition to standard pharmacological therapy or to standard pharmacological therapy only. Thoracentesis is performed according to local guidelines and can be performed in participants in the pharmacological treatment arm if their condition deteriorates or if no significant improvement is observed within 5 days. The primary endpoint is how many days participants are alive and not hospitalised within 90 days from randomisation and will be analysed in the intention-to-treat population. Key secondary outcomes include 90-day mortality, complications, readmissions, and quality of life. Ethics and dissemination The study has been approved by the Capital Region of Denmark Scientific Ethical Committee (H-20060817) and Knowledge Center for Data Reviews (P-2021–149). All participants will sign an informed consent form. Enrolment began in August 2021. Regardless of the nature, results will be published in a peer-reviewed medical journal. Trial registration number NCT05017753. INTRODUCTION Pleural effusion is present on chest radiographs or thoracic ultrasounds in 50% of the patients admitted with acute decompensated heart failure. 1 2 Treatment options for heart failure-related pleural effusion are diuretics, initiating and optimising guideline-directed heart failure therapy, and sometimes invasive fluid drainage by therapeutic thoracentesis. 
Currently, no evidence from randomised trials supports thoracentesis in patients with acute decompensated heart failure and pleural effusion, and neither the American College of Cardiology/American Heart Association nor the European Society of Cardiology provides recommendations for the use of thoracentesis in their guidelines on heart failure treatment. 3 4

STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ The randomised controlled trial design reduces the risk of confounding by indication seen in observational studies investigating the effect of thoracentesis to alleviate pleural effusion in patients admitted with acute heart failure.
⇒ The pragmatic study design enables the investigation of two treatment strategies for heart failure-related pleural effusion as they are currently practised in the clinic (referral to up-front therapeutic thoracentesis in addition to diuretics and guideline-directed medical therapy compared with diuretics and guideline-directed medical therapy alone).
⇒ The primary outcome, days alive without hospitalisation during the following 90 days, and the patient-reported outcomes with quality-of-life assessment are highly patient centred.
⇒ The trial is pragmatic, and strict requirements regarding the method of quantification of the pleural effusion before randomisation and implementation of a standardised diuretic treatment protocol are not feasible.
⇒ The unblinded nature of the trial introduces a risk of bias.

Ultrasound-guided thoracentesis is a low-risk procedure, also used in the treatment of malignant and infection-related pleural effusions, 5 6 with the most common complications being pneumothorax (1%-6%) and rarely bleeding (<1%). 7 8 In one out of four patients with non-malignant pleural effusion where thoracentesis is necessitated, the precipitating cause is heart failure. 9 The direct advantage of thoracentesis in heart failure-related pleural effusion is the immediate relief of symptoms, which could result in faster discharge. Significant improvement in dyspnoea and mental aspects of quality of life has been observed in a prospective cohort of 164 patients with pleural effusion of all causes, and the improvement in dyspnoea after thoracentesis was not associated with the volume of fluid drained. 10 Also, in a retrospective analysis of 373 out-patients with severely symptomatic heart failure with reduced ejection fraction and moderate or severe pleural effusion refractory to diuretics, thoracentesis resulted in immediate and often long-lasting relief of symptoms. 11 However, fast symptom relief due to thoracentesis may mask the need for optimisation of diuretics and guideline-directed medical therapy for the underlying heart failure condition, leaving patients more prone to recurrence and rehospitalisation. Persistent congestion at the time of discharge is associated with higher mortality and heart failure-related rehospitalisation, 12 and hospitals with shorter durations of stay for acute heart failure have been shown to have higher readmission rates. 13 Data on the effect of thoracentesis on mortality and hospitalisation are sparse and from observational studies. A large American health insurance claims study of more than 70 000 thoracenteses performed during heart failure-related admissions showed an association between thoracentesis and longer hospital stays, increased healthcare expenses and increased in-hospital mortality.
14 However, cases and controls were not matched by the presence of pleural effusion, which leaves a high risk that the unfavourable outcomes associated with thoracentesis in that study are a result of confounding by indication. Prospective cohort studies have determined 30-day and 1-year mortality rates of 9%-22% and 50%-53% in patients with heart failure-related pleural effusion undergoing thoracentesis, 9 15 which is well above 1-year mortality rates of 21%-34% previously reported in patients with acute heart failure. 16 17 A clinical practice guideline from 2002 states that thoracentesis is indicated in patients presenting with shortness of breath when at rest but does not consider the effusion's size or underlying cause or the procedure's timeliness. 18 JM Porcel states in a narrative review from 2010 that 'In patients with large symptomatic effusions, a complementary therapeutic thoracentesis may rapidly relieve the dyspnea'. 19 It is still the current opinion that in patients with intense dyspnoea due to large effusions a single therapeutic thoracentesis should be considered while waiting for the diuretics to take effect. 20 Still, to our knowledge, the amount of pleural effusion or symptom burden in acute heart failure necessitating thoracentesis, the timeliness of the procedure and the current incidence of thoracentesis in acute heart failure are unknown. In all, thoracentesis in heart failure-related pleural effusion is controversial, and the exact timing of thoracentesis and the outcomes from it are unknown. 21 Furthermore, it is unknown whether early timing of thoracentesis improves outcomes such as prognosis, patient satisfaction and quality of life, and time spent in the hospital. Despite this critical gap in evidence, the use of thoracentesis in patients admitted with heart failure is increasing. 14

OBJECTIVES

The main objective of the Thoracentesis to Alleviate cardiac Pleural effusion Interventional Trial (TAP-IT) is to investigate if a strategy of referral to up-front thoracentesis by pleural pigtail catheter insertion (thoracentesis) in addition to standard pharmacological therapy with diuretics and guideline-directed heart failure therapy, compared with standard pharmacological therapy only, increases days alive outside of the hospital during the following 90 days in patients with pleural effusion due to acute heart failure and left ventricular ejection fraction (LVEF) ≤45%. We hypothesise that a strategy of referring patients with heart failure-related pleural effusion to up-front thoracentesis increases the number of days the patients are alive and not hospitalised during the following 90 days. The secondary objectives are to assess the effect of referral to thoracentesis on admission duration, complications and patient-reported outcomes such as patient satisfaction and quality of life.

METHODS AND ANALYSIS

Study design

TAP-IT is a pragmatic, multicentre, open-label, randomised controlled trial carried out in 11 cardiology departments across Denmark. Participating departments represent all administrative regions of Denmark and all are academic hospitals, including four specialised referral centres. The overall trial design is outlined in figure 1.
Study population

Patients aged 18 years or older admitted with signs and symptoms of acute decompensated heart failure, LVEF ≤45% and non-negligible pleural effusion will be screened for inclusion. Patients with both decompensated chronic heart failure and new-onset heart failure will be screened. Patients with LVEF >45% presumed to suffer from heart failure with preserved ejection fraction (HFpEF) will not be considered for this trial. The rationale for this is the ongoing diagnostic uncertainty in HFpEF and the consequent requirement for comprehensive echocardiography or invasive haemodynamic testing, 22 which is not applicable in this pragmatic trial in an acute setting. The pleural effusion can be documented by either chest radiograph, ultrasound, computed tomography or MRI. Other causes of effusion than heart failure are ruled out after clinical assessment with a medical history and physical examination. 23 Patients with an indication for diagnostic thoracentesis or with a suspected pulmonary or pleural infection will be excluded. Other criteria for exclusion comprise absence of informed consent, severe aortic stenosis, severely impaired renal function, planned or expected admission >10 days for other conditions than heart failure, an intrathoracic procedure within the previous 3 months (including thoracentesis) and contraindications to thoracentesis according to local guidelines. Patients receiving oral anticoagulation therapy can be included. In patients with a massive pleural effusion, substantially affected haemodynamics or high oxygen demand, a conservative approach with diuretics is not perceived to be safe and feasible; acute intervention with thoracentesis is indicated and causes exclusion. Detailed inclusion and exclusion criteria are listed in table 1.

Quantification of pleural effusion

Quantification of pleural effusion on chest radiographs is inaccurate but still the most widely used method in patients admitted with acute heart failure. Previous studies have established that a volume of approximately 650 mL corresponds to the effusion forming a meniscus completely obscuring the hemidiaphragm on a standing chest radiograph, 24 which, in this study, will be regarded as non-negligible. Equivalently, pleural effusion can be quantified by thoracic ultrasound. A pleural effusion causing clear visual separation of approximately 3 cm between the chest wall and the lung correlates to a mean drained pleural effusion volume of approximately 550 mL. 25 The optimal method for quantification of pleural effusion is CT, but this is not routinely done in patients admitted with acute decompensated heart failure. Therefore, an exact volume of effusion cannot be used as an inclusion criterion in this pragmatic trial.
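As a simple illustration, the two reference points cited above could be encoded as a screen-level helper along the following lines. This is purely a hypothetical sketch reflecting the figures quoted in the protocol text, not a validated volume estimator or part of the trial's procedures.

```python
def effusion_non_negligible(meniscus_obscures_hemidiaphragm=None,
                            ultrasound_separation_cm=None):
    """Screen-level check of whether a pleural effusion counts as non-negligible.

    Reflects the two reference points cited in the protocol text:
    - a meniscus completely obscuring the hemidiaphragm on a standing chest
      radiograph corresponds to roughly 650 mL of fluid;
    - a chest wall-lung separation of about 3 cm on thoracic ultrasound
      corresponds to a mean drained volume of roughly 550 mL.
    Either finding is taken as sufficient in this sketch.
    """
    if meniscus_obscures_hemidiaphragm:
        return True
    if ultrasound_separation_cm is not None and ultrasound_separation_cm >= 3.0:
        return True
    return False

print(effusion_non_negligible(ultrasound_separation_cm=3.5))  # True
```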
Recruitment

Potential participants will be identified and screened by an investigator from the cardiology department involved in the trial and responsible for the patient's care. Screening can occur at the time of admission to the cardiology department, during ward rounds, in the emergency department or after transfer from another medical specialty to the cardiology department. Participants can be included at any time during admission when the diagnosis of pleural effusion related to acute decompensated heart failure is established, regardless of whether a diuretic treatment regime has already been initiated. After oral and written information, the participants will sign a consent form (a copy of the consent form in Danish is available in the online supplemental material). A screening log with information regarding the reason for screen failure will be kept in an internet-based electronic case report form and randomisation programme.

Randomisation

After informed consent, an investigator will randomise participants with the internet-based electronic case report form and randomisation programme. Randomisation is 1:1 and stratified according to site and whether participants are treated with oral anticoagulation therapy (regardless of type: direct-acting oral anticoagulant or vitamin K antagonist vs no anticoagulation), with alternating block sizes to reduce predictability. Participants are assigned to referral to up-front thoracentesis in addition to standard pharmacological therapy or standard pharmacological therapy only. Physicians, investigators and participants are not blinded to the result of the randomisation.

Intervention

This is a pragmatic trial comparing two already established treatment regimes. Accordingly, participants in the pharmacological treatment arm (control group) will receive standard pharmacological therapy with diuretics in a dosage determined by the treating physician and guideline-directed medical therapy for heart failure as deemed appropriate. There is no standardised medication regime, but the participating departments are encouraged to adhere to international guidelines on heart failure treatment and the use of diuretics in heart failure with congestion as proposed by the European Society of Cardiology.
4 26 Participants in the intervention arm (intervention group) will be referred to up-front thoracentesis in addition to standard pharmacological therapy. Thoracentesis will be performed according to local practice, including any required interruption in anticoagulation treatment. Each hospital will adhere to its standard guidelines for required platelet count, interruption of oral anticoagulants and maximum international normalised ratio (INR). The limit for INR varies across sites due to different possibilities for surgical intervention in case of bleeding complications. In the participating departments, the standard thoracentesis procedure is ultrasound-guided intercostal small-bore pigtail catheter insertion (size 5-8 French), either at the radiology department or in the ward by a trained cardiologist experienced in the procedure. In general, the accessible effusion is passively drained over a few hours in the ward, unless the patient develops hypotension or pulmonary symptoms such as coughing, dyspnoea or moderate pain or discomfort, in which case the drainage will be discontinued. For some patients, the drainage can be required to persist over 24 hours. In bilateral effusions, it is not a study requirement to perform bilateral thoracentesis; the decision to perform staggered bilateral thoracentesis is made by the treating physician based on the participants' symptoms and clinical presentation. Participants are discharged at the discretion of the treating physician when found appropriate according to best clinical practice; there is no requirement to document the total absence of pleural effusion before discharge.

Cross-over

A degree of cross-over between groups is anticipated. Ultimately, the feasibility of thoracentesis in a referred participant will be assessed by the radiologist or cardiologist performing the procedure, as is current practice. Therefore, some participants randomised to a treatment strategy with referral to up-front thoracentesis may not undergo the procedure, possibly due to overestimation of the effusion on the chest radiograph compared with ultrasound. Information on why the procedure was unsuccessful will be reported. Similarly, some participants randomised to pharmacological therapy only may be resistant to diuretics, with an inadequate treatment response, and may ultimately need thoracentesis to achieve symptom relief. The recommended waiting period before performing thoracentesis on a stable participant in the pharmacological treatment arm with an inadequate or slow response to diuretics is 5 days from randomisation. Participants in the pharmacological treatment arm whose condition deteriorates to a degree that they fulfil any of the study exclusion criteria (eg, need for diagnostic thoracentesis, increased oxygen demand or substantially affected haemodynamics) can be referred to thoracentesis immediately.
Procedures

Participants will be treated according to the best practice in the participating departments. All participants with new-onset heart failure will have transthoracic echocardiography performed during the index admission. In patients with an established diagnosis of chronic heart failure, echocardiography performed before the index admission may be used, preferably not older than 3 months; an updated echocardiogram is encouraged, but not mandated. Baseline blood samples include a standard work-up for acute heart failure including albumin, creatinine, estimated glomerular filtration rate (eGFR), sodium, troponin and N-terminal pro-b-type natriuretic peptide. Diagnostic laboratory tests on pleural fluid samples will be collected and are used primarily to support the retrospective classification of the effusion as a non-malignant transudate. Pleural fluid samples in bilateral pleural effusion of known heart failure aetiology are not routine. 23 Body weight and volume of drained pleural fluid will be registered to monitor the decongestion treatment response, as is standard clinical practice. Participating sites are encouraged but not required to perform a standing chest radiograph before discharge. A non-mandatory project-specific biobank is created at selected sites with the collection of additional blood and pleural fluid samples for later batch analyses of biomarkers to assess their predictive value in a separate sub-study.

Follow-up

Patient-reported outcome questionnaires during follow-up in the TAP-IT trial are summarised in table 2. During a 90-day follow-up period, participants will receive a total of three questionnaires regarding patient satisfaction and quality of life. An investigator will contact patients approximately 7 days after discharge to arrange the preferred method for the follow-up questionnaires (electronically, in paper form, or interview by phone). Fourteen days after discharge, participants will receive two questionnaires: the 23-item Kansas City Cardiomyopathy Questionnaire (KCCQ) to assess quality of life 27 28 and selected questions from the survey Questions About Acute Hospitalization used in the annual Danish National Survey of Patient Experiences to assess overall satisfaction with the admission. At the end of the 90-day follow-up, the participants will receive the KCCQ again. To increase response rates, electronic reminders will be sent every third day up to three times. Participants receiving the questionnaires by regular mail will be contacted by phone up to two times at appropriate time intervals to ensure they have received and answered the questionnaire. Data regarding other outcomes will be obtained by review of electronic medical records performed by the coordinating investigator during and after the conclusion of the trial. Data will be entered into an electronic case report form using Research Electronic Data Capture (REDCap V.10.6.18). Primary outcome data will be collected for all participants via electronic medical records, independent of drop-out during questionnaire follow-up.

Primary outcome

The primary outcome is the number of days the participants are alive and not hospitalised during the 90 days following randomisation. Hospitalisation is defined as admission over 24 hours or over a change in calendar date, due to all causes.
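A minimal sketch of how this primary outcome could be derived from admission records is given below. The function name, data layout and day-counting conventions (for example, whether the day of death itself counts) are illustrative assumptions of this sketch; the trial's statistical analysis plan governs the exact definition.

```python
from datetime import date, timedelta

def days_alive_out_of_hospital(randomisation, death, admissions, window=90):
    """Days alive and not hospitalised in the `window` days following randomisation.

    `randomisation` and `death` are datetime.date objects (death may be None);
    `admissions` is a list of (admission_date, discharge_date) pairs that already
    qualify as hospitalisations under the protocol definition (stays over 24 hours
    or spanning a change in calendar date).
    """
    count = 0
    for offset in range(1, window + 1):
        day = randomisation + timedelta(days=offset)
        if death is not None and day >= death:
            continue  # not alive on this day (the death day itself is not counted here)
        if any(adm <= day <= dis for adm, dis in admissions):
            continue  # in hospital on this day
        count += 1
    return count

# Example: death on day 60 and one readmission lasting from day 10 to day 14
# leaves 59 days alive, of which 5 are spent in hospital -> 54.
r = date(2022, 3, 1)
print(days_alive_out_of_hospital(r, r + timedelta(days=60),
                                 [(r + timedelta(days=10), r + timedelta(days=14))]))
```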
Secondary outcomes

Secondary outcomes include days alive and not hospitalised due to heart failure in the 90 days following randomisation; duration of the index admission; mortality; and readmissions. Hospitalisations will be classified as related to heart failure according to guidelines from the American Heart Association and American College of Cardiology. 29 To assess the diuretic response and the effect of the treatment regime, changes in body weight and dosage of diuretics during the index admission will be analysed. Patient-reported outcomes include overall satisfaction with the admission and KCCQ score. Outcomes regarding safety are defined as the number of common complications during hospitalisation and complications to thoracentesis, including the development of thromboembolism within 30 days. 30 Endpoints are detailed in table 3.

Sample size

For estimation of sample size, we assumed a t-test of superiority and found a total of 126 participants required to detect a difference of 3 days in the primary endpoint with an α of 0.05 and a power of 90%. This assumes that participants assigned to a strategy with referral to up-front thoracentesis in addition to pharmacological therapy will have 85 days alive and not hospitalised during the 90 days after randomisation, while participants assigned to pharmacological therapy alone will have 82 days, with a shared SD of 5 days, and in-hospital mortality of 5% in both groups. 16 31 32

Analysis plan

The distribution of the primary outcome, 'days alive and not hospitalised during the 90 days following randomisation', is more complex and unpredictable than our initial assumption of normality, especially owing to the lack of epidemiological data on patients with larger heart failure-related pleural effusions. 33 Accordingly, we have, in the late phase of the TAP-IT trial, chosen our main analysis to be a non-parametric test to compare the difference in the distribution of the outcome instead of a parametric test comparing means. 34 For analysis of the primary endpoint, we will assess the Mann-Whitney parameter for days alive without hospitalisation during the 90 days following randomisation, using the Wilcoxon-Mann-Whitney test. 35 The loss of power with the Wilcoxon rank sum test compared with the t-test is often limited if distributions are normal, and when normality is violated, the Wilcoxon rank sum test can be three or four times more powerful than the independent samples t-test. 34 36 The initially decided power of 90% will allow for a possible small loss of power from using the Wilcoxon-Mann-Whitney test. Time-to-event data will be compared by the log-rank method, and adjusted analyses will be performed by proportional hazards regression. A detailed statistical analysis plan will be made available before database closure. Statistical analysis will be performed using the statistical software R in the release version available at the end of follow-up.

Safety and serious adverse events

There are no experimental treatments or procedures involved in the trial. The two established treatments compared in the trial have known complications and adverse effects. 7 8 Thoracentesis is routinely performed at all participating hospitals, including for other causes of pleural effusion. The risk of arterial thromboembolic complications due to interruption of anticoagulation is approximately 0.3%.
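Returning briefly to the analysis plan above, the planned Wilcoxon-Mann-Whitney comparison and the associated Mann-Whitney parameter can be sketched as follows. The outcome values are hypothetical, and the trial itself specifies R rather than Python; scipy is assumed available here.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical days alive and not hospitalised (0-90) for the two arms.
arm_a = np.array([85, 88, 0, 74, 90, 81, 79, 86, 90, 62])
arm_b = np.array([82, 80, 77, 90, 55, 84, 0, 83, 76, 88])

u_stat, p_value = mannwhitneyu(arm_a, arm_b, alternative="two-sided")

# Mann-Whitney parameter: estimated probability that a randomly chosen
# participant from arm A has a higher outcome than one from arm B (ties split).
mw_parameter = u_stat / (len(arm_a) * len(arm_b))
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, Mann-Whitney parameter = {mw_parameter:.2f}")
```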
30 Serious adverse events will be monitored according to standard regulatory requirements. Results will be published in a peer-reviewed medical journal independent of the nature of the outcome. Communication of the results to trial participants will be personalised, and results will be disseminated to the public through patient organisations and social media.

Contributors JJT: principal investigator, conceptualised the design and initiated the study. JHT, BBL, MS, KKI, OWN, CT, MGL, CAB, NS, AB, ES, SV, MT, RVR, KR, DEH, LK and FG: consulted on the study design and investigation. SG and JJT: obtained funding. SG: coordinating investigator and prepared the first draft of the manuscript. All authors critically revised and approved the final manuscript.

Funding The trial is supported by The Independent Research Fund Denmark (grant number 1030-00121B) with additional funding from the Hartmann Foundation (grant number A36846), Per Henriksen's Fund and the Research Foundation at Copenhagen University Hospital-Bispebjerg and Frederiksberg, Denmark. SG has

Table 1 Inclusion and exclusion criteria
Patients admitted with signs and symptoms of acute decompensated heart failure are screened based on the following criteria:
► Planned or expected admission >10 days for other conditions than heart failure
► Inability to give informed consent
*Patients receiving oral anticoagulation therapy are eligible for inclusion. †Reversible exclusion criteria. If the condition is later stabilised, the patient can be randomised.
CRP, C reactive protein; eGFR, estimated glomerular filtration rate; INR, international normalised ratio; LVEF, left ventricular ejection fraction; SBP, systolic blood pressure; TAVI, transcatheter aortic valve implantation; WBC, white blood cells.

Table 2 Patient-reported outcome questionnaires

The study does not require registration and monitoring from the Danish Medicines Agency (case number 2020031478). The current study protocol V.2.0, dated 7 January 2022, was approved as an amendment to the original study protocol version 1.0 dated 21 December 2021. Clinicaltrials.gov Identifier: NCT05017753 (24 August 2021).

Table 3 Primary and secondary outcomes
Primary outcome
► Days alive and not hospitalised during the 90 days following randomisation
Secondary outcomes
► Days alive and not hospitalised due to heart failure during the 90 days following randomisation
► Duration of the index admission from randomisation to discharge
► Time to death
► Time to first readmission or death
► Changes in body weight from randomisation until discharge
► Changes in dosage of diuretics from randomisation until discharge
► Overall satisfaction with the admission (survey)
► KCCQ score
► Complications to hospitalisation (eg, falls, hospital-acquired infections, delirium) from randomisation until discharge
► Complications to thoracentesis (eg, pneumothorax, bleeding, infection, analgesic treatment, re-expansion pulmonary oedema) from randomisation until discharge, including the development of thromboembolism within 30 days from interruption of anticoagulant therapy before thoracentesis.
KCCQ, 23-item Kansas City Cardiomyopathy Questionnaire.
2024-01-22T06:16:48.122Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "cc548e06b81ef5ca58e571ce246b9dcd2fcb0790", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/14/1/e078155.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed9c57626099809a682257caed7c7150afe8604c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235710830
pes2o/s2orc
v3-fos-license
Novel High-Quality Sonographic Methods to Diagnose Muscle Wasting in Long-Stay Critically Ill Patients: Shear Wave Elastography, Superb Microvascular Imaging and Contrast-Enhanced Ultrasound Novel ultrasound (US) methods are required to assess qualitative changes in the quadriceps rectus femoris (QRF) muscle when evaluating mechanically ventilated, long-stay ICU patients with suspected neuromuscular acquired weakness (ICUAW). Our aim was to analyze novel US muscle assessment methods in these patients versus healthy controls by carrying out a prospective observational study. Shear wave elastography (SWE) showed, with a receiver operating characteristic (ROC) curve of 0.972 (95% confidence interval (CI) = 0.916–1.000), that patients increased muscle stiffness associated with muscle fibrosis when diagnosed with ICUAW. We also performed, for the first time, superb microvascular imaging (SMI), which is an innovative US technique designed for imaging microvascularization unseen with color Doppler US, and observed that 53.8% of cases had significantly lower QRF muscle microvascular angiogenic activity than controls (p < 0.001). Finally, we used contrast-enhanced ultrasound (CEUS) to analyze maximum and minimum QRF muscle perfusion and obtained a ROC curve of 0.8, but when used as markers for SMI, their diagnostic capacity increased to 0.988 (CI = 0.965–1) and 0.932 (CI = 0.858–1), respectively. These findings show, for the first time, that these novel sonographic muscle methods should be used for their diagnostic capacity when assessing sarcopenic processes associated with this group of critically ill patients. Introduction An increasing number of ICU patients require long stays, lasting up to several months after surviving the first acute episode. A long ICU stay is usually defined as the requirement for mechanical ventilation and ICU therapy for more than 1 week and up to 3 weeks [1,2]. In addition, chronic critical illness (CCI) has been defined as an ICU length of stay of at least 8 days combined with at least one eligible diagnosis during hospitalization among the following: prolonged acute mechanical ventilation, sepsis, severe wounds, stroke and traumatic brain injury, and tracheotomy [3]. Secondary sarcopenia and its links to Nutrients 2021, 13, 2224 2 of 15 neuromuscular ICU-acquired weakness (ICUAW) in critically ill long-stay ICU patients have received far less research attention than their short-term counterparts. We have recently described a new, reliable, structured ultrasound (US) protocol to assess secondary sarcopenia in long-stay ICU catabolic patients with ICUAW that allows clinicians to assess quantitative and qualitative changes in the quadriceps rectus femoris (QRF) muscle in severely ill, mechanically ventilated patients with clinically suspected ICUAW [4]. However, new bedside imaging diagnostic techniques to assess qualitative muscle wasting alterations in long-stay ICU patients or in CCI are needed because they are clinically useful secondary sarcopenic diagnostic tools. Shear wave elastography usually relies on a focused acoustic beam generated by a US transducer that compresses the underlying tissue, thus inducing a local shear wave. The speed of that wave, also known as the shear wave elasticity (SWE), is then measured as it propagates through the tissue and is displayed as a parametric image or through a selective region of interest (ROI) analysis in meters per second or kilopascals (kPa), depending on whether measuring speed or elasticity in the US image. 
SWE provides a quantitative metric of tissue stiffness because it directly relates to the local shear elastic modulus. Therefore, the stiffer the tissue, the greater the SWE [5]. Muscle shear wave elastography is considered a high-quality measurement of muscle biomechanical properties [6]. However, although SWE muscle analysis has previously been used in musculoskeletal lesions [7,8], as far as we know, it has never been utilized in critically ill patients with ICUAW who were exposed to relevant muscular alterations to assess secondary sarcopenia. Superb microvascular imaging (SMI) is an innovative US technique specifically designed for imaging very low-flow states, which uses a unique algorithm that allows for the visualization of diminutive vessels with slow velocity without using a sonographic contrast agent [9]. To date, although SMI has been used to detect blood flow signals in various conditions such as breast lesions [9], lymph nodes [10] or carcinomas [11], as far as we know, no clinical research with SMI technology for microvascular assessment has previously been reported in muscle blood flow studies in critically ill patients with neuromuscular secondary sarcopenia associated with ICUAW. The use of intravascular contrast-enhanced ultrasound (CEUS) offers easily accessible visualization and quantification of skeletal muscle and the microcirculation of other tissues with almost no side effects. A previously described CEUS protocol was shown to be able to assess muscle microvascular flow [12]. CEUS has also been shown to be able to demonstrate well-delineated, circumscribed areas of impaired perfusion with hypoenhancement compared with surrounding muscle areas at the clinical level of a lesion in the arterial wash-in phase (0-30 s after intravenous administration). We propose that the use of intravascular CEUS may improve the ability of the US to assess muscle quality characteristics and distinguish between muscular abnormalities in critically ill patients and healthy controls. In addition, the use of intravascular CEUS may also enable the sonographic detection of other minor muscle injuries. The aim of this study was to investigate novel, reliable methods to assess the quality of sonographically analyzed QRF muscle secondary sarcopenia in long-stay mechanically ventilated critically ill patients with suspected neuromuscular acquired weakness, without previous malnutrition at ICU admission, at the bedside [3]. To do so, in addition to previously described US standardized procedures [4], we performed QRF muscle shear wave elastography (SWE), superb microvascular imaging (SMI) and contrast-enhanced QRF muscle ultrasound (CEUS) studies and, particularly, their specific correlation with SMI for muscle microvascular evaluation analysis. We also collected the same data for matched healthy controls. Patients and Healthy Controls We conducted a prospective observational study in a 42-bed adult medical-surgical ICU at a tertiary university hospital in Las Palmas de Gran Canaria (Canary Islands, Spain) between July 2019 and December 2020. As previously stated, patients were not malnourished prior to ICU admission, needed prolonged mechanical ventilation and were expected to have an ICU stay longer than seven days. We defined prolonged mechanical ventilation as when the duration of mechanical ventilation was longer than 14 days [2]. 
In all of the studied patients, when neuromuscular acquired weakness was clinically suspected [13], the novel QRF muscle US tools were applied in addition to performing the previously published US examination protocol [4]. A clinical diagnosis of neuromuscular ICUAW was considered when the patient, once awake, presented with flaccid quadriparesis and hyporeflexia in the absence of other neurological, biochemical or central neurological damage [14], and had a diagnostic electromyogram (EMG) [15], a median Medical Research Council (MRC) score of less than four [16,17], or both. The electromyogram was performed in 17 of the patients, of whom 15 met electrophysiological criteria for axonal polyneuropathy and the remaining two presented neurophysiological criteria for mixed axonal and demyelinating polyneuropathy. Likewise, we also conducted US QRF muscle assessments on age-, sex- and body mass index (BMI)-matched healthy controls. Patients who were not expected to survive longer than three days and those with primary neuromuscular pathology were excluded. The following demographic and clinical data were obtained: age; sex; height; weight; BMI; Glasgow Coma Score (GCS), Acute Physiology and Chronic Health Evaluation (APACHE) II score and Sequential Organ Failure Assessment (SOFA) score at ICU admission; ICU admission and discharge date; and ICU admission diagnosis, length of stay (LOS) and the presence of sepsis. Additionally, we collected data on the following organ failures, also upon ICU admission: respiratory, cardiovascular, renal, hepatic, hematologic and gastrointestinal. Finally, corticosteroid treatment and neuromuscular blocking treatment data were also collected. Novel High-Quality Quadriceps Rectus Femoris US Methods for Sarcopenic Assessment We performed the US assessment with an Aplio 500 US device (Canon Medical Systems Corporation, Tokyo, Japan), with a 10-12 MHz small-parts, multifrequency linear-array probe (probe width: 38-58 mm), on all patients for whom a neuromuscular acquired weakness diagnosis was considered appropriate, as well as on the healthy controls. During the assessment, the participants lay in the supine position with their arms supinated and their knees relaxed and fully extended. The probe was coated with a suitable water-soluble transmission gel to provide acoustic contact without depression of the dermal surface, and it was aligned perpendicularly to the longitudinal and transversal axes of the QRF muscles with the aim of obtaining transverse and longitudinal images. To obtain the most accurate data for each QRF muscle in patients and controls, we acquired at least three longitudinal and three transversal images of each QRF muscle in B-mode, M-mode, Doppler, SMI and SWE. For CEUS, the contrast agent was given once for each leg and at least three images were obtained of each QRF muscle. Once all the images had been collected, we analyzed the numeric data from each one and calculated the median value of each set of measurements. Image files were stored on the US device computer. Since muscle dimensions change with contraction and/or relaxation and the studied muscle is more compressible in a relaxed state [18][19][20], the assessment was performed without compression. The acquisition site was located two-thirds of the way along the femur length, measured between the upper pole of the patella and the anterior superior iliac spine. 
We measured the exact site with electronic calipers so that, once the muscle was imaged, its boundaries could be identified and measured. For greater accuracy, averaged bilateral measurements were estimated. All sonographic exams were performed by a single examiner. In all of the performed studies, we first used real-time B-mode US scanning to assess muscle quantity. Therefore, we measured the cross-sectional area (CSA) in cm² and muscle thickness in mm and explored for the presence of edema in the subcutaneous tissue and for intramuscular and interfacial fluid. Muscle quality was first assessed in four categories according to its echogenicity by using a specifically designed scale, previously protocolized by us [4]. The scale was as follows: homogeneous hypoechogenicity (Category 1); heterogeneous hypoechogenicity (Category 2); fat infiltration (Category 3); muscle fasciitis and/or necrosis (Category 4) [4]. We also used conventional color Doppler US to assess QRF vascularization and, therefore, the angiogenic muscle activity. Additionally, we used M-mode US to demonstrate the presence or absence of fasciculations because it captures the mechanical event of the muscle contraction. Shear Wave Ultrasound Elastography (SWE) During SWE image acquisition, the transducer was coated with a suitable water-soluble transmission gel to provide acoustic contact without depression of the dermal surface, and it was aligned perpendicularly to the longitudinal and transversal axes of the QRF muscles in order to achieve accurate and reliable SWE measurements. The position of the patient, muscle contraction and the pressure applied to the muscle by the probe can all influence the SWE value. To limit bias, in addition to the aforementioned amount of gel, we were careful not to apply any pressure to the muscle during the evaluation. The SWE data for each patient were obtained with the Aplio 500 US device, which has specific software named 2D Shear Wave. The probe must be perpendicular to the muscle to measure the SWE. The system then displays two screens: the left side shows the elastography (kPa) or speed (m/s) map, and the right side shows the propagation map (arrival time contour). For a reliable measurement, all lines of the propagation map should be smooth (not necessarily straight) and parallel to each other; zigzagged or non-parallel lines indicate an unreliable measurement. The region of interest (ROI) is a circle of 10 mm diameter, and to obtain a valid ROI we placed it where the propagation map lines were parallel to each other and smooth. For each longitudinal or transversal scan we measured at least three or four ROIs and then calculated the median values. ROI values were expressed in kPa, and the 2D SWE was color-coded in dark blue (less than 36 kPa), light blue (36-72 kPa) and green, yellow and red (greater than 180 kPa) [21]. SWE images of a case and a healthy control are shown in Figure 1. 
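The relationship between a measured shear wave speed and the kPa values and color bands quoted above can be made concrete with a short sketch. This is only an illustration, not the scanner's own processing: it assumes the commonly used incompressible-tissue relation E = 3ρc² with a nominal soft-tissue density of about 1000 kg/m³, and the function names and example speeds are hypothetical.

```python
# Minimal sketch: converting shear wave speed to a Young's modulus estimate and
# binning it into the color bands quoted in the text (assumed relation E = 3*rho*c^2).

RHO = 1000.0  # assumed soft-tissue density, kg/m^3

def speed_to_kpa(c_m_per_s: float) -> float:
    """Young's modulus estimate in kPa from shear wave speed in m/s."""
    return 3.0 * RHO * c_m_per_s ** 2 / 1000.0  # Pa -> kPa

def color_band(e_kpa: float) -> str:
    """Color bands as quoted from [21]; 72-180 kPa is not assigned a color in the text."""
    if e_kpa < 36:
        return "dark blue"
    if e_kpa <= 72:
        return "light blue"
    if e_kpa > 180:
        return "green/yellow/red"
    return "unassigned in the quoted scale"

if __name__ == "__main__":
    for c in (2.0, 4.0, 8.0):  # hypothetical shear wave speeds, m/s
        e = speed_to_kpa(c)
        print(f"c = {c} m/s -> E ~ {e:.0f} kPa ({color_band(e)})")
```

In practice the scanner reports kPa (or m/s) directly, and, as described above, the study value for each scan was the median over at least three or four valid ROIs.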
Superb Microvascular Imaging (SMI) SMI was performed to observe and record the vascular structures of the QRF muscle. The following parameters were set for the SMI examination: color velocity scale = 1 to 2 cm/s; color frequency = 5-7 MHz; color frame rate > 30 frames per second; the gain setting was adjusted to show optimal vascular imaging information. Both color SMI (cSMI) and monochrome SMI (mSMI) were used in all subjects, but only mSMI was used to assess muscle vascularity in this study due to its higher sensitivity. Therefore, SMI was performed in monochromatic mode, and the SMI settings were standardized to the manufacturer's recommendations of a low-velocity range (<2 cm/s) and a high frame rate with minimal flash artifacts. The monochromatic mode was chosen, as stated above, due to its high sensitivity for low- and slow-velocity blood flow detection. We evaluated two kinds of parameters, quantitative and qualitative. The quantitative parameter is the vascular index, which is expressed as a percentage (%). 
This parameter represents the ratio between the pixels carrying a Doppler signal and all the pixels sampled from the studied muscle, and it can be calculated with the VI application of the Aplio 500 US device software. We also studied the qualitative parameters of the muscle: vessel morphology, vessel distribution and the presence of penetrating vessels. • Vessel morphology can be categorized as simple, which manifests as dot-like or linear forms, or complex, which appears as branching or shunting types. • Vessel distribution can be classified as peripheral, in which all vessels are located at the margin, or central, in which vessels can be detected within the studied muscle. • Presence of penetrating vessels, which is seen as a vessel with high vascularization. Still images from the SMI of the target QRF muscle were obtained and archived on a picture archiving and communication system. All obtained data were expressed as a percentage of microvascularization presence or absence. The additional time required for the SMI analysis was usually less than 10 s for most patients. SMI images of a patient and a healthy control are displayed in Figure 2. 
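As an illustration of the vascular index just described (the proportion of muscle ROI pixels carrying an SMI flow signal), the following sketch computes the ratio on a synthetic frame. The study values themselves came from the scanner's VI application; the arrays, names and numbers here are hypothetical.

```python
# Minimal sketch of a vascular index (VI): percentage of ROI pixels with SMI flow signal.
import numpy as np

def vascular_index(smi_signal: np.ndarray, roi_mask: np.ndarray) -> float:
    """VI (%) = flow-signal pixels within the ROI / all ROI pixels * 100."""
    roi = roi_mask.astype(bool)
    if roi.sum() == 0:
        raise ValueError("empty ROI mask")
    signal_in_roi = np.logical_and(smi_signal.astype(bool), roi)
    return 100.0 * signal_in_roi.sum() / roi.sum()

# Toy example: a 100 x 100 frame in which about 5% of ROI pixels carry a flow signal.
rng = np.random.default_rng(0)
roi = np.zeros((100, 100), dtype=bool)
roi[20:80, 20:80] = True
signal = np.logical_and(roi, rng.random((100, 100)) < 0.05)
print(f"VI ~ {vascular_index(signal, roi):.1f}%")
```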
Contrast-Enhanced Ultrasound (CEUS) Both patients and healthy controls underwent a CEUS assessment of QRF muscle blood flow. An intravenous bolus injection of 4.8 mL of SF6 (sulfur hexafluoride) microbubbles (SonoVue®, Bracco, Italy), an intravascular contrast agent, was given via a cubital intravenous line. The microbubbles are covered and stabilized by a phospholipid membrane. Additionally, this contrast agent is purely intravascular, meaning that the microbubbles do not cross the endothelium. Moreover, the microbubbles are smaller than red blood cells. The contrast agent is eliminated from the body via expired air and has few side effects, which can include headache and abdominal pain. It can be used in patients with renal failure but cannot be used in patients with recent acute coronary syndrome or clinically unstable ischemic cardiac disease. CEUS can be assessed quantitatively and qualitatively. The CEUS enhancement pattern (ROI) is a qualitative parameter and is shown in Figure 3a. The quantitative assessment is based on a time-intensity curve obtained with the built-in software of the Aplio 500 US device. After the infusion, the distribution of the contrast agent was visualized in the early arterial phase, which allowed us to assess the microvascular flow of the QRF muscle [12]. CEUS images were acquired, and time-intensity curve analysis of a CEUS video clip was then performed. After setting an ROI (pink circle) in the area of strongest enhancement, the following quantitative parameters were automatically calculated: peak intensity (×10⁻⁵ arbitrary units [AU]), the maximum intensity of the time-intensity curve; time to peak (seconds), the time needed to reach the peak intensity; mean transit time (seconds), the time during which the intensity is higher than the mean value; slope (×10⁻⁵ AU/second), the maximum wash-in velocity of the contrast agent; and area under the curve (×10⁻⁵ AU·seconds), the integral value of the curve, which was associated with the total blood volume and the sum of the wash-in and wash-out areas. 
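The quantitative parameters listed above can be illustrated with a minimal time-intensity curve analysis. This is not the Aplio 500 built-in software: the sampled curve is synthetic, the names are hypothetical, and the mean transit time is taken literally as the duration over which the intensity stays above its mean, following the wording in the text.

```python
# Illustrative sketch: extracting peak intensity, time to peak, mean transit time,
# wash-in slope and area under the curve from a sampled time-intensity curve.
import numpy as np

def tic_parameters(t: np.ndarray, intensity: np.ndarray) -> dict:
    """t in seconds, intensity in arbitrary units; both 1-D and of equal length."""
    peak_idx = int(np.argmax(intensity))
    dt = np.diff(t)
    slope = 0.0
    if peak_idx > 0:
        # maximum wash-in velocity = steepest rise before the peak
        slope = float(np.max(np.diff(intensity[: peak_idx + 1]) / dt[:peak_idx]))
    return {
        "peak_intensity": float(intensity[peak_idx]),
        "time_to_peak_s": float(t[peak_idx]),
        # time during which the intensity stays above its mean value
        "mean_transit_time_s": float(np.sum(dt[intensity[:-1] > intensity.mean()])),
        "slope_au_per_s": slope,
        # trapezoidal area under the whole curve
        "area_under_curve": float(np.sum((intensity[:-1] + intensity[1:]) / 2 * dt)),
    }

# Hypothetical curve: wash-in over roughly 15 s followed by a slow wash-out.
t = np.linspace(0, 60, 121)
curve = 8e-5 * (1 - np.exp(-t / 6)) * np.exp(-t / 80)
print(tic_parameters(t, curve))
```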
Images of the peak maximum and minimum CEUS intensity of a patient and a healthy control are shown in Figures 3 and 4. The hospital institutional review board approved the study (protocol number: 2019-344-1, 25 July 2019). Written informed consent was obtained from patients or close relatives. Statistical Analysis Categorical variables were expressed as frequencies and percentages, and continuous variables were expressed as means and standard deviations (SD) when data followed a normal distribution or as medians and interquartile ranges (IQR = 25th-75th percentile) when the distribution departed from normality. The percentages were compared, as appropriate, using the chi-square (χ²) test or Fisher's exact test; the means were compared by a t-test, and the medians were compared using the Wilcoxon test for independent data. A receiver operating characteristic (ROC) analysis was conducted to determine the discriminant power of the muscle area for the outcome. The area under the corresponding ROC curve was estimated together with its 95% confidence interval. The discriminant threshold (the point closest to the (0, 1) corner) was chosen as that which minimized the function (1 − Sensitivity)² + (1 − Specificity)². For the obtained predictor, the sensitivity and specificity were estimated together with their 95% confidence intervals. Statistical significance was set at p < 0.05. Data were analyzed using R, version 3.6.1 (R Development Core Team, 2019). 
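The threshold rule just described, the point of the ROC curve closest to the (0, 1) corner minimizing (1 − Sensitivity)² + (1 − Specificity)², can be written out explicitly. The analysis in the study was carried out in R; the sketch below is an illustrative Python version with hypothetical marker values and labels.

```python
# Illustrative sketch of the "closest to (0, 1)" threshold rule for a ROC curve.
import numpy as np

def closest_to_01_threshold(values: np.ndarray, labels: np.ndarray):
    """Return (threshold, sensitivity, specificity) minimizing
    (1 - sensitivity)^2 + (1 - specificity)^2, assuming higher values indicate cases."""
    best = None
    for thr in np.unique(values):
        pred = values >= thr
        sens = np.mean(pred[labels == 1])   # true positive rate
        spec = np.mean(~pred[labels == 0])  # true negative rate
        d2 = (1 - sens) ** 2 + (1 - spec) ** 2
        if best is None or d2 < best[0]:
            best = (d2, float(thr), float(sens), float(spec))
    return best[1], best[2], best[3]

# Hypothetical SWE values (kPa): label 1 = ICUAW patient, 0 = healthy control.
values = np.array([12, 15, 18, 20, 22, 40, 45, 55, 60, 75], dtype=float)
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(closest_to_01_threshold(values, labels))
```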
Results During the study period, 1746 patients were admitted to the ICU, of whom 960 were mechanically ventilated and 362 were ventilated for longer than 7 days; eventually, 167 of these patients had prolonged mechanical ventilation. Neuromuscular acquired weakness was clinically suspected in 26 of these 167 patients (15.5%), to whom the novel QRF-US methods and the study protocol were applied. The median time on mechanical ventilation was 51 days (IQR: 34.2-92.5). The variables for the entire cohort and for each group are summarized in Table 1; there were no significant differences between the groups based on demographics. Data are means ± SD, frequencies (%) and medians (IQR). Abbreviations: SD = standard deviation; IQR = interquartile range; SWE = shear wave elastography; kPa = kilopascals; SMI = superb microvascular imaging; CEUS = contrast-enhanced ultrasound. The clinical characteristics of the patients are shown in Table 2. The patients were critically ill upon ICU admission, as shown by the studied severity scores. Most of them (76.4%) had multiorgan failure (particularly respiratory, renal and cardiovascular failure). Additionally, 84.6% of them were septic upon ICU admission, and 61.5% and 34.6% received corticosteroids or neuromuscular blockers during their ICU stay, respectively. The median time between ICU admission and the clinical suspicion of ICUAW/performance of the QRF muscle ultrasound study was 32 days. As displayed in Table 1, the median SWE values in kilopascals were significantly greater in the patients compared to the control group (p < 0.001), with an area under the ROC curve of 0.972 (95% CI = 0.916-1.000). Additionally, 53.8% of the patients had significantly lower QRF muscle microvascular angiogenic activity levels, as detected by SMI, compared to the controls, all of whom had normal (100%) microvascularization (p < 0.001). Measurements of the maximum and minimum QRF muscle perfusion, as assessed by CEUS, were both significantly lower in the patients versus the controls (p < 0.001). Their diagnostic value was shown by the areas under the ROC curves (0.801 (95% CI = 0.668-0.934) and 0.817 (95% CI = 0.682-0.951), respectively). Measurements of minimum and maximum CEUS, as markers of SMI, had areas under the receiver operating characteristic (ROC) curve of over 0.93 and 0.98, respectively, as shown in Table 3. Data are means ± SD, frequencies (%) and medians (IQR). Abbreviations: ICU = intensive care unit; GCS = Glasgow Coma Scale; QRF-US = quadriceps rectus femoris ultrasonogram. Abbreviations: CEUS = contrast-enhanced ultrasound; SMI = superb microvascular imaging; AUC = area under the curve; (*) point closest to the (0,1) corner. Additionally, as shown in Table 1, the QRF muscle area and thickness significantly decreased (p < 0.001) in the patient group compared to the control group. Their diagnostic values had areas under the ROC curve of 0.971 (95% CI = 0.932-1.000) and 0.950 (95% CI = 0.893-1.000), respectively. Significantly greater levels of intramuscular/interfacial fluid and subcutaneous edema (p < 0.001) were seen in the patients (80.8% and 65.3%, respectively) compared to 0% for both factors in the control group. Echogenicity was also significantly different in the patients versus controls (p < 0.001). 
None of the cases were graded as Categories 1 or 2, but 42.3% had fat infiltration, and 57.7% had muscle necrosis and fasciitis and were thus graded as Categories 3 and 4, respectively. In addition, 69.2% of patients had significantly lower numbers of detected fasciculations on the muscle if we compared them to the healthy controls, who all had 100% of fasciculations (p < 0.001). Of note, there was no significant difference in subcutaneous tissue thickness between both study groups (p = 1). We also visualized QRF muscle vascularization with color Doppler US, which allowed us to assess blood flow changes in the studied subjects; significantly lower QRF muscle angiogenic activity was observed in 53.8% of the patients compared to the healthy controls, who had normal vascularization (p < 0.001). ROC curve analysis results for all of the studied QRF muscle wasting markers are shown in Table 4. Discussion We prospectively studied novel high-quality US methods in an important group of critically ill patients at risk of prolonged ICU and hospital stay or death, along with increased use of health resources. Most of the studied patients were septic, had a multiorgan failure and received corticosteroids or neuromuscular blocking agents, which are wellrecognized risk factors for ICUAW [14,15]. Due to these factors, they were at risk of developing secondary sarcopenia, which eventually leads to musculoskeletal weakness and physical damage that can persist for years in those who survive a prolonged ICU stay. On applying the researched novel high-quality US tools in these patients, we found that QRF muscle SWE showed a significant increase in muscle stiffness, measured in kPa, in patients compared to the control group (p < 0.001), with an outstanding area under the ROC curve of 0.97. This finding suggests that the QRF muscles in these studied patients became stiffer, indicating a significant increase in muscle fibrosis. We also found that 53.8% of the patients had significantly lower QRF muscle microvascular angiogenic activity, as detected by SMI, than the controls, who had normal microvascularization (p < 0.001). As far as we know, this is the first time that these techniques have been applied to this category of critically ill patients. This demonstrates that we are able to detect changes in muscle stiffness and low flow microvascularization with this technique that cannot be visualized using the regular color Doppler US technique. Additionally, maximum and minimum QRF muscle perfusion levels, as assessed by CEUS, were significantly lower in patients than in the controls (p < 0.001). This finding indicates less muscle perfusion and, consequently, a loss of their muscle biomechanical properties. However, muscle perfusion persistence detection is also a good preemptive indicator of muscle strength and functional recovery. When we applied our usual muscle US protocol, we found that QRF muscle area and thickness significantly decreased in the studied patients compared to the control group (p < 0.001), confirming our and others' previous findings of US-diagnosed muscle quantity loss during an ICU stay [4,22]. Interestingly, QRF cross-sectional area has also been found to be a more reliable proxy for muscle strength in an ICU setting, where volitional and nonvolitional muscle strength measurements are challenging [23]. 
As expected, significantly greater levels of subcutaneous edema and intramuscular/interfacial fluid (p < 0.001) were also seen in the patients, compared with the absence of these factors in the controls, which demonstrates that the US can easily detect fluid displacement in these patients. Echogenicity, another relevant muscle quality characteristic in US, was also significantly different in patients versus controls (p < 0.001). Even though during severe catabolic illness, normal musculoskeletal tissue is slowly replaced by fibrous or fat tissues (thus progressively increasing its echogenicity), we observed that none of the cases were graded as Categories 1 and 2, most probably due to the patients having a prolonged ICU stay and, therefore, only exhibiting a greater degree of fat infiltration (Category 3) or muscle necrosis and fasciitis (Category 4). In addition, we also observed that 69.2% of patients had significantly lower numbers of detected fasciculations compared to 100% of healthy controls (p < 0.001). These findings are similar to those obtained in our previous study [4], and they may also be explained by the fact that the patients presented with muscle-specific myofiber alterations [24]. The color Doppler US showed significantly lower QRF muscle angiogenic activity in 53.8% of the patients compared to the healthy controls, who had normal vascularization (p < 0.001), indicating a decrease in muscle angiogenesis, which also confirms previous findings [4]. Regarding methods for musculoskeletal evaluation in critically ill patients, attention has mainly been focused on computed tomography (CT), bioelectrical impedance spectroscopy (BIS) and US [25]. CT allows for muscle quantity and quality assessment but requires patient radiation and, most frequently, moving the patient to radiology department facilities. However, it has a relevant prognostic value, and it has been demonstrated that low skeletal muscle quality at ICU admission (as assessed by CT-derived skeletal muscle density) is independently associated with higher 6-month mortality in mechanically ventilated patients, thus reinforcing the importance of muscle quality and quantity as prognostic factors in the ICU [26]. Serial BIS [27] may be less accessible at the bedside than US but also requires less training for its use. However, it may be misleading, as its measurements are mainly linked to inaccuracies due to large fluxes in fluid status in critically ill patients. This fact, coupled with the lack of reliable weight measures in critical care, the lack of predictive equations for this cohort, and the limitations in positioning the patient for accurate measurements, reduces its usefulness [6]; there is still a need for more research in this specific area. US measurements, however, have demonstrated utility in measuring declines in muscle mass and quality in critically ill patients [2,4,28]. US assessment of musculoskeletal composition by applying the described novel US techniques combined with those previously protocolized by our team provides a unique opportunity to develop improved methods of secondary sarcopenia diagnosis and potential recovery when ICUAW is suspected, based on objective data. Concerning SWE, although there has been an increase in the number of studies regarding musculoskeletal elastography [7], this technique has barely been studied in the setting of critically ill patients [29], particularly in those patients with suspected ICUAW and on long-term mechanical ventilation. 
SWE, as stated earlier, is a method of US imaging based on the detection of shear wave propagation through tissues. By using inversion algorithms, this method maps the waves into elastograms and determines the stiffness of the tissue by measuring the shear modulus value [30,31]. SWE gives a spatial representation of soft tissue stiffness and provides measures of muscle quality, and it seems to be a reliable technique to evaluate limb muscles and the diaphragm in both critically ill patients and healthy controls [29]. SWE has also been established as an excellent diagnostic method for the fibrosis stage, both in nonalcoholic fatty liver disease [32] and in several other nonhepatic applications [7]. In skeletal muscles, it provides a two-dimensional representation and quantifiable measurement of their mechanical properties and an estimate of muscle fibrosis [21]. Although it has been shown that SWE muscle analysis may provide new data about muscle quality during critical illness [29], as far as we know, it has not previously been evaluated in long-stay critically ill patients with suspected ICUAW that are exposed to relevant muscular alterations. In this study, we found, when measuring QRF muscle elasticity, that median SWE values (in kilopascals) were significantly greater in patients compared to the healthy control group. It is important to stress that SWE values, but not echogenicity, are associated with muscle fibrosis and that high shear modulus values have been associated with muscle stiffness, while low shear modulus values have been linked to atrophy in chronic myopathies [29]. Therefore, our results, with an outstanding area under the ROC curve, confirm these findings and show that our studied patients developed significant rigidness of the muscle, which is associated with muscle fibrosis when first diagnosed with ICUAW in long-term mechanically ventilated critically ill patients. We performed SMI to depict the vascular structures of the QRF muscle. It is an innovative software US technique specifically designed for imaging very low flow states, which uses a unique algorithm that allows for the visualization of diminutive vessels at slow velocity without using a contrast agent [9]. To date, although SMI has been used to detect blood flow signals in various serious clinical conditions (mainly in breast lesions [9][10][11]), as far as we know, no clinical research using SMI has been reported in muscle blood flow studies in critically ill patients with neuromuscular damage due to secondary sarcopenia associated with ICUAW. We found that around half of the studied patients had significantly lower QRF muscle microvascular angiogenic activity, as detected by SMI, than the control patients, who had normal vascularization. As far as we know, this is the first time that this technique has been applied to such long-term mechanically ventilated critically ill patients. This means that using this technique makes it possible to detect tiny changes in muscle flow microvascularization that cannot be visualized by color Doppler US alone. It is important to be able to detect this QRF muscle low flow because it most likely has muscle recovery prognostic importance, which should be confirmed in future studies. CEUS may be the preferred method for the assessment of defective skeletal muscle blood flow responses to exercise and for investigating and quantifying responses to therapy [33]. 
CEUS is a non-invasive sonographic technique for quantitative imaging that can be used to assess muscle vascular perfusion. For this technique, a contrast agent containing inert gas-filled microbubbles, which are smaller than red blood cells in size, is injected into the bloodstream. Once the microbubbles are exposed to the high-energy ultrasound in the region of interest (ROI), they are destroyed. The destroyed microbubbles are replenished by the neighboring blood vessels, and afterwards, the microbubble intensity is gradually restored in the ROI. The kinetics of the microbubbles in the ROI are used to estimate perfusion indices, as previously published; the concentration of microbubbles when fully replenished is proportional to the microvascular blood volume (MBV), and the rate at which the microbubbles replenish determines the microvascular flow velocity (MFV). MBV represents the total amount of capillaries participating in the microcirculation at a given moment, whereas blood flow (MBF) is the product of blood volume and flow velocity [33]. In our study, the patients and healthy controls underwent CEUS assessment of QRF muscle blood flow. Maximum and minimum QRF muscle perfusion levels, as assessed by CEUS, showed significantly lower values in patients compared to controls (p < 0.001). This finding indicates lesser QRF muscle perfusion and a decrease in angiogenic activity, which would entail a loss of their biomechanical properties. CEUS by itself also contributes to the analysis of QRF muscle perfusion, but its excellent diagnostic capacity, as shown by an area under the ROC curve of 0.8, is slightly lower than that of other more relevant QRF muscle biomarker tools researched in this study, such as SMI. However, when we use CEUS maximum and minimum values as markers of SMI, they show a better sensitivity and specificity, obtaining a ROC curve value of 0.93 for CEUS minimum and 0.98 for CEUS maximum. As far as we know, this is the first time that this technique has been used to diagnose QRF muscle perfusion alterations in long-term mechanically ventilated critically ill patients. We believe that with these two novel qualitative methods, one can not only establish the patient's current angiogenic activity, but it would also be feasible to forecast, in upcoming additional studies, the patient's recovery prognosis in the long term. Conclusions In this study, we were able to assess specific qualitative changes in the QRF muscle by applying three novel US methods to mechanically ventilated long-stay ICU patients with clinically suspected neuromuscular acquired weakness. Among the newly studied US tools, SWE showed, with an outstanding area under the ROC curve, that the studied patients developed serious muscle rigidness associated with muscle fibrosis when first diagnosed with ICUAW, which has relevant diagnostic and prognostic consequences. We also performed SMI in these patients for the first time; SMI is a new US software analysis that allowed us to detect tiny changes in muscle flow microvascularization unseen with color Doppler US. Being able to detect low muscle flow allows to demonstrate muscle viability, and it has recovery prognostic importance. 
Finally, we used CEUS to analyze QRF muscle perfusion; despite its excellent diagnostic capacity, as shown by an area under the ROC curve of 0.8, it performed slightly worse than the other QRF muscle biomarker tools researched in this study, but when used as a marker of SMI, its diagnostic capacity increased to over 0.9 in terms of the area under the ROC curve. These findings are relevant because they show, for the first time, that these novel sonographic muscle tools can be used to assess the muscle quantity and quality wasting process in this specific group of critically ill patients and should, due to their clinical relevance, be added to sonographic musculoskeletal diagnostic protocols.
2021-07-03T06:17:03.905Z
2021-06-29T00:00:00.000
{ "year": 2021, "sha1": "6702c8d9525707ca9b845d1ebc07ddfc5eb70f8a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/13/7/2224/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e47d181046a964c605ce18a7202a6f984252074c", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
187936523
pes2o/s2orc
v3-fos-license
Regional tendencies in air temperature at the southwestern Pribaikalie The regional features of long-term changes in air temperature in the south-western Baikal region against the background of global climate change were studied. To estimate changes in air temperature in the basins of the south-western Pribaikalie, long-term series of air temperature from the Tunka weather station were used. The correlation coefficients between the series of air temperature at the Tunka weather station and the temperature averaged for the temperate latitudes, the Northern Hemisphere, and the Globe were analyzed. Analysis of the variations in the anomalies of the mean annual temperature averaged for different zones has shown a pronounced increase in the growth rate of the annual air temperature since the beginning of the 1970s for all the series of data. To estimate the relationship between the change in the surface air temperature in the south-western Pribaikalie and large-scale atmospheric circulation mechanisms, correlation coefficients between the series of temperature and the characteristics of the atmospheric circulation have been calculated. The results of the analysis showed that the closest relationship of air temperature in the basins of the south-western Pribaikalie exists with the Scandinavian index (SCAND), the western type of atmospheric circulation of Wangengeym-Girs (W) in winter, and the pressure in the center of the Siberian High in December. Introduction The current climate change is characterized by continuing warming, the main indicator of which is the near-surface air temperature, calculated as the average surface air temperature (at 2 m above the surface) over the continents and the sea surface temperature over the oceans [1,2]. According to the results of the Intergovernmental Panel on Climate Change (IPCC), from 1880 to 2012 the increase in global near-surface air temperature over the continents and oceans was 0.85°C (0.65 to 1.06°C). Since 1951, the rate of growth of surface air temperature was 0.12°С per 10 years (from 0.08 to 0.14), and for 1998-2012 it was only 0.05°C per 10 years (from -0.05 to 0.15). The slowdown in temperature growth was due to natural fluctuations in the climate system and cannot serve as evidence of the cessation of global warming [3]. It is known that the change in the near-surface air temperature of the Northern Hemisphere in the XX and early XXI centuries was also not homogeneous. The anomaly of the mean annual near-surface temperature averaged for the Northern Hemisphere varied unevenly during the period 1850-2014 [4]. Until about 1970, the averaged temperature of the Northern Hemisphere remained relatively low. Since 1970, a phase of active growth of near-surface air temperature has been observed, with a maximum anomaly (0.719°C) in 2005 [5]. The warming that began after 1976 is considered the most intensive. At the same time, interest in research on the problem of climate change has grown [6]. A large number of works have been devoted to the analysis of the ongoing climate changes on the territory of Russia [1,4,5,7-11]. According to the IPCC Fifth Assessment Report, climate change in Russia as a whole (on average for the year and for the territory) should be characterized as continuing warming, noting that the trend towards a slowdown of warming is traced only in winter [3]. 
At the same time, many authors emphasize the very important role of the natural variability of the climate system over the decades, especially for individual regions. Objects, data and methods The territory of the southwestern Pribaikalie is represented by mountain-hollow landscapes. Local conditions are imposed on the zonal features of the climate of this region due to a combination of high-mountain terrain and relatively low intermountain depressions, the latitudinal orientation of the main orographic elements, and regional peculiarities of the atmospheric circulation. In addition, the complex terrain and individual geomorphological conditions contribute to the formation of a unique microclimate of different landscapes, due to the interaction of the circulation and radiation factors and the properties of the underlying surface. The description of the main meteorological parameters and their changes is based on the data of meteorological stations located on the territory of the southwestern Pribaikalie. As has been shown, the data of the Tunka weather station are the most representative for the study area [12]. To assess the changes in air temperature in the basins of the south-western Baikal region against the backdrop of current climate changes, long-term series of temperatures at the Tunka weather station were used [13]. In addition, the anomalies of the mean annual near-surface air temperature averaged for the globe, the Northern Hemisphere and the temperate latitude zone from 1888 to 2014 [14] were studied. Among the leading climate-forming factors that affect fluctuations of the regional climate are the large-scale mechanisms of atmospheric circulation. The connection between the circulation systems and the temperature regime of the Northern Hemisphere has been confirmed by many authors [1,4,5,15,16]. To identify the relationship between large-scale atmospheric circulation mechanisms and components of the climate system, scientists use long-term series of atmospheric circulation indices that are linked to specific geographic sectors. According to the results of modern studies for the territory of Siberia, the most informative are the indices SCAND (Scandinavian index) and NAO (North Atlantic Oscillation index), as well as the frequency of Wangenheim-Girs atmospheric circulation types (W - western, E - eastern, C - meridional) and elementary circulation mechanisms according to the classification of Dzerdzievsky. In addition, the Siberian anticyclone (Siberian High) has a significant influence on the climate of Eastern Siberia [17-19]. In the present paper, to determine the relationship of regional changes in the surface temperature in the south-western Pribaikalie with large-scale atmospheric circulation mechanisms, the monthly and annual values of the atmospheric circulation indices, the frequency of atmospheric circulation types according to the Wangenheim-Girs and Dzerdzievsky classifications, and the position of the center of the Siberian High were used. To estimate the relationship between the change in the surface air temperature of the basins of the southwestern Baikal region (according to the data of the Tunka weather station) and large-scale atmospheric circulation mechanisms, the coefficients of correlation between the series of temperature and the characteristics of the atmospheric circulation were studied. 
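The two basic quantities used throughout this paper, a linear trend coefficient expressed in °C per 10 years and a correlation coefficient between a temperature series and a circulation characteristic, can be computed as in the following sketch. The series shown are synthetic and purely illustrative, not the Tunka observations or the circulation indices themselves.

```python
# Illustrative sketch: decadal linear trend and Pearson correlation with an index.
import numpy as np

def trend_per_decade(years: np.ndarray, temperature: np.ndarray) -> float:
    """Least-squares linear trend of temperature (°C), returned as °C / 10 yr."""
    slope_per_year = np.polyfit(years, temperature, 1)[0]
    return 10.0 * slope_per_year

def correlation(series_a: np.ndarray, series_b: np.ndarray) -> float:
    """Pearson correlation coefficient between two equally long series."""
    return float(np.corrcoef(series_a, series_b)[0, 1])

# Synthetic example: warming of about 0.3 °C per decade plus noise, and an index
# negatively related to the temperature series.
rng = np.random.default_rng(1)
years = np.arange(1976, 2013)
temp = 0.03 * (years - years[0]) + rng.normal(0, 0.4, years.size)
index = -0.5 * (temp - temp.mean()) + rng.normal(0, 0.3, years.size)
print(f"trend: {trend_per_decade(years, temp):.2f} °C/10 yr")
print(f"correlation with index: {correlation(temp, index):.2f}")
```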
Results and discussion To estimate changes in air temperature in the basins of the south-western Pribaikalie against the background of modern climate changes, long-term series of temperatures were used for the Tunka meteorological station. As was shown, the data of the Tunka weather station are the most representative for the investigated territory. Long-term changes in the temperature regime in the basin When analyzing trends in air temperature, special attention has been paid to the period since 1976, because during this interval the most intensive warming is observed. Positive trends in temperature were noted over the whole period of instrumental observations at the Tunka weather station (1939-2015: 0.28°С/10 yr). During the period 1976-2015, the air temperature trend was higher (0.34°С/10 yr) than for the base period (1961-1990: 0.08°С/10 yr) and for the whole period of instrumental observations, which is generally consistent with the global trends of long-term changes in air temperature. In order to analyze the features of regional climate change on the territory of the Russian Federation, we considered the standard physiographic regions [20]. The territory of the southwestern Pribaikalie is a part of the Pribaikalie and Transbaikalia region; the main area of the region is represented by the Altai-Sayan and Baikal mountain countries. Comparison of the linear trend coefficients over the two periods (1976-2006 and 1976-2012) for the territory of the Russian Federation, the Pribaikalie and Transbaikalie region and the Tunka weather station showed that the annual trend values for the territory of the Russian Federation did not change, while in the Pribaikalie and Transbaikalie region and at the Tunka station the rate of increase in the mean annual temperature became smaller (table 1). The results of the correlation analysis showed that the closest relationship of air temperature in the basins of the southwestern Pribaikalie exists with the Scandinavian index (SCAND) and the western type of atmospheric circulation of Wangengeym-Girs (W) in winter (table 2), as well as with the pressure in the center of the Siberian High in December (-0.41). Conclusions Thus, long-term changes in the air temperature of the basins of the southwestern Baikal region occur synchronously with the global changes. However, a great variety of underlying surface types and significant differences in elevation within the mountain-hollow landscape affect regional climate change. The winter months make the main contribution to the increase in the mean annual air temperature in the basins of the south-western Pribaikalie during the period from 1976 to 2012, while for the territory of the Russian Federation and the Pribaikalie and Transbaikalie region as a whole, spring and summer contribute most. Circulation processes play a significant role in the formation of the climate of the basins of the southwestern Pribaikalie, especially in the winter season, when the incoming solar radiation is minimal. In the cold period of the year, the influence of western transport on the territory of Eastern Siberia is weakened as a result of the formation of the Siberian anticyclone, which disrupts the zonal pattern of the temperature distribution. 
The effect of global circulation, as a climate-forming factor, is manifested in the relationships between air temperature and the SCAND index and the frequency of western circulation in the winter months. In addition, the increase in the linear trend coefficient of the mean annual air temperature at the Tunka weather station from 1976 to 2012 is due to the temperature increase during the winter months, when the Siberian High forms and reaches its maximum intensity. This may indicate the influence of the intensity of the Siberian anticyclone on the formation of, and trends in, the thermal regime of the basins of the southwestern Pribaikalie.
2019-06-13T13:19:21.821Z
2018-10-30T00:00:00.000
{ "year": 2018, "sha1": "16275c43975998c32157e6f185953c437ea4e415", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/190/1/012039", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f4a4bcbc08cd475eba392577bbcb43cb68f1be18", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Environmental Science" ] }
248202228
pes2o/s2orc
v3-fos-license
Identifying variation for N-use efficiency and associated traits in amphidiploids derived from hybrids of bread wheat and the genera Aegilops, Secale, Thinopyrum and Triticum Future genetic progress in wheat grain yield will depend on increasing biomass and this must be achieved without commensurate increases in nitrogen (N) fertilizer inputs to minimize environmental impacts. In recent decades there has been a loss of genetic diversity in wheat through plant breeding. However, new genetic diversity can be created by incorporating genes into bread wheat from wild wheat relatives. Our objectives were to investigate amphidiploids derived from hybrids of bread wheat (Triticum aestivum L.) and related species from the genera Aegilops, Secale, Thinopyrum and Triticum for expression of higher biomass, N-use efficiency (NUE) and leaf photosynthesis rate compared to their bread wheat parents under high and low N conditions. Eighteen amphidiploid lines and their bread wheat parents were examined in high N (HN) and low N (LN) treatments under glasshouse conditions in two years. Averaged across years, grain yield was reduced by 38% under LN compared to HN conditions (P = 0.004). Three amphidiploid lines showed positive transgressive segregation compared to their bread wheat parent for biomass per plant under HN conditions. Positive transgressive segregation was also identified for flag-leaf photosynthesis both pre-anthesis and post-anthesis under HN and LN conditions. For N uptake per plant at maturity, positive transgressive segregation was identified for one amphidiploid line under LN conditions. Our results indicated that introgressing traits from wild relatives into modern bread wheat germplasm offers scope to raise biomass and N-use efficiency in both optimal and low N availability environments. Introduction Bread wheat has been selected in plant breeding for improved grain yield and adaptability to diverse environments and agricultural practices. This has led to a loss of genetic diversity [1] and to modification of traits such as vernalization requirement and photoperiod requirement. Development of Rht semi-dwarf wheat cultivars that responded to higher N fertilizer doses in the 1960s and 1970s, the so-called "Green Revolution varieties", also narrowed the genetic base [2]. Moreover, wheat is a naturally self-pollinated crop with less heterozygosity and heterosis than outcrossing crops [3]. Therefore, hybrid seed production to enhance diversity and break the yield plateau is not straightforward. However, new genetic diversity can be created by incorporating genes into bread wheat cultivars from wild relatives [4,5], which have been shown to contain variation for traits of agronomic importance. For example, Triticum urartu (wheat A genome donor) has been implicated in photosynthetic capacity [6,7], Thinopyrum bessarabicum is a highly salt tolerant species [8], and Aegilops speltoides (the putative wheat B genome donor) is adapted to drought/heat environments and nutrient-poor areas [9]. Furthermore, the use of ancestral-derived germplasm, introgressing genes from the ancestral wheat species, has been suggested as a source of variation for tolerance of low N availability in wheat breeding programs [10,11]. Wheat wild relatives can be crossed to bread wheat to produce an interspecific hybrid or amphihaploid. The amphihaploid is then chromosome doubled (e.g. using colchicine) to produce an amphidiploid containing both the complete wheat genome and the complete genome from the wild relative [4]. 
N-use efficiency (NUE) can be defined as the grain dry matter (DM) yield (kg DM ha −1 ) divided by the supply of available N from the soil and fertilizer (kg N ha −1 ; [12]) and can be divided into two components: (i) N-uptake efficiency (NUpE; above-ground N uptake per unit N available) and (ii) N-utilization efficiency (NUtE; grain dry matter yield per unit above-ground N uptake). Wild relatives of bread wheat have been reported to have higher leaf photosynthetic rates compared to modern cultivars [13][14][15] suggesting that wheat breeding may have resulted in lower photosynthetic rates. Austin et al. [6] reported that the rate of leaf net photosynthesis was in general highest in the diploid wheat species, intermediate for the tetraploid species and lowest for hexaploid T. aestivum. Synthetic lines (Triticum durum × Aegilops tauschii) and derivatives developed by CIMMYT have been associated with higher leaf photosynthetic rate [16] and grain yield [17] than the recurrent bread wheat parents under optimal conditions. In addition, primary synthetic spring wheats have been shown to have greater root biomass compared to recurrent parents in Australia [18] and CIMMYT synthetic-derived wheat lines expressed increased partitioning of root mass to deeper soil profiles and grain yield under drought compared with the recurrent bread wheat parents in NW Mexico [19,20]. Increasing biomass of the wheat crop for future gains in yield potential implies an additional requirement for N capture to support photosynthesis. Increased N fertilizer inputs, however, will have economic implications as well as environmental impacts, through nitrate leaching into groundwater and conversion of nitrate by denitrifying soil bacteria into nitrous oxide, a greenhouse gas which contributes to global warming [21,22]. The development of cultivars with reduced requirements for N fertilizer will therefore be of economic benefit to farmers and help to reduce environmental contamination associated with inputs of N fertilizers [23,24]. Promising traits for selection by breeders to increase NUE include deeper roots for increased N uptake [25], increased leaf photosynthesis rate [11] along with the stay-green trait associated with optimized post-anthesis N remobilization [26] and/or late N uptake [27,28]. In the present study, eighteen amphidiploid lines were characterized for NUE and associated traits including plant biomass and leaf photosynthesis rate in glasshouse experiments. The amphidiploid lines were produced by crossing accessions of the wild relatives of wheat (Amblyopyrum muticum, Aegilops speltoides, Aegilops umbellulata, Aegilops comosa, Thinopyrum turcicum and Thinopyrum bessarabicum) with the bread wheat cultivars, Chinese Spring, Paragon, Pavon 76 and Highbury [4]. The objectives were to: (i) identify novel wheat lines (amphidiploids) expressing higher biomass, NUE and leaf photosynthesis rate than the bread wheat parents under high and low N conditions and (ii) understand the physiological mechanisms underlying the improved performance in the novel ancestral wheat-derived amphidiploid lines compared to bread wheat parents. Plant materials Eighteen amphidiploid lines along with four bread wheat parental cultivars were grown in each of two glasshouse experiments ( Table 1). The bread wheat parent cultivars were Paragon (PAR), Highbury (HB) (UK spring wheat cultivars), Pavon 76 (PAV) (Mexican CIMMYT spring wheat cultivar) and Chinese Spring (CS) (Chinese spring wheat landrace). 
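The arithmetic behind these NUE components is simple and, since the study reports them on a per-plant basis, can be illustrated with a minimal Python sketch; the function name and the example values below are hypothetical and are not data from the experiments.

```python
def nue_components(grain_dm_g, aboveground_n_g, available_n_g):
    """Compute NUE, NUpE and NUtE on a per-plant basis.

    grain_dm_g      : grain dry matter yield per plant (g)
    aboveground_n_g : above-ground N uptake per plant at maturity (g)
    available_n_g   : N available per plant from fertilizer and compost (g)
    """
    nue = grain_dm_g / available_n_g         # g grain DM per g available N
    nupe = aboveground_n_g / available_n_g   # N-uptake efficiency
    nute = grain_dm_g / aboveground_n_g      # N-utilization efficiency
    assert abs(nue - nupe * nute) < 1e-9     # NUE factorizes into NUpE x NUtE
    return nue, nupe, nute

# Hypothetical example: 8 g grain DM, 0.20 g N taken up, 0.35 g N available
print(nue_components(8.0, 0.20, 0.35))
```

The identity NUE = NUpE × NUtE follows directly from the definitions, which is why the two components are reported separately when dissecting genotype differences.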
The amphidiploids were produced at the Wheat Research Centre, University of Nottingham by crossing bread wheat as the female parent with a wild species to produce a F 1 interspecific hybrid or amphihaploid. The F 1 hybrids were then chromosome doubled using colchicine to produce the amphidiploids [4,5]. Each amphidiploid was expected to contain the complete genome of wheat plus the complete genome of the wild relative. However, chromosome analysis of a number of the amphidiploids using genomic in situ hybridization (GISH) showed some variation in chromosome number of both the wheat genome and that of the wild relative. A complete GISH analysis of all the lines used in the experiments was not possible and therefore the present analysis will consider each amphidiploid as an individual genotype. Experimental treatments and design Two glasshouse experiments were conducted at the University of Nottingham, UK, Sutton Bonington Campus. The experiments were sown on 23 June 2014 and 6 July 2015 and harvested on 15 December 2015 and 28 December 2016. The experimental procedures and measurements were the same in both years unless stated otherwise. Seeds were sown in plastic modules filled with soil compost (Levington Advance Seed and Modular F2S). After seed germination (~6 days after sowing) seedlings of 2-4 cm length were transferred to a cold room for vernalization for eight weeks at 6˚C with a 12 h photoperiod and then transplanted into 2 L pots (16.8 cm diameter) in the glasshouse using low N peat-compost (Klasmann Medium Peat 818) supplemented with nutrients as described in S1 Table. The experimental design used was a split-plot where two levels of N treatment (HN: high N and LN: low N) were randomized on the main-treatment and 22 genotypes (18 amphidiploids and four bread wheat parents) were randomized on the sub-plot treatment. There were four replicates. A single seedling was transplanted per pot and represented one replicate. Nitrogen was applied as ammonium nitrate (NH 4 NO 3 ) granules (34% N) dissolved in water. Two levels of N were applied, low N at 60 kg ha -1 equivalent and high N at 200 kg ha -1 equivalent (0.25 and 1.27 g NH 4 NO 3 pot -1 under LN and HN conditions, respectively). For the low N treatment, N application was split into two doses each of 30 kg N ha -1 equivalent and for the high N treatment into three doses of 50 kg N ha -1 , 50 kg N ha -1 and 100 kg N ha -1 equivalents. The first application was applied immediately after transplanting and the second at onset of stem extension (GS31) for both treatments. The last application for the high N treatment was at flag-leaf emergence (GS39, Zadoks growth stage [29]). The eighteen amphidiploid lines and four bread wheat parents used in the experiment are shown in Table 1. Glasshouse conditions Plants were irrigated with a complete nutrient solution (minus N) regularly with an automatic irrigation system to maintain plants free from water stress and nutrient stresses (other than N). The composition of the complete nutrient solution (minus N) is described in S1 Table. Daily minimum and maximum air temperature was measured using a tiny tag temperature data logger (Gemini data loggers, S2 Plant development and plant height Regular monitoring of plant growth stages was done following the decimal code of Zadoks growth stages. The growth stages for a plant were assigned when the main shoot was at the specific stage. 
In both years, heading date (GS55), anthesis date (AD, GS61) and physiological maturity date (PMD, GS89, when the peduncle of the main shoot was 100% senesced) were assessed. Plant height to the tip of the ear (excluding awns) was measured on the main shoot at harvest. Leaf gas-exchange traits Gas-exchange measurements were taken on the flag-leaf of each plant on the main shoot using a Li-Cor 6400 XT Portable Photosynthesis System with chlorophyll fluorescence attachment (Li-Cor Biosciences, NE, USA) under HN and LN conditions. Light-saturated photosynthetic rate (A max ) and stomatal conductance (g s ) of the flag leaf were measured. Measurements were taken on the flag leaf twice a week between 10.00 and 15.00 h from flag-leaf emergence (GS37) to mid-grain filling (GS85). The Li-Cor 6400 settings were: flow rate 400 μmol s -1 , block temperature 20˚C with ambient relative humidity. The sample (cuvette) CO 2 concentration was set to 400 μmol mol -1 and PAR was set to 1500 μmol m -2 s -1 (10% blue). All parameters were analyzed by taking average values per plant during each of the pre-anthesis and post-anthesis periods. At anthesis (GS65), the flag leaf area of the main shoot was estimated by measuring the length and width (at the widest) of the flag leaf with a ruler, and then multiplying the product of the length and the width by the correction factor of 0.83 [26]. Flag-leaf visual senescence scoring Senescence kinetics of the flag leaf were assessed visually for main shoots by recording the leaf percentage green area using a standard diagnostic key based on a scale of 0-10 (10 = 100% senesced), as described by Gaju et al. [26]. Assessments were carried out weekly after anthesis until full flag-leaf senescence. The data were then fitted against thermal time from anthesis (GS65; base temperature of 0˚C) using a modified version of an equation with five parameters consisting of a monomolecular and a logistic function [30]. The onset of post-anthesis senescence (VS.Onset; ˚Cd) was defined as the onset of the rapid phase of senescence and the end of post-anthesis senescence (VS.End; ˚Cd) as the thermal time when the visual senescence score is 9.5. The senescence parameters were estimated for each plant and then subjected to ANOVA. Grain yield and NUE component analysis Plants were harvested at physiological maturity by cutting the whole plant at ground level in each pot. The plant was divided into: i) the main shoot, ii) remaining fertile shoots (those with an ear) and iii) infertile shoots (those without an ear). Shoots for each of the main-shoot and remaining-fertile-shoot categories were divided into: i) ear, ii) leaf lamina and iii) stem and leaf sheath, and each component weighed after oven drying at 80˚C for 48 h. After drying, ears were threshed and the grain weighed and counted. Plant N% of: i) grain, ii) leaf lamina and iii) stem and leaf sheath for each of the main-shoot and remaining-fertile-shoot categories was determined separately using the Dumas method [31]. The weight of the infertile shoots was recorded after oven drying at 80˚C for 48 h. NUE and its components were calculated as per Eqs 1-3 on a per-plant basis [32], where available N includes N from the N fertilizer solution and the peat-compost (S1 Table). Statistical analysis ANOVA for a split-plot experimental design was carried out using GenStat 19th edition. 
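As an illustration of the flag-leaf senescence analysis described above, the sketch below fits visual scores against thermal time and extracts onset and end estimates. It uses a plain logistic curve rather than the five-parameter monomolecular-plus-logistic model of [30], and the thermal times, scores and the onset heuristic are hypothetical stand-ins, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical weekly scores (0 = fully green, 10 = fully senesced) vs thermal time
tt = np.array([0, 100, 200, 300, 400, 500, 600, 700, 800], float)   # degree-days post GS65
score = np.array([0.0, 0.1, 0.3, 0.8, 2.0, 4.5, 7.5, 9.2, 9.8])

def logistic(t, lower, upper, t50, k):
    return lower + (upper - lower) / (1.0 + np.exp(-k * (t - t50)))

(lower, upper, t50, k), _ = curve_fit(logistic, tt, score, p0=[0.0, 10.0, 500.0, 0.01])

# Proxies for the two senescence parameters: onset = start of the rapid phase
# (here, a heuristic point at roughly 12% of the rise), end = thermal time at score 9.5.
onset = t50 - 2.0 / k
end = t50 - np.log((upper - 9.5) / (9.5 - lower)) / k
print(f"VS.Onset ~ {onset:.0f} Cd, VS.End ~ {end:.0f} Cd")
```

The same idea carries over to the published model; only the fitted function and the definition of the rapid-phase onset differ. The flag-leaf area estimate mentioned above needs no code: it is simply length × width × 0.83.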
A cross-year ANOVA was applied to analyze N treatment and genotype effects across years and the interaction with year, assuming N treatment and genotype were fixed effects and replicates and year were random effects. Skewed data were transformed and probability values from ANOVA for transformed data were used for significance levels of treatments. Correlation and regression analysis using the mean values across years was carried out using GenStat 19 th edition. Biplots were created using the R program (https://www.r-project.org/) package FactoMineR. Results There was no significant year × N treatment × genotype interaction for the majority of traits including grain yield, biomass plant -1 and N uptake plant -1 reflecting that the experimental glasshouse conditions were similar in both years. Therefore, results are presented for the cross-year means. Anthesis date (AD) was advanced by 1 day and physiological maturity date (PMD) by 10 days under LN compared to HN conditions ( Table 2). There was a significant reduction in ears plant -1 under LN conditions (-50.3%; P = 0.002; Table 2). Post-anthesis A max ranged from 12.0 (CS) to over 20 ( Table, Fig 2B). Twelve amphidiploids under HN and seven under LN showed significant positive transgressive segregation (TS) above their bread wheat parents (Fig 2B). Also three lines (Ae. Table; P<0.001). One line (Ae. umb77 × PAV) maintained g s better under N limitation than its bread wheat parent. For post-anthesis g s four lines (Ae. spe8 × PAV, Ae. spe40 × PAV, Ae. umb77 × PAV and Ae. umb10/3 × CS) maintained g s better under N limitation than their bread wheat parent (S2 Table). Genetic variation in flag-leaf senescence parameters Onset of flag-leaf senescence (VS.Onset) was earlier under LN (730.4˚Cd post GS65) than under HN (826.6˚Cd) conditions (P = 0.016, Table 3). Under HN, VS.Onset ranged from 555.3 (CS) to 1308.7˚Cd (Am. mut12 × Par) and under LN conditions from 519.8 (CS) to 1061.5˚Cd (Am. mut12 × Par) (P<0.001). There was no N × G interaction. Most of the 18 amphidiploid lines showed positive TS with delayed onset of senescence compared to their bread wheat parent under both N treatments (14 under HN and 12 under LN) (Fig 4A). Similar effects were observed for the end of flag-leaf senescence (VS.End) ( Table 3; Fig 4B). GY plant -1 showed a negative association with VS.Onset under HN conditions and also with VS.End under LN conditions (Table 3). Trait associations for yield, yield components, NUE and flag-leaf traits To investigate the trait associations amongst genotypes, principal component analysis was conducted for 13 traits related to grain yield, NUE and flag-leaf senescence for the 18 amphidiploid lines and four bread wheat parents under HN and LN conditions (Fig 5). Under HN, PC1 explained 69.6% of the phenotypic variation and associated traits included GY plant -1 , grains plant -1 , grain N plant -1 and HI showing a positive effect. PC2 explained 17.6% of variation; associated traits were flag-leaf area and AGDM plant -1 with a positive effect, and TGW with HI, with a negative association with anthesis date and VS.Onset. In addition, AGDM plant -1 showed a positive association with FL area and NUpE and negative association with TGW. Under LN conditions, PC1 explained 67.3% of the phenotypic variation and the main traits associated were GY plant -1 , grain N plant -1 and grains plant -1 . PC2 explained 13.6% of variation, the main traits associated with it being pre-anthesis A max , while TGW was also associated with a positive effect. 
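The principal component analysis summarized above was produced with the R package FactoMineR; a minimal equivalent sketch, shown here in Python with a randomly generated stand-in for the 22-genotype × 13-trait matrix, illustrates how genotype scores and trait loadings for such a biplot can be obtained.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in data: rows = 22 genotypes, columns = 13 traits
# (e.g. GY, grains, grain N, HI, AGDM, TGW, flag-leaf area, Amax, VS.Onset, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(22, 13))

Xs = StandardScaler().fit_transform(X)   # standardize traits before PCA
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)               # genotype coordinates on PC1/PC2
loadings = pca.components_.T             # trait loadings (biplot arrows)

print("variance explained:", pca.explained_variance_ratio_)
```

With real trait means in place of the random matrix, the explained-variance ratios correspond to the PC1/PC2 percentages reported above, and the loadings indicate which traits drive each axis.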
GY plant -1 showed a positive association with grain N plant -1 , grains plant -1 , NUtE and NUpE, and a negative association with AD, PMD and VS.Onset. Performance of individual amphidiploids showing TS Among the 18 amphidiploid lines, line Th. tur201 × CS had the greatest positive transgressive segregation for GY plant -1 , AGDM plant -1 and AGN plant -1 under HN conditions. This line also showed the highest TS for NUE and grains plant -1 under LN conditions and for post-anthesis flag-leaf A max under both HN and LN conditions ( Table 2 and S3 Table). This line was not significantly taller than its bread wheat parent. Two other amphidiploids, Se. ana142 × HB and Se. ana141 × CS, also showed significant TS above their bread wheat parent for AGDM plant -1 under HN conditions; however, both these lines were significantly taller than their bread wheat parent under HN conditions. Discussion In the amphidiploid lines, harvest index was lower than in their elite bread wheat parents, as expected, so that grain yield plant -1 was generally less than the respective bread wheat parent cultivar. For the same reason, the N-utilization efficiency (plant grain dry matter yield / plant N uptake) was generally less in the amphidiploid lines than their respective bread wheat parent cultivars. Therefore, this discussion focuses mainly on genetic variation in the amphidiploids relative to the bread wheats in N uptake plant -1 and AGDM plant -1 and their physiological basis including leaf photosynthesis rate, assuming that HI can be subsequently increased by modern breeding in material with higher biomass and N uptake. Effects of plant height and development rate Plant height (PH) under LN (74.1 cm) was slightly higher than under HN (71.1 cm) conditions. Under LN, there was a ca. 50% reduction in fertile shoots plant -1 , likely resulting in less shading of the main-shoot by tillers than under HN conditions, which may have contributed to the small increase in main-shoot plant height under LN. Biomass plant -1 increased with increasing PH among genotypes in both N treatments, as has been reported elsewhere [33,34]. For example, there were genetic gains in plant height in CIMMYT spring wheat cultivars from 1966 to 2009 with a positive association with biomass and grain yield [35]. The amphidiploid lines Se. ana142 × HB and Se. ana141 × CS showed TS over their bread wheat parent for PH under LN conditions. Plant height was reported to be positively associated with seedling root length in recent studies on a Savannah × Rialto DH winter wheat population [36] and a winter wheat Avalon × Cadenza DH population [37]. It could be speculated that the taller amphidiploid lines may have had more extensive rooting systems than their parents contributing to increased N uptake and biomass under LN conditions in the present study, although root traits were not measured here. Present results showed three amphidiploid lines (Th. tur201 × CS, Se. ana142 × HB and Se. ana141 × CS) had positive TS for biomass plant -1 under both HN and LN conditions (except for Se. ana141 × CS under LN). The biomass increase may be less useful if it is associated with increased PH, as PH is generally fixed in modern wheat breeding programs in the optimum range of ca. 70-100 cm [38]. Encouragingly, Th. tur201 × CS had positive TS for biomass plant -1 but similar PH to its bread wheat parent and therefore represents promising germplasm for deployment in pre-breeding for biomass improvement. 
On average, anthesis date was one day earlier and physiological maturity date ten days earlier under LN than HN conditions. Under LN conditions, genotypes with earlier anthesis date had higher GY plant -1 . Earliness is related to the plant's ability to escape severe abiotic stress conditions [39] by reducing growth pre-anthesis and conserving soil resources for more profitable use in grain growth post-anthesis, a useful strategy under terminal stress conditions. Earlier anthesis amongst genotypes increased GY plant -1 due to increased HI rather than biomass, consistent with a stress escape effect. There was a strong association between genetic variation in anthesis date and the timing of flag-leaf senescence in both N treatments, with later anthesis date associated with delayed senescence and extended photosynthesis. Nehe et al. [40] also reported an association between later anthesis and delayed flag-leaf senescence in 16 spring wheat cultivars under HN and LN field conditions in India, similar to the present findings, concluding that greater N uptake at anthesis with later anthesis may have buffered N remobilization from flag-leaves contributing to a stay-green effect. Bogard et al. [27], in contrast, reported that delayed leaf senescence (stay-green) was associated with earlier anthesis date and increased grain yield in a winter wheat Toisondor × CF9107 DH population under both high and low N conditions in field experiments in France. However, delayed senescence (stay-green) has shown promise in cereal breeding for improving yield under N deficiency by increasing photosynthesis duration [11,40], linked to optimized post-anthesis N remobilization and N uptake [24,40]. In the present study the negative association of stay-green with GY under LN conditions may also have been linked to grain sink limitation in the amphidiploid lines [41]. Genetic variation in amphidiploid lines for leaf photosynthesis rate Pre-anthesis leaf photosynthesis rate was not affected by N treatment. This was likely because there was only a small extent of N stress before anthesis. However, post-anthesis A max was also not affected by N treatment implying that flag-leaf N concentration of the main shoot may have been similar in the two N treatments. Significant genetic variation was found in pre- and post-anthesis A max in both N treatments but did not show any association with GY or AGDM plant -1 . Several amphidiploid lines showed higher leaf photosynthesis rate than their bread wheat parents. Interestingly, the three highest biomass amphidiploid lines (Th. tur201 × CS, Se. ana142 × HB and Se. ana141 × CS) each tended to maintain high post-anthesis A max for longer than their parent cultivars under both N treatments (S1 Fig). There was more TS for post-anthesis A max than pre-anthesis A max and results showed specific amphidiploid lines had potential to increase leaf photosynthesis rate compared to their bread wheat parent. Encouragingly, there was no negative trade-off between flag-leaf pre-anthesis A max and flag-leaf area under both N conditions, indicating that higher photosynthesis was independent of flag-leaf area. The lack of a GY plant -1 association with flag-leaf A max implied that grain growth of the genotypes may have been predominantly sink limited, as mentioned above. Higher leaf photosynthesis rate was reported in synthetic-derived hexaploid lines compared to the Paragon UK spring wheat parental cultivar in UK field experiments [11]. 
Greater photosynthetic capacity than bread wheat cultivars and synthetic hexaploid wheat lines was reported in hexaploid triticale, octoploid triticale, and Chinese Spring-rye disomic addition lines with rye chromosomes associated with a Rubisco large subunit gene (at heading and grain-filling stages) and a Rubisco small subunit gene (at grain-filling stage) [42,43]. Furthermore, Merchuk-Ovnat et al. [44] investigated drought-related QTL introgressions from emmer wheat into cultivated wheat and found that yield improvement in introgression lines over their recurrent parent was partly due to enhanced flag-leaf photosynthetic capacity. It can also be speculated that the higher leaf photosynthesis rate and biomass of amphidiploid lines than their bread wheat parents may have been in part related to more chloroplasts per mesophyll cell [45]. Genetic variation in genotypes for N uptake and NUtE N uptake plant -1 at maturity showed a strong association with biomass plant -1 under both N treatments indicating the important role of N accumulation in maintaining photosynthetic capacity and biomass. N-utilization efficiency was higher under N limitation, as reported elsewhere [11,46]. Also, as expected, there was a negative relation amongst genotypes between genetic variation in NUtE and grain N%. Amongst the 22 genotypes, the bread wheat parent Paragon showed the highest N uptake plant -1 under HN conditions. Under LN conditions, Th. tur201 × CS had the highest N uptake plant -1 showing TS over its parent CS for biomass plant -1 and NUpE, indicating its potential as a source of useful traits for breeding for NUE. As far as we are aware, this is the first demonstration of an amphidiploid derived from Thinopyrum turcicum showing improved abiotic stress tolerance compared to bread wheat. This line also showed the highest GY plant -1 , the second highest AGDM plant -1 and the highest NUtE under LN conditions. The high NUtE for this amphidiploid was partly explained by a high NHI. In the present experiments under HN and LN conditions genetic variation in NUtE showed a negative association with the onset of flag-leaf senescence. This may be partly explained by sink-limitation of grain growth in amphidiploids so that GY did not increase even with enhanced post-anthesis photosynthetic capacity in several of the amphidiploid lines. The trend for a positive association of GY plant -1 with timing of onset of flag-leaf senescence observed when considering just the four parent bread wheat cultivars with higher HI under LN conditions also indirectly suggested there was sink limitation in the amphidiploid lines. Alternatively, the stay-green trait in the amphidiploid genotypes may have represented a non-functional stay-green phenotype in the present study [47]. There was an N × G interaction for NUtE with some genotypes increasing NUtE relatively more than others under low N, e.g., Th. tur201 × CS, which also maintained NUE and GY plant -1 relatively better under LN conditions. Numerous previous studies of cultivars and segregating populations have shown an inverse relationship between NUtE and grain N% [48,49], which was also observed amongst the genotypes in our study. Therefore, an enhanced ability to produce viable grains at a low grain N% may be a trait associated with high NUtE and GY under LN conditions. Raising NUtE associated with lower grain N% is feasible in end-use markets for which a high grain starch to protein ratio is desirable, e.g., the feed, distilling or biofuel markets. 
A lower grain N% and higher NUtE may also be linked to a reduced efficiency of post-anthesis N remobilization to the grain [26]. Genetic variation in GY plant -1 showed a slightly stronger positive association with N uptake than with NUtE under both HN and LN conditions. Furthermore, the association between GY and N uptake was stronger under LN than under HN conditions. Previous studies in bread wheat also showed N uptake accounted for a greater proportion of genetic variation in NUE under LN than under HN conditions [11,[50][51][52]. These results indicate that root traits determining N uptake may have been in part underlying the genetic variation in NUE observed under N limitation [53, 54], although root traits were not measured in the present experiments. Present results showed that introducing diversity from wheat wild relatives into cultivated wheat could help in raising NUE in wheat breeding for achieving food security. This supports previous investigations indicating that enlisting wild grass genes is a feasible strategy to combat N limitation in wheat farming, e.g. Subbarao et al. [55]. Several amphidiploids were better adapted to maintaining biomass productivity under low N conditions than their bread wheat parents and therefore have potential for introgressing traits for N stress tolerance in wheat pre-breeding programmes. In the present study, we identified amphidiploid lines, e.g. Th. tur201 × CS, Se. ana142 × HB and Se. ana141 × CS, that have potential to be deployed in pre-breeding programmes for higher biomass and NUE under both HN and LN conditions. In future work, these lines need to be backcrossed with elite bread wheat cultivars and the hexaploid derivatives explored further for expression of leaf photosynthesis and N stress tolerance traits to confirm the present results at the field scale.
2022-04-17T05:13:59.416Z
2022-04-15T00:00:00.000
{ "year": 2022, "sha1": "8a6b23b567154629b80418b2ceaa58748f8f9c1b", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8a6b23b567154629b80418b2ceaa58748f8f9c1b", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
250933069
pes2o/s2orc
v3-fos-license
Surface Modification of Erythrocytes with Lipid Anchors: Structure–Activity Relationship for Optimal Membrane Incorporation, in vivo Retention, and Immunocompatibility Red blood cells (RBCs) are natural carriers for sustained drug delivery, imaging, and in vivo sensing. One of the popular approaches to functionalize RBCs is through lipophilic anchors, but the structural requirements for anchor stability and in vivo longevity remain to be investigated. Using fluorescent lipids with the same cyanine 3 (Cy3) headgroup but different lipid chain and linker, the labeling efficiency of RBCs and in vivo stability are investigated. Short-chain derivatives exhibited better insertion efficiency, and mouse RBCs are better labeled than human RBCs. Short-chain derivatives demonstrate low retention in vivo. Derivatives with ester bonds are especially unstable, due to removal and degradation. On the other hand, long-chain, covalently linked derivatives show remarkably long retention and stability (over 80 days half life in the membrane). The clearance organs are liver and spleen with evidence of lipid transfer to the liver sinusoidal endothelium. Notably, RBCs modified with PEGylated lipid show decreased macrophage uptake. Some of the derivatives promote binding of antibodies in human plasma and mouse sera and modest increase in complement deposition and hemolysis, but these do not correlate with in vivo stability of RBCs. Ultra-stable anchors can enable functionalization of RBCs for drug delivery, imaging, and sensing. Introduction Red blood cells (RBCs) are Mother Nature-made carriers of oxygen. [1] Due to biocompatibility and in vivo longevity (up to 40 days life span in mice and 115 days in humans), RBCs have been explored for delivery of genes, chemotherapy, contrast agents, and enzymes. [2] Several strategies to append molecules, enzymes, and nanoparticles to the RBC surface have been tested, including covalent modifications, [3] targeting of integral membrane proteins, [4] physical absorption, [5] and genetic modification of progenitor cells. [6] Each one of these approaches has advantages and disadvantages. [7] Many groups, including ours, have been working on surface modification of RBCs with lipid anchors, [8] due to the simplicity and versatility of the approach. We found that indocarbocyanine lipid DiI (dioctadecyl-Cy3 or DiI-C18), which is commonly used for labeling of cell membranes, [9] showed much better retention in the RBC membrane than phospholipids, with up to 90% of the lipid present in the RBC membrane 48 h postinjection in mice. [8a] We further prepared an aminomethyl derivative of DiI and conjugated it to thiolated enzymes and antibodies. The DiI-linked molecules efficiently painted mouse RBCs and showed good in vivo stability and retention. [10] At the same time, our previous studies [8a,10,11] explored a limited chemical space, and the retention of the anchor and the circulation of modified RBCs were followed only for a relatively short time (up to 5 days). An open question remains as to which structural parameters of the anchor determine the incorporation efficiency and retention. As mouse erythrocyte has a life span of ≈40 days, [12] it would be interesting to observe the retention of the anchor over the entire life span. To address these questions, we prepared a library of lipid derivatives with the same Cy3 headgroup, but with different linkages between the headgroup and the lipophilic part, and different lipid chain types. 
The presence of the same fluorophore makes the comparison by flow cytometry straightforward and convenient. The results define the role of lipid structure in membrane incorporation and retention and open an avenue for stable functionalization of RBCs and adoptive cell therapies. Lipid Library Design and Incorporation Efficiency in RBC Membrane The lipid derivatives used in the study are shown in Figure 1. We acquired commercially available dialkyl indocarbocyanine lipids DiI-C18, DiI-C18:2, DiI-C16, DiI-C12 and synthesized additional headgroup derivatives of DiI-C18: DiI-amine, DiI-PEG5000, and DiI-PEG3400-methyltetrazine (Mtz). In addition, we prepared diacyl glycerol derivatives of Cy3: Cy3-C18, Cy3-C18:1, Cy3-C16, Cy3-C14, Cy3-C12, phospholipid derivative Cy3-distearoyl phosphatidylethanolamine (Cy3-DSPE), and Cy3-cholesterol. This library covers different lipid lengths, lipid types, and also headgroup-tail linkages. In addition, the library includes a limited number of headgroup derivatives relevant to drug delivery. Thus, PEGylated lipids have importance for erythrocyte PEGylation for blood camouflaging. Mtz is a click chemistry group that enables versatile modifications with biomolecules, [13] but also serves as a mimic of a small molecule payload (M w 230 Da). To determine the labeling efficiency, we used fresh human RBCs from two healthy donors. RBCs were incubated with lipids at 25 μM for 1 h and washed and analyzed with flow cytometry for the percentage of labeled cells and mean fluorescence intensity (MFI). Short-chain DiI-C12 and Cy3 diacyl glycerol (C12, C14) and long chain DiI-PEG5000, DiI-PEG3400Mtz, DiI-C18:2, and Cy3-DSPE resulted in the highest percentage of labeled human RBCs (Figure 2A). DiI-C12, DiI-C16, DiI-C18, Cy3-DSPE, and DiI-C18-PEG3400Mtz showed 2-3-fold higher MFI than the rest of the lipids ( Figure 2B). Interestingly, amino DiI-C18 showed ≈4-fold lower labeling efficiency than the parent DiI-C18. Select lipids were then used to label mouse RBCs. The data in Figure 2C,D show generally higher mouse RBC labeling efficiency (MFI and percent RBCs) than human RBCs. There was no correlation between mouse and human RBC labeling efficiency, except DiI-C12, which exhibited the highest efficiency in both types of erythrocytes ( Figure 2E). Finally, we checked the linearity of the Cy3 group signal when incorporated in RBCs. DiI-PEG3400Mtz showed linear increase in mouse RBC MFI with increasing labeling concentrations ( Figure 2F). Next, we investigated in vivo retention and longevity of modified RBCs. We used the lipids with the highest labeling efficiency ( Figure 2C,D) for in vivo studies: DiI-C18, DiI-C18:2, DiI-C12, Cy3-C12, DiI-PEG3400Mtz, Cy3-DSPE, and Cy3-cholesterol. Labeled RBCs were injected intravenously in BALB/c mice and the circulating levels (percent positive RBCs) and the stability of the lipid (MFI of the labeled population) were analyzed by flow cytometry ( Figure 3A). Representative dot plots for DiI-C18 and DiI-C18:2 ( Figure 3B) show a distinct population of labeled RBCs at both 1 min (between 9% and 13% of total RBCs) and 3 weeks postinjection. We found differences between lipids in terms of both percent RBC and percent MFI ( Figure 3C,D). Thus, DiI-C18, DiI-C18:2, DiI-PEG3400Mtz, and Cy3-cholesterol RBCs exhibited the longest circulation and retention. DiI-C18 RBCs had ≈40 days circulation span and only about 33% decrease in MFI at Day 20. 
DiI-C18-PEG-MTz and DiI-C18:2 RBCs showed somewhat shorter circulation span of ≈30 days and similar decrease in MFI as DiI-C18 at Day 20. Cy3-cholesterol RBCs had ≈30 days circulation span, but MFI dropped below 35% at Day 22. Shorter DiI-C12 RBCs had a circulation span of ≈20 days, and much faster decrease in MFI than DiI-C18 and Cy3cholesterol ( Figure 3C,D). The circulation of Cy3-C12 RBCs and Cy3-DSPE RBCs was the shortest, and both disappeared within a few days postinjection ( Figure 3C,D). Because of rapid clearance, it was not possible to reliably measure MFI for Cy3-DSPE, but MFI for Cy3-C12 dropped by 85% at Day 3 ( Figure 3D). We next questioned whether the lipid retention and RBC longevity are prolonged in the immunodeficient host. NOD-SCID-gamma (NSG) mice are severely immunodeficient with dysfunctional macrophage, adoptive, and innate (e.g., complement) responses. [14] We injected NSG mice with DiI-C12 RBCs that showed fast removal of the lipid and short half life in BALB/c mice. DiI-C12 RBCs demonstrated similar life span and similar change in MFI to BALB/c mice ( Figure 3E,F), suggesting that most of the elimination of lipids from RBCs is not mediated by the immune system. The nature of lipid linker affects in vivo retention. Thus, RBCs labeled with diacyl glycerol derivative Cy3-C12 and phospholipid Cy3-DSPE showed much faster removal than stable DiI-C12. To compare the stability of different lipids in vitro, we measured the fluorescence of labeled RBCs after incubation in mouse serum for 3 h. According to Figure 5A, DiI-C18, DiI-C18:2, DiI-PEG3400Mtz, DiI-C12, Cy3-C12, and Cy3-cholesterol RBCs showed less than 15% loss in MFI at 3 h. At the same time, Cy3-DSPE RBCs showed over 60% loss of MFI. Confocal microscopy showed that Cy3-DSPE and DiI-C18 had similar uniform labeling of RBCs prior to incubation in serum ( Figure 5B). After 3 h incubation in serum, there was five times more fluorescence released in serum from Cy3-DSPE RBCs than from DiI-C18 RBCs ( Figure 5C). Thin layer chromatography (TLC) analysis showed the presence of intact Cy3-DSPE along with some degradation products ( Figure 5D). While 3 h time incubation is shorter than the in vivo longevity of some of the lipids ( Figure 3B), the data indicate the removal of the phospholipid from RBC membrane through interaction with serum. Immune Recognition of the Modified RBCs We next asked which organs mediate the clearance of modified RBCs. Because ex vivo imaging at the Cy3 wavelength is challenging, we injected RBCs labeled with near-infrared indocarbocyanine lipid DiR-C18. We determined the levels of DiR-C18 RBCs in vivo by dotting blood and measuring total near-infrared (NIR) fluorescence with highly sensitive Li-COR Odyssey scanner ( Figure 6A). DiR-C18 RBCs showed in vivo life span of over 30 days, and terminal half life of 7.3 days ( Figure 6B), similar to that of DiI-C18. NIR imaging of organs 30 days postinjection of DiR-C18 RBCs showed predominant accumulation in the spleen, with minor accumulation in the liver and bone marrow ( Figure 6C). To determine the location of the label in the clearance organs, we imaged freshly excised livers and spleens of BALB/c mice injected with DiI-C18 RBCs and DiI-PEG3400Mtz RBCs (Day 40 and Day 29, respectively) with confocal microscope. In all groups, there was a predominant accumulation of Cy3 signal in extrasinusoidal spleen cells and some accumulation in the liver sinusoids, ( Figure 6D). 
Notably, there was evidence of fluorescence transfer to the sinusoidal endothelium in the liver ( Figure 6D, upper right). While DiI-PEG3400Mtz did not reduce the accumulation in the spleen as compared with parent DiI-C18, we observed decreased accumulation in the liver ( Figure 6D, lower right). To test if PEGylated RBCs are less prone to macrophage recognition, we incubated DiI-PEG3400Mtz RBCs and DiI-C18 RBCs with fresh mouse peritoneal macrophages for 24 h and studied the uptake by fluorescence microscopy. We found significantly fewer cells with intracellular Cy3 fluorescence in the DiI-PEG3400Mtz group ( Figure 6E,F), but significantly more RBC rosettes around macrophages ( Figure 6E,G), suggesting reduced internalization of PEGylated RBCs. Collectively, these data suggest that while PEGylation does not prevent the clearance by the spleen, it prevents the uptake by macrophages in the liver and in vitro. Finally, we measured hemolysis, IgG binding, and complement C3 deposition on DiI-C12, Cy3-C12, DiI-C18:2, and DiI-PEG3400Mtz RBCs in autologous human lepirudin plasma and in mouse sera collected from mice injected with respective labeled RBCs (experiment in Figure 3; sera collected postmortem at D20, D20, D29, D29, respectively). For a positive control, we reacted DiI-PEG3400Mtz RBCs with human or mouse IgG-TCO. This two-step reaction results in over 100 000 IgG molecules per RBC, [15] and these RBCs have a much shorter half life than DiI-PEG3400Mtz RBCs. [10] According to Figure 7A-C, "positive control" IgG-modified RBCs exhibited significant hemolysis, C3 opsonization, and IgG binding, whereas other modified RBCs showed much lower hemolysis, C3 opsonization, and IgG binding. Interestingly, DiI-PEG3400Mtz RBCs showed a higher level of C3 and IgG deposition than other derivatives. For mouse RBCs, the relationship between hemolysis, C3, and IgG deposition was less clear. Thus, all derivatives showed elevated binding of IgG, but there was no correlation with hemolysis. Notably, DiI-PEG3400Mtz RBCs showed only a modest IgG binding, suggesting low antibody response to PEGylated RBCs in mice, as suggested before. [13b] Discussion The premise of this work was to understand the role of lipid structure in both the incorporation efficiency and the retention in the RBC membrane. We used a library of lipids with the same Cy3 headgroup to facilitate measurements by flow cytometry. Although fluorescence is not as quantitative a tool as radioactive labeling [5] and could be subject to quenching, it is commonly accepted for measurements of RBC longevity by flow cytometry [12] and enables analysis of the percentage of labeled RBCs and their mean fluorescence. While our experiments did not reveal a general rule regulating in vitro labeling efficiency, short-chain derivatives promoted more efficient labeling of human RBCs than long-chain derivatives. On the other hand, some long-chain lipids showed better human RBC labeling efficiency than others. For example, Cy3-DSPE showed more efficient labeling than DiI-C18 (100% vs 28%), and DiI-PEG3400Mtz showed more efficient labeling than DiI-PEG5000 and DiI-amine. These results suggest that interactions between the lipid headgroup and the RBC membrane components could also determine the labeling efficiency. The propensity of a lipid to form supramolecular assemblies could play a role in the ability to fuse with the membrane. Indeed, our previous study suggested that DiI micelle disassembly could be important for RBC incorporation. 
[10] Therefore, lipids that have a higher critical micelle concentration could more easily interact with the RBC membrane. Mouse RBCs showed better labeling efficiency than human RBCs, both in terms of MFI and percent labeling, which could be due to differences in membrane composition, physical properties, surface area, and metabolism between mouse and human RBCs [16] or differences in the labeling medium (anticoagulant citrate dextrose [ACD] buffer for human RBCs, 1% BSA/PBS for mouse RBCs). In vivo retention of the anchor was often in an inverse relationship to in vitro labeling efficiency. Thus, DiI derivatives with long dialkyl chains or cholesterol exhibited ultralong retention in vivo, whereas Cy3 derivatives with an ester bond, especially Cy3-DSPE, were unstable and quickly removed from RBCs. Our short-term serum incubation experiment suggests that lipid removal by serum could play some role in the loss in vivo, for example, due to transfer to albumin and lipoproteins. It is also possible that incomplete insertion into the membrane makes the ester bond more exposed to esterases. In the case of DiI-PEG3400Mtz and DiI-C18, the estimated half life of the anchor was much longer than the reported 24-day half life of mouse RBCs. [12,17] The half life of RBCs even in the case of DiI-C18 was shorter than 24 days, likely due to elimination of some damaged RBCs, as well as elimination of the label, leading to the inability to detect RBCs with the flow cytometer. The loss of the label as one of the reasons for "apparent elimination" is indirectly supported by DiI-C12 and Cy3-cholesterol, for which the RBC half life closely matched the anchor half life. Most likely, the combined effect of the anchor on immune recognition and malleability of modified RBCs, as well as the removal of the label, result in dramatic differences in the circulation half life. PEGylation seems to have an additional effect on immunocompatibility of RBCs by decreasing C3 deposition and macrophage recognition, which could have an application for preparation of bioinert erythrocytes, for example, for production of universal blood. At the same time, the effect of PEGylation on hemolysis, complement activation, and antibody response is not clear. A previous study suggested no binding of anti-PEG antibodies to PEGylated RBCs in mouse and human serum, [13b] while we found a small increase, which was PEG independent and IgG independent in mouse serum. While the translational significance of these findings remains to be elucidated, it appears that in vivo longevity of modified RBCs does not fully correlate with hemolysis, IgG binding, and C3 opsonization determined in vitro. In summary, besides the practical aspect of RBC membrane derivatization for drug delivery and imaging, due to RBCs being long circulating "cells" and the ease of in vivo sampling, this study provides a basic understanding of the retention of lipids in the cell membrane under in vivo conditions. Synthesis of 2a-e: DCC (3 eq.) in CH 2 Cl 2 (5 mL) was added to a solution of tert-butyl (2,3-dihydroxypropyl)carbamate (1 eq.), fatty acid (3 eq.), and a catalytic amount of DMAP (0.1 eq.) dissolved in dry CH 2 Cl 2 (10 mL) under a nitrogen atmosphere. Stirring of the resulting mixture was continued for 24 h at room temperature, and the solvent was then evaporated under reduced pressure. The obtained product was purified by column chromatography (CH 2 Cl 2 /MeOH, 100/1). The product was obtained as a white powder. 
Synthesis of 3a-e: Compound 2a-e was dissolved in CH 2 Cl 2 (10 mL) with trifluoroacetic acid (25%) under a nitrogen atmosphere. Stirring of the resulting mixture was continued for 30 min at room temperature, and the solvent was then evaporated under reduced pressure. The obtained product was purified by column chromatography (CH 2 Cl 2 /MeOH, 100/1). The product was obtained as a white powder. Synthesis of 4a-e: Compound 3a-e (1 eq.), Cy3-COOH (1.1 eq.), HBTU (3 eq.), and DIEA (3 eq.) were dissolved in dry DMF (10 mL) under a nitrogen atmosphere. Stirring of the resulting mixture was continued for 24 h at room temperature, and the solvent was then evaporated under reduced pressure. The obtained product was purified by column chromatography (CH 2 Cl 2 /MeOH, 100/1). The product was obtained as a white powder. Synthesis of Cy3-Cholesterol: DCC (3 eq.) in CH 2 Cl 2 (5 mL) was added to a solution of cholesterol (1 eq.), Cy3-COOH (1.1 eq.), and a catalytic amount of DMAP (0.1 eq.) dissolved in dry CH 2 Cl 2 (10 mL) under a nitrogen atmosphere. Stirring of the resulting mixture was continued for 24 h at room temperature, and the solvent was then evaporated under reduced pressure. The obtained product was purified by column chromatography (CH 2 Cl 2 /MeOH, 100/1). The product was obtained as a pink solid (yield 58%). RBC Labeling: Fresh human RBCs were obtained from discarded leukodepletion filters after processing sodium citrate anticoagulated donor blood at the Children's Hospital Colorado Blood Donation Center. Institutional review board approval was not required for discarded material and anonymous samples. RBCs were eluted from leukodepletion filters by applying ACD buffer in the direction of the flow and were used within 2 h after blood collection. Sodium EDTA-anticoagulated mouse blood was collected from female or male BALB/c and NSG mice (8-10 weeks of age) via cardiac puncture, according to the animal protocol approved by the University of Colorado IACUC. Human RBCs were washed in ACD buffer and mouse RBCs were washed in 1% BSA/PBS at room temperature at 3,000 g, a total of 3 times. Erythrocyte suspension (≈10 10 /mL) was incubated with 25 μM lipids in ACD buffer (human RBCs) or 1% BSA/PBS (mouse RBCs) at 37 °C for 1 h and washed three times in ACD buffer or 1% BSA/PBS as described above. Labeling efficiency was determined with a Guava easyCyte HT flow cytometer (Luminex Corp, Seattle, WA) as described later. In vivo Circulation and Biodistribution: The University of Colorado Institutional Animal Care and Use Committee (IACUC) approved animal experiments (protocol 103 913(11)1D). Mice were treated according to regulations provided by the Office of Laboratory Animal Resources at the University of Colorado. BALB/c and NOD/LtSz-SCID IL2Rγc null (NSG) mice were bred in-house. Mice of 8-10 weeks of age (males and females) were used for experiments. Mice were injected intravenously with ≈10 9 RBCs (each strain was injected with RBCs derived from the same strain). At different time intervals, ≈20 μL of blood was obtained with a heparinized haematocrit capillary (ThermoFisher) via the periorbital plexus. RBCs were resuspended at ≈0.5 million mL −1 in 1% BSA/PBS and analyzed by Guava easyCyte HT flow cytometer (20 000 RBCs were counted after gating out platelets and leukocytes). The percentage of positive RBCs and mean fluorescence intensity (MFI) of labeled RBCs (if more than 1% of total RBCs) were determined using FlowJo software v.10 (BD Life Sciences, Ashland, OR). 
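For illustration, the two flow-cytometry read-outs used throughout (percentage of labeled RBCs and MFI of the labeled population) can be sketched as follows; the event intensities and the gating threshold below are simulated stand-ins, whereas in the study the gates were set in FlowJo against unlabeled controls.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated Cy3 intensities: 90% unlabeled background plus 10% labeled RBCs
events = np.concatenate([rng.lognormal(2.0, 0.4, 18_000),
                         rng.lognormal(6.0, 0.5, 2_000)])

threshold = 200.0                     # hypothetical gate from an unlabeled control
labeled = events[events > threshold]

percent_positive = 100.0 * labeled.size / events.size
mfi = labeled.mean()                  # MFI is reported only for the gated population
print(f"{percent_positive:.1f}% positive, MFI = {mfi:.0f}")
```

Reporting MFI only for the gated population is what allows the percentage of labeled cells and the per-cell label intensity to decay independently, which is the basis of the retention analysis that follows.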
For pharmacokinetic analysis, the flow data were normalized to 100% injected dose (MFI and percent labeled RBCs at 1 min) and fit to mono- or biexponential decay using the pharmacokinetic modeling software Boomer. [21] When labeled RBCs reached <1% of total RBCs (usually 10% or less of the injected dose), mice were injected with FITC-lectin and Hoechst and sacrificed with carbon dioxide (CO 2 ) followed by cardiac perfusion. Fresh livers and spleens were placed on glass slides and imaged with a Nikon Eclipse AR1HD inverted confocal microscope with a Plan Apo 10× objective as described. [18] For imaging of biodistribution of DiR-labeled RBCs, main organs were arranged in a Petri dish. Lipid Stability in Mouse Serum: Mouse RBCs labeled with lipids were incubated in mouse serum at 37 °C for up to 3 h. Intact RBCs were removed by centrifugation at 3000 g for 10 min at room temperature. RBCs were washed three times in 1% BSA/PBS to remove the remaining serum, and the labeling (MFI) was determined by flow cytometry as described above. The supernatant was analyzed for fluorescence released from RBCs by dotting 2 μL on a 0.45 μm nitrocellulose membrane and scanning with a Bio-Rad gel imager for Cy3 fluorescence at 540 nm excitation/560 nm emission. For TLC, 10 parts of methanol were added to 1 part of serum, and the tubes were centrifuged at 500 g for 10 min to pellet the protein fraction. The methanol phase was carefully collected, applied on a TLC Silica Gel 60 F254 plate (EMD Millipore), and separated with the mobile phase chloroform:methanol (9:1) with 0.1% trifluoroacetic acid. The plates were scanned with a Bio-Rad gel imager for Cy3 fluorescence. Uptake by Peritoneal Macrophages: Fresh nonactivated macrophages were isolated and seeded in culture for 24 h in a 96-well culture-treated plate (Corning Inc.) as described. [19] On the next day, labeled RBCs were added (≈1 × 10 5 RBC/well) in triplicates, and macrophages were incubated for 24 h. In some experiments, peritoneal macrophages were prelabeled with 10 μM DiO for 2 h before addition of RBCs. After incubation, macrophages were washed with PBS 4 times to remove nonbound RBCs, fixed with 4% formalin, and stained with nuclear stain Hoechst. Cells were imaged under 100× magnification with a Zeiss Axio Observer 5 epifluorescence microscope, and ≈7 microscopic fields were acquired. To verify RBC localization outside the cells, a Nikon Eclipse AR1HD inverted confocal microscope with 405, 488, 561, and 640 nm excitation lasers and corresponding emission filters was used. The percentage of DiI+ cells and rosette+ cells per field was determined manually and plotted with Prism. Hemolysis, IgG, and C3 Deposition on Human and Mouse RBCs: Lepirudin-anticoagulated blood was obtained from healthy donors as described before and processed immediately. [20] Blood was centrifuged at 3000 g for 25 min at 4 °C, and plasma was collected and kept on ice. Lepirudin plasma allows complement activation and was used with autologous RBCs in subsequent experiments. Buffy coats were aspirated, and RBCs were washed in ACD buffer and labeled with lipid derivatives as above. For a negative control, plain nonlabeled RBCs were used. For a positive control, RBCs were first labeled with DiI-PEG3400Mtz and then reacted with mouse or human IgG-TCO as described before [15] to produce IgG-coated RBCs. All RBCs were washed once in PBS and were incubated in autologous lepirudin plasma at 37 °C for 1 h (1:4 RBC:plasma volume ratio). RBCs were pelleted by centrifugation at 3,000 g for 10 min at room temperature. 
The supernatant was collected, diluted with PBS, and the absorbance of hemoglobin was measured at 540 nm. The relative hemolysis was expressed as percent change from nonlabeled RBCs incubated in naïve serum. For C3/IgG detection by dot-blot immunoassay, RBC pellet was washed three times in 1% BSA/PBS to remove the remaining serum, resuspended in PBS, and 2 μL of sample was applied in triplicates on a 0.45 μm-pore nitrocellulose membrane (Bio-Rad). The membranes were blocked with 5% w/v milk and probed with anti-C3 antibody for 1 h at room temperature, washed, and then incubated with IRDye 800CW-labeled secondary antibody. For IgG detection, anti-human IgG IRDye 800CW-labeled antibody was directly used. The membrane was scanned using a Li-COR Odyssey infrared imager, and the integrated intensities of dots were determined from eight-bit grayscale images using Fiji software. The quantification data were plotted using Prism software v. 9.0 (GraphPad, San Diego, CA). The relative IgG and C3 binding was expressed as percent difference from nonlabeled RBCs incubated in naïve serum. Mouse hemolysis, C3, and IgG binding experiments were performed exactly as above, except that serum collected from naïve BALB/c mice was used for incubation of negative or positive control RBCs, and serum from BALB/c mice injected with the corresponding labeled RBCs (on the last day of the experiment in Figure 3) was used for incubation with labeled RBCs. Structures of Cy3 derivatives used in the study. A-B) Labeling efficiency (MFI) and percentage of labeled human RBCs with derivatives described in Figure 1. Data are means and SD of three healthy blood donors (male 45yo, male 67yo, and female 47yo). C-D) Labeling efficiency (MFI) and percentage of labeled mouse RBCs with derivatives used for in vivo study. Data are means and SD of 1-3 BALB/c-derived RBC batches. E) Lack of correlation between mouse and human RBC labeling efficiency (MFI multiplied by the fraction of labeled RBCs (1 = 100%)). No correlation was observed for most lipids, except for DiI-C12, which showed the highest labeling of both mouse and human RBCs. F) Linearity of MFI of mouse RBCs labeled with different concentrations of DiI-PEG3400Mtz. Pharmacokinetic analysis of data in Figure 3. A) One-compartment analysis of percent-labeled RBCs. B) % MFI based on extrapolation because some of the derivatives did not reach 50% of the initial level. DiI-C18 and DiI-PEG3400Mtz MFI values were best fitted with the two-compartment model; others were fitted with the one-compartment model. For DiI-C18, one of the mice did not produce a good fit. C) Summary of the analysis. Note that for some of the derivatives, the MFI half-life is much longer than the RBC half-life. DiI-C18 and DiI-PEG3400Mtz RBCs showed the longest half-life and the best stability in the membrane. P-value: ****<0.0001; ***<0.001, **<0.01, *<0.05; one-way ANOVA with multiple comparisons. Biodistribution and immune recognition of RBCs. A) RBCs labeled with DiR were injected in BALB/c mice and blood fluorescence was monitored with the Li-COR Odyssey NIR scanner. B) One-compartment pharmacokinetic analysis shows long half-life (n = 2 mice). C) Organ biodistribution of DiR fluorescence (pseudocolored) shows predominantly spleen and some liver and bone marrow accumulation. D) Confocal microscopy images of fresh livers and spleens of mice injected with DiI-C18 and DiI-PEG3400Mtz (after in vivo labeling of blood vessels and nuclei with FITC-lectin and Hoechst). 
The lipid accumulates in extrasinusoidal cells in the spleen and predominantly in sinusoidal cells in the liver (i.e., endothelium and Kupffer cells). DiI-PEG3400Mtz showed low accumulation in the liver. E) RBCs were incubated with fresh peritoneal macrophages for 24 h. DiI-C18 RBCs show mostly intracellular uptake, whereas DiI-PEG3400Mtz RBCs show mostly extracellular rosettes. Insets show confocal images of DiO-labeled macrophages to demonstrate intracellular versus extracellular localization. F) Quantification of percent cells (per field) that contain intracellular DiI. G) Quantification of percent cells (per field) that contain DiI+ rosettes. P-value: ****<0.0001; ***<0.001; 2-sided t-test, alpha 0.05.
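The circulation half-lives quoted in the captions above were obtained by fitting the normalized flow data to mono- or biexponential decays in Boomer; a minimal mono-exponential sketch, with hypothetical time points rather than data from the study, is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical percent-injected-dose of labeled RBCs vs time (days)
t = np.array([0, 1, 3, 7, 14, 21, 28], float)
pid = np.array([100, 92, 80, 55, 30, 16, 9], float)

def mono_exp(t, a, k):
    # One-compartment (mono-exponential) elimination model
    return a * np.exp(-k * t)

(a, k), _ = curve_fit(mono_exp, t, pid, p0=[100.0, 0.1])
half_life = np.log(2) / k
print(f"elimination rate k = {k:.3f} per day, half-life = {half_life:.1f} days")
```

A two-compartment (biexponential) fit adds a second exponential term and is preferred when an early fast phase precedes the slow terminal decay, as noted above for the DiI-C18 and DiI-PEG3400Mtz MFI data.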
2022-07-22T15:06:15.522Z
2022-07-19T00:00:00.000
{ "year": 2022, "sha1": "0ec0205a3639f5242bc50d1ef68fce5ad21bdd91", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anbr.202200037", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "25fc83f643043dc8e221a312e34cc90c1a3c481b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
235642655
pes2o/s2orc
v3-fos-license
Visual Dysfunction due to the Selective Effect of Glutamate Agonists on Retinal Cells One of the causes of nervous system degeneration is an excess of glutamate released in several diseases. Glutamate analogs, such as N-methyl-DL-aspartate (NMDA) and kainic acid (KA), have been shown to induce experimental retinal neurotoxicity. Previous results have shown that NMDA/KA neurotoxicity induces significant changes in the full field electroretinogram response, a thinning of the inner retinal layers, and retinal ganglion cell death. However, not all types of retinal neurons experience the same degree of injury in response to the excitotoxic stimulus. The goal of the present work is to address the effect of intraocular injection of different doses of NMDA/KA on the structure and function of several types of retinal cells. To globally analyze the effect of glutamate receptor activation in the retina after the intraocular injection of excitotoxic agents, a combination of histological, electrophysiological, and functional tools has been employed to assess the changes in the retinal structure and function. Retinal excitotoxicity induced by the intraocular injection of a mixture of NMDA/KA causes a harmful effect characterized by a great loss of bipolar, amacrine, and retinal ganglion cells, as well as the degeneration of the inner retina. This process leads to a loss of retinal cell functionality characterized by an impairment of light sensitivity and visual acuity, with a strong effect on the retinal OFF pathway. The structural and functional injury suffered by the retina suggests the importance of the glutamate receptors expressed by different types of retinal cells. The effect of glutamate agonists on the OFF pathway represents one of the main findings of the study, as the evaluation of the retinal lesions caused by excitotoxicity could be specifically explored using tests that evaluate the OFF pathway. Introduction An excess of glutamatergic stimulation in the nervous system is at the origin of many neurodegenerative diseases in mammals [1,2]. The toxicity generated by excessive glutamate develops through the activation of ion channels. Different studies show that an increase in intracellular calcium concentration is associated with the hyperactivity of excitatory amino acids [3], but not with that of non-excitatory amino acids [4], revealing the important role that calcium ions play in excitotoxicity. N-methyl-DL-aspartate (NMDA) has been described as the glutamate analog that shows the greatest potency in increasing calcium influx and inducing neurotoxicity, and NMDA-R has been described as the receptor responsible for mediating excitotoxicity [4]. In accordance with this, the use of glutamate antagonists has been shown to provide neuroprotection in animal models of neuronal injury [5]. Kainic acid (KA) is the other major glutamate agonist with an important role in neurotoxicity. As with NMDA, neuroprotection has been demonstrated through the use of its antagonists [6]. In the case of KA, a low concentration of this molecule is capable of causing an increase of intracellular calcium and the death of neural cells, without significant depolarization [7]. Physiologically, glutamate acts through NMDA or KA receptors, so in order to properly assess the excitotoxic effect of glutamate, it must be induced by both pathways. 
Furthermore, as the activation of NMDA receptors by glutamate requires the cell to be depolarized, the joint action of KA could depolarize it, inducing a greater effect of glutamate on NMDA receptors. As proof of this, a significant protection against neuronal death has been demonstrated by the direct antagonism of NMDA and AMPA receptors [5]. Their neuroprotective mechanism could work by antagonizing the cellular calcium influx [8] or through a chelating effect on intracellular calcium [9]. The mammalian retina has been proven to be a useful model for the study of neuronal excitotoxicity. In mice, glutamate excitotoxicity mainly affects the inner retinal layers [10][11][12], location of bipolar cells, conventional and displaced amacrine cells, and ganglion cells [13,14], which are sensitive to glutamate agonists acting on ionotropic glutamate receptors [11]. However, not all retinal cell types are susceptible to excitotoxicity to the same extent; the excitotoxicity induced by NMDA seems to have a strong effect on amacrine cells, a mild effect on bipolar cells, and no effect on photoreceptors [15]. In fact, not even different types of retinal ganglion cells (RGC) have shown the same sensitivity to excitotoxicity. Those RGC with a large soma seem to be more resistant to NMDA excitotoxicity than small RGC [16]. In a previous study of our group, a co-dose of 30 mM NMDA and 10 mM KA was shown to induce a deleterious effect on the inner retina [12]. Electroretinogram (ERG) results showed a significant decrease of the retinal "b" wave amplitude, both in scotopic and photopic conditions. However, the "a" wave amplitude did not change significantly, indicating the preservation of photoreceptors. Histologically, although no effects in the outer nuclear layer were observed, a significant thinning on the inner retinal layers was reported, indicating that NMDA and KA were able to induce a harmful effect on bipolar, amacrine, and ganglion cells. In addition, anterograde tracing of the visual pathway after NMDA and KA injection showed the absence of RGC projections to the contralateral superior colliculus and lateral geniculate nucleus [12]. However, the way in which the cell death process occurred seemed to depend on the magnitude of the excitatory response generated by the inoculated dose [17]. Thus, the dose is so important that it not only determines the type of death, but whether it can induce survival or death [18][19][20][21]. As different cell types exhibit a different response to glutamate toxicity because of their differential composition of glutamate receptors, in the present study, we aimed to determine the cells and pathways that best resist glutamatergic excitotoxicity. We performed new experiments using new lower doses of glutamate agonists, below the level that produces a deleterious effect. Our approach included in vivo studies to assess the functionality of the inner retina, using recording pattern ERG [22], and the spatial visual acuity, using the optomotor test [23]. We also performed ex vivo electrophysiological recordings using a multi-electrode array and immunohistological analysis of different retinal cell types. In the control retinas injected with PBS, Syt2b antibodies showed complete OFF cone bipolar cells, with stronger immunostaining of the bipolar terminals in the OFF sublamina of the inner plexiform layer (IPL), corresponding to axon terminals of type 2 cone bipolar cells. 
A fainter immunostaining of some bipolar terminals in the ON sublamina could be also observed, corresponding to type 6 cone bipolar cells stratification (Figure 1A,D,G, arrowheads). Immunolabeling with calbindin antibodies showed dendrites and axon terminals of horizontal cells in the OPL ( Figure 1D,G). In response to 1:0.3 mM NMDA/KA injection, there was a disorganization of the IPL, where most of the Syt2b OFF terminals were lost, and only some bipolar terminals could still be identified in the IPL ON stratum ( Figure 1B,E,H, arrowheads). Just a few distorted axons remained in the stratum OFF ( Figure 1H), but the axon terminals of the ON cone bipolar cells could still be identified in the IPL ON stratum ( Figure 1H arrows). Although the immunoreactivity of the OFF bipolar cell bodies and terminals decreased at this dose, immunoreactivity to horizontal presynaptic synaptosomes was visible at the OPL with a normal morphology, and just some sprouting could be observed. In addition, some dendritic terminals at the OPL were conserved, and it is reasonable to think that at least a portion of the cone bipolar cells remained alive. Horizontal cells (red) were maintained at this dose ( Figure 1E,H), but small sprouts of horizontal and cone bipolar cells (green) towards the outer nuclear layer (ONL) began to appear. In response to 10:3 mM NMDA/KA injection, a strong thinning of the retinal inner layers was observed ( Figure 1C,F). The INL, IPL, and GCL layers showed a drastic thickness decrease, while for the IS, ONL, and OPL layers, the thicknesses were maintained, indicating that the photoreceptors were not affected by the NMDA and KA injection. Cone bipolar cells immunostained with the Syt2b antibody were almost completely lost, and only a few ON terminals remained in the IPL ( Figure 1C,F, arrowheads). Despite the major bipolar cell loss and inner layers' disorganization, the horizontal cells remained ( Figure 1F,I), but they projected long processes towards the ONL ( Figure 1I,J, arrows), probably in search of new connections due to the loss of the inner layers. In the case of ON rod bipolar cells, PKCα labeling showed that they were slightly shortened in response to 1:0.3 mM NMDA/KA injection, compared with the control retina ( Figure 2A,B,H,I). With higher doses of NMDA/KA ( Figure 2C,J-L), the shortening of rod bipolar cell axonal processes was evident, accompanied by a global inner retina thinning. Nevertheless, in contrast with what happened to the OFF bipolar cells, here, the ON rod bipolar cells still maintained some of their terminals at the IPL. This fact suggests a bigger effect of the NMDA/KA mixture in the OFF pathway. When using the 10:3 mM NMDA/KA dose, sprouts of the rod bipolar cell dendrites towards the outer retina were also clearly observed ( Figure 2C,K, arrow). RGC (Figure 2A-F, red) also seemed to be affected by increasing concentrations of NMDA/KA, and their gradual loss accompanied the thinning of the inner retina previously described. Regarding the state of amacrine cells after treatment, in response to 1:0.3 mM NMDA/KA injection, the labeling of amacrine cells showed a decrease in the labeling intensity of AII and dopaminergic amacrine cells, and an evident loss of calretinin and ChAT amacrine cells (Figure 3). At the IPL, dopaminergic amacrine cells formed a plexus that synapsed with the bodies of the AII amacrine cells, which seemed to be maintained at this dose of excitotoxic agents ( Figure 3J,K). 
In contrast, the plexus of starburst amacrine cells disappeared almost completely in both the OFF and ON sublayers of the IPL, and the pattern of the three levels of stratification observed with calretinin labeling was not visible anymore (Figure 3D,E,G,H,M,O). In response to 10:3 mM NMDA/KA injection, the amacrine loss was dramatic and there was almost no AII, dopaminergic, starburst, or calretinin-labeled amacrine cell remaining. As a consequence, their plexus at the IPL also disappeared almost completely (Figure 3C,F,I,L,P).
Figure 1. Double immunolabeling with calbindin antibodies shows dendrites and axon terminals of horizontal cells in the OPL (D,G). NMDA/KA 1/0.3 treated retinas show OFF cone bipolar cell death, and only a few distorted axons remain in the OFF stratum (B,E,H arrowhead). In contrast, axon terminals of the ON cone bipolar cells can still be identified in the inner plexiform layer (IPL) ON stratum (B,H arrows). At this dose, the terminals of the horizontal cells show a normal morphology, and only some sprouting can be observed (E,H). NMDA/KA 10/3 treated retinas induce a clear reduction of the inner retina. While the INL, IPL, and GCL layers show a drastic thickness decrease, the IS, ONL, and OPL layer thicknesses are maintained, indicating that photoreceptors are not affected by the treatment (C,F). Neither ON nor OFF bipolar cells can be detected (C,F) in this condition, and horizontal cells display abnormal dendrite sprouting towards the ONL (C,F,I,J arrows) and the IPL (C,F,I,J arrowheads). IS-inner segments; ONL-outer nuclear layer; OPL-outer plexiform layer; INL-inner nuclear layer; IPL-inner plexiform layer; GCL-ganglion cell layer. All of the images were obtained from temporal retina, ca. 500 µm from the optic disk. Scale bar of 10 µm.
Figure 3. Tyrosine hydroxylase shows the dopaminergic amacrine cells and their dendritic plexus in the S1 stratum of the IPL (A,J, green). Dab1 shows the AII amacrine cells, whose typical lobular appendages are mainly in the OFF layer and their dendritic terminals in the ON layer (A,J red). Synaptic contacts from dopaminergic cells around the cell bodies of AII amacrine cells can be observed (J arrowheads). A decrease in TH and Dab1 immunoreactivity intensity is found in response to the 1/0.3 concentration of NMDA/KA. The morphology of AII amacrine cells looks disorganized, but the synaptic contacts with the dopaminergic cells still remain (B,K). At the NMDA/KA 10/3 dose, AII amacrine cells cannot be identified and only a few dendrites of dopaminergic cells can be observed in the S1 stratum of the IPL (C,L arrowhead). Double immunolabeling with antibodies against calretinin and choline acetyltransferase (D-I,M-P). Calretinin immunoreactivity labels several types of amacrine cells and ganglion cells with three typical plexuses of dendrite stratification in the IPL (D,G,M red). ChAT immunoreactivity is found in starburst amacrine cells, whose cell bodies are located in the INL and in the ganglion cell layer, and their dendrites stratify in two specular plexuses in the ON and OFF layers of the IPL (D,M green). At the 1/0.3 concentration of NMDA/KA, calretinin immunoreactive amacrine cells cannot be identified, and only a few ChAT amacrine cells and some CR ganglion cells remain (E,H,O). Both plexuses experience a big disruption and disorganization. At a 10/3 concentration, only some spots of ChAT and CR immunoreactivity can be observed in the IPL, accompanying IPL degeneration (F,I,P). INL-inner nuclear layer; IPL-inner plexiform layer; GCL-ganglion cell layer. Scale bar of 10 µm.
Retinal Multielectrode Recording
The response properties of different types of RGC (ON, OFF, and ON/OFF) were analyzed both in the control animals and in those injected with 10:3 mM NMDA/KA. In addition to the decrease in the total number of recorded RGC in the NMDA/KA injected retinas, a change in the relative proportions was observed. From a total of 237 RGC recorded from four control eyes, 37.5% (n = 89) were ON, 14.7% (n = 35) were OFF, and 47.6% (n = 113) were ON/OFF, while in the NMDA/KA injected eyes, from a total of 50 RGC recorded from four animals, 96% (n = 48) were ON, 4% (n = 2) were OFF, and no ON/OFF responses were recorded. These data indicate a statistically significant increase in the proportion of ON-type RGC and the absence of ON/OFF type RGC (Figure 4A). The possibility that ON/OFF type RGC lose the OFF response should not be discarded (see discussion). The transient or sustained responses of RGC were also analyzed. All types of responses were observed in the control group, while in the NMDA/KA injected group, we were not able to observe any sustained OFF nor ON/OFF response (Figure 4B). In summary, there was a global impairment of the retina with special incidence in the OFF responses. Light sensitivity was also analyzed in the ON-type RGC. Sensitivity to light stimuli of increasing intensities differed between the control and NMDA/KA injected eyes. RGC from the control retinas were more sensitive to light stimuli of any tested intensity (Figure 5A). A proportion of 29% of RGC in the control retinas were sensitive to 6.2 cd·s/m², while in the NMDA/KA group, just 4% of cells were sensitive to such a light intensity. For any light intensity of a higher magnitude, statistically significant differences were observed between the control and NMDA/KA groups (chi-square p < 0.001). Furthermore, a significant reduction in firing frequency was observed in the NMDA/KA group (Mann-Whitney U), both during the light stimuli and in basal activity (Figure 5B). Response latency was also analyzed in RGC from both experimental groups (Figure 5C), and a statistically significant increase was observed in the NMDA/KA injected group (Mann-Whitney U). Finally, a statistically significant decrease of the RGC receptive field was observed in the NMDA/KA injected group when compared with the control group (Mann-Whitney U; Figure 5D). We further tested the directional selectivity of the recorded RGC. Cells with and without directional selectivity could be recorded in both experimental groups (Figure 6). The proportion of RGC with directional selectivity did not differ significantly between the control group and the NMDA/KA injected group (a short numerical sketch of these group comparisons is given below).
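To make the group comparisons reported in this section concrete, the following is a minimal sketch of how the proportion and firing-rate tests could be run in Python with SciPy. The RGC counts are taken from the text; the firing-rate arrays are placeholder values, not the study's data, and SciPy is only an assumed tool here (the paper does not state which package computed these statistics).

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Functional RGC types recorded per group (counts quoted in the text):
# control: 89 ON, 35 OFF, 113 ON/OFF; NMDA/KA: 48 ON, 2 OFF, 0 ON/OFF.
counts = np.array([[89, 35, 113],
                   [48,  2,   0]])
chi2, p_prop, dof, _ = chi2_contingency(counts)
print(f"RGC type proportions: chi2 = {chi2:.1f}, dof = {dof}, p = {p_prop:.2e}")

# Firing frequency during the light stimulus (spikes/s); invented per-cell values
# standing in for the data behind Figure 5B.
rate_control = np.array([24.1, 18.7, 30.2, 22.5, 27.9, 19.4])
rate_nmda_ka = np.array([9.8, 6.3, 11.0, 7.5, 8.1])
u, p_rate = mannwhitneyu(rate_control, rate_nmda_ka, alternative="two-sided")
print(f"Firing rate: Mann-Whitney U = {u:.1f}, p = {p_rate:.3f}")
```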
Just 18 out of 208 RGC (≈9%) of the control retinas showed a direction index >0.5, while 9 out of 46 RGC (≈20%) in the NMDA/KA injected group showed a direction index >0.5. However, an apparently different response accuracy was observed in the directional sensitivity between both experimental groups (Figure 6), with the response being quite accurate in the control group and quite broad in the NMDA/KA injected group, although the degree of accuracy was not statistically analyzed because of the huge variability of the cell responses. The single cell results obtained by the multielectrode recordings suggested that the injury caused by the injection of NMDA/KA affected the functionality of the RGC that remained alive, with a higher effect on the OFF response.
Figure 6. Directional selectivity in ON RGC. Examples of ON-RGC with directional selectivity (up) or without directional selectivity (down) recorded from the control retinas (A) and retinas from the NMDA/KA injected eyes (B). Each cell example includes a radial plot of the spike rate response to motion in eight directions across ten to twelve repetitions and the spike waveform; the post stimulus time raster plot and cumulative recording line for the response to the preferred (above) and the null (below) directions are also shown (B). The red arrow indicates the preferred direction calculated as the vector sum of the response. Directionally selective RGC from the NMDA/KA injected retinas show an apparently broader response in the preferred direction than RGC from the control retinas.
Pattern Electroretinography
A pattern electroretinogram (pERG) allows for studying the functionality of the ganglion cell population by analyzing the response of the whole retina to changes of contrast through checkerboard stimulation (Figure 7A). The effect of the different NMDA/KA doses on the wave amplitudes was studied at four spatial frequencies (0.08, 0.12, 0.17, and 0.31 cpd). pERG recordings were obtained from the same group of animals before and one week after the injection of the excitotoxic mixture (1:0.3 mM, n = 7; 3:1 mM, n = 7; 10:3 mM, n = 4; NMDA/KA) into the right eye. Three characteristic waves (N35, P50, and N95) were recognized in control/preinjection recordings at the four different spatial frequencies tested. However, one week after the injection of 3:1 mM NMDA/KA, the component N35 was not clearly identified, and P50 and N95 showed an increased latency and decreased amplitude compared with the preinjection recording, reflecting the functional damage of the ganglion cell population (Figure 7B). No statistically significant differences in the N35, P50, or N95 wave amplitudes were observed among the different spatial frequencies in the control experiment. Significant differences between the left eye (PBS injected) and right eye (NMDA/KA injected) were observed when using excitotoxic concentrations above 3:1 mM NMDA/KA. The injection of 1:0.3 mM NMDA/KA did not induce any significant reduction in the N95 wave component amplitude for any spatial frequency (Figure 7C, left, two-way ANOVA, p = 0.2561), confirming that the functionality of the ganglion cell population was not significantly affected by this dose. However, in the 3:1 mM NMDA/KA injected eye, the decrease in the N95 wave component was statistically significant compared with the control eye for any spatial frequency (Figure 7C, middle, two-way ANOVA, p < 0.0001). A reduction of ca. 70% for the N95 wave amplitude was observed. Bonferroni post-test analyses showed the most significant differences for spatial frequencies of 0.08 and 0.12 cpd (p < 0.001 and p < 0.01, respectively).
Likewise, the injection of 10:3 mM NMDA/KA induced a statistically significant decrease in the N95 wave component amplitude between the injected and control eyes for any spatial frequency (Figure 7C, right, two-way ANOVA, p = 0.0197).
Optomotor Test
To test the visual behavior after the injection of NMDA/KA, an optomotor test was carried out on the same animals in which the pERG was performed. Through this test, the mice's eye and head movements were recorded when the animals followed, with their gaze, the moving vertical bars presented on the screens (Figure 8A). Different spatial frequencies (0.011, 0.022, 0.044, 0.088, 0.177, and 0.355 cpd) and contrasts (100, 50, 25, 10, and 5%) were explored. Gradual contrast (white to black or black to white) of the bars moving in both directions (clockwise and anticlockwise) allowed us to selectively test the function of the ON and OFF retinal pathways of the left (PBS injected) and the right (NMDA/KA injected) eyes (Figure 8B). The highest contrast sensitivity of the animals was observed for a spatial frequency of 0.088 cpd. Any higher or lower frequency showed a decrease in contrast sensitivity. Comparisons between the white to black gradients and the black to white gradients, perceived by the ON or OFF retinal pathways, did not show statistically significant differences in the control animals (two-way ANOVA, p = 0.4125).
The optomotor test showed a clear correlation between the doses of injected excitotoxic agents and visual sensitivity. Injection of the lowest dose of NMDA/KA (1:0.3 mM) into the right eye did not affect the animals' ability to detect moving bars in both clockwise and counterclockwise directions (Figure 8C, left). Specific stimulation of the ON and OFF pathways showed a similar sensitivity for each spatial frequency. Comparisons between the white to black gradients and the black to white gradients, perceived by the ON or OFF retinal pathways, did not show statistically significant differences in the 1:0.3 mM NMDA/KA injected retinas (two-way ANOVA, p = 0.2645). On the other hand, injection of the highest dose of NMDA/KA (10:3 mM) into the right eye prevented the animals from detecting counterclockwise bar displacement, while they were still able to identify clockwise bar movement, controlled by the PBS injected eye (Figure 8C, right). These results indicate that the excitotoxic agent injection induced visual deficiencies just in the preferred direction, as perceived by the retina of the damaged eye. The injection of 3:1 mM NMDA/KA into the right eyes did not affect the ability of the animals to detect moving bars in a clockwise direction, as the left retina was not damaged. However, it affected the detection of moving bars in a counterclockwise direction: they were not detected when black to white gradual bars were applied (stimulation of the OFF retinal pathway), while they were still detected when white to black gradual bars were used (stimulation of the ON retinal pathway) at 0.044, 0.088, and 0.177 cpd (Figure 8C, middle). Comparisons of the white to black gradients perceived by the ON retinal pathway across the spatial contrasts showed statistically significant differences in 3:1 mM NMDA/KA vs. PBS injected retinas (two-way ANOVA, p = 0.0192). Altogether, these results indicate a stronger damaging effect of the excitotoxic agents on the OFF retinal pathway than on the ON retinal pathway, as supported by the electrophysiological recordings and the histological analysis.
Discussion
The retinal excitotoxicity caused by the intraocular injection of NMDA and KA is capable of inducing the death of ganglion cells, amacrine cells, and bipolar cells, as these cells express the corresponding ionotropic glutamate receptors [24][25][26]. Likewise, the action of glutamatergic agonists produces massive disorganization of the inner retina, even affecting the outer retina. All of these effects cause an alteration of retinal functionality that translates into a decrease in visual acuity and difficulty in detecting stimuli in motion. Our results show that the histological changes that the retina undergoes as a result of glutamatergic overstimulation are accompanied by a progressive loss of retinal functionality, which is reflected by the alteration of the electrophysiological properties of the ganglion cells and therefore of the information that they contribute to visual centers. Although previous works induced excitotoxicity by activation of NMDA or KA receptors in the retinal cells, given that both receptors are naturally activated by the physiological neurotransmitter glutamate, it seems logical to think that NMDA/KA coactivation must be the pathophysiological mechanism causing an excitotoxic effect on the retinal neurons during different nosological processes (ischemia, axonal compression, metabolic diseases, glaucoma, etc.). In a previous study [12], we demonstrated the deleterious effect of the joint intraocular application of both glutamatergic agonists.
While the intraocular injection of NMDA alone needs a dose of 100 mM to produce a maximum lethal effect on ganglion cells, and a dose of 5 mM KA induces the death of ca. 50% of ganglion cells, and our present work shows that a concentration of 3 mM KA and 10 mM NMDA, when applied together, induces a much bigger effect on ganglion cell death than a single application. After joint treatment, only 20% of ganglion cells could be recorded in NMDA/KA injected eyes when compared with the control eyes. These data agree with some studies carried out in cultured retinal neurons, which are not affected by the stimulation of NMDA receptors alone, but are sensitive to the stimulation of non-NMDA receptors [27]. More specifically, it has been shown that KA is toxic to ganglion cells, but its effect is greater when injected together with NMDA. As the stimulation of the KA receptors achieves cellular depolarization and, therefore, sensitization of NMDA receptors-now free from the blockade by Mg-when activated together they induce a massive influx of Ca2 + and the consequent cell death. In the present work, we used different doses of both glutamatergic agonists, injected intraocularly together, to try to elucidate the different sensitivities of different retinal cell types to increasing doses of NMDA and KA, and their impact on some visual functions. The excitotoxicity caused by the joint intraocular inoculation of NMDA and KA achieves the simultaneous activation of NMDA and non-NMDA receptors, which causes the destructuring of the inner plexiform layer and the loss of cells in the inner retina [12]. The present work shows how increasing doses of NMDA and KA induce the death of bipolar, ganglion, and amacrine cells, leading to huge structural changes of the inner retina. However, it also shows that the damaging effects are manifested in the outer retina, as evidenced by the fact that growing cell extensions appear to be emerging from horizontal cells, accompanied by the processes of some bipolar cells. Although it is not clear why this effect occurs, it could be that the injury suffered by ganglion and amacrine cells in the IPL causes the disconnection of bipolar cells and the retraction of their dendritic tree at this level, and induces the outgrowth of horizontal and bipolar cell processes in the external retina seeking some kind of reconnection. In this sense, the appearance of growth shoots in the dendritic tree of ganglion cells has been observed after an injury to its axon [28,29]. The described structural alterations of the inner retina lead to a decrease in cellular functionality that is more intense in the OFF than in the ON pathway of visual processing. Undoubtedly, this is due to a greater loss of cells that integrate the OFF pathway. In this sense, a progressive decrease in the labeling of amacrine AII cells, which carry information from the rod pathway to the cone pathway, is also notable as the NMDA/KA dose increases. The damaging effect of excitotoxic agents on retinal cells also results in a decrease in the firing frequency of ganglion cells, both in their basal activity and in their response to light stimuli, because of the increased latency between shots. A reduction in the size of the receptive field has also been observed. Different types of ganglion cells could show differences in sensitivity to excitotoxic agents [16] related to the expression level of the ionotropic glutamate receptor [30]. 
Large, alpha-like ganglion cells show a low expression of calcium-permeable glutamate receptors. However, ON/OFF direction-selective ganglion cells, and OFF ganglion cells show higher levels of expression for these receptors, which causes a greater effect of excitotoxic agents and a greater mortality of the cells. These data can explain our functional results, in which we see greater damage, but not are exclusive to the cells of the OFF pathway. Our results show that after the injection of high doses of NMDA and KA, there is an absolute loss of the ON/OFF responses of the ganglion cells, as well as a significant decrease in the OFF responses. It cannot be ruled out that some of the ON cells that were registered after the effect of the excitotoxic agents are ON/OFF type cells that have lost the OFF response. Similarly, it is striking that even at these doses of excitotoxic agents, cells with a directional selectivity can still be seen. One explanation could be that the cells that maintain directional selectivity after the effect exerted by the excitotoxic agents are ganglion cells of the ON directional selective RGC type, large cells, with directional sensitivity and monostratified (sublamina ON), which respond preferentially to an slow movement of the stimulus in three directions of space [31][32][33][34], and which project preferentially on the superior colliculus or the medial terminal nucleus [35]. On the contrary, ganglion cells with directional selectivity of the ON/OFF type, (ON/OFF DS RGC) exhibit small and bistratified dendritic trees, are able to respond preferentially to rapid visual movement in four directions, and innervate the superior colliculus [36] or nuclei adjacent to the accessory optic system [37,38]. Therefore, it seems that there is a less harmful effect of NMDA and KA on the ON DS RGC compared with the ON-OFF DS RGC. In parallel with the decrease in cells with a directional selectivity, excitotoxic agents also affect ganglion cells without directional selectivity. After the administration of NMDA and KA, the detection of movement at certain spatial frequencies is usually the first function to be affected, as it depends on the bar stimulus that travels through the receptor fields of the neighboring ganglion cells. The loss of directional selectivity observed in the functional tests correlated with the histological disappearance observed by the immunohistochemical staining of cells that participate in directional selectivity, such as starburst amacrine cells. To assess the perception of visual contrast, pERG experiments were performed one week after the injection of NMDA and KA. Analysis of the pERG responses showed a decrease in amplitude (more intense in the lower frequencies (0.088, 0.120 cpd) and an increase in latency for the P50 and N95 waves, as had been suggested [39]. A parallelism was observed between the decrease in the amplitude of the pERG and the loss of ganglion cells. Although the origin of the pERG components is not exactly known, it is estimated that N95 is generated in ganglion cells. As the luminance in the pERG stimulus does not vary between stimuli, the pathways of activation (ON) and deactivation (OFF) by light are stimulated equally [40]. Therefore, to detect differences between these two pathways, we found it convenient to carry out another type of test: the optomotor test. To date, the optomotor response has been widely used to assess visual acuity, contrast threshold, and sensitivity to movement in laboratory animals [23,41,42]. 
By combining pERG measurements with the optomotor response, it is possible to assess the specific contribution of retinal cells involved in functional loss [23,43]. The decrease in response to anti-clockwise displacement optomotor stimuli is due to the preference of each eye for the detection of a direction of rotation, the left eye being the most stimulated by a light stimulus that moves clockwise, and the right eye more stimulated by stimuli moving counterclockwise. The use of degraded stimuli allowed us to observe a greater sensitivity of the OFF pathway to excitotoxic agents, as at intermediate doses of NMDA and KA, the OFF pathway of the damaged eye is completely affected, while the ON pathway is affected to a lesser extent. In view of these data, it is reasonable to think that the differences in the survival of retinal neurons are related to the expression of the different types of glutamate receptors, either NMDA type or KA type. In our opinion, it would be convenient to characterize to the greatest extent and at the cellular level the receptors of each type that retinal cells can express, in order to seek a relationship between cell survival and the maintenance of visual function, as we have tried in this work. Three combinations of NMDA (6384-92-5, Sigma-Aldrich, Darmstadt, Germany) and KA (58002-62-3, Sigma-Aldrich, Darmstadt, Germany), in a concentration of 1:0.3/3:1/10:3 mM, seperately, were tested on the mice through a single intraocular dose of one microliter. The NMDA/KA solutions were injected into the right eye and one microliter of phosphate buffer saline (PBS) was injected into the left eye as a control. The intraocular injection was performed under a microdissection microscope with a cold light illumination source (Wild Heerbrugg, Intralux HE, Switzerland). One microliter-calibrated syringe (Nanofil Tm, World Precision Instruments, Sarasota, FL, USA) with a 35G needle (Nanofil Tm, NF35BV-2, World Precision Instruments) was used for the intraocular injection. In the immediate postoperative period, 2% Methocel (Ciba Vision AG, 8442 Hetlingen, Switzerland) was applied topically to the cornea to prevent corneal desiccation. All of the immunohistochemical procedures, electrophysiological recordings, and behavioral tests on the intraocularly injected animals were performed 7 days after injection. Dose Estimation of Excitotoxic Agents As the doses administered to induce retinal excitotoxicity vary within the literature [16,[44][45][46][47][48][49], a series of experiments were performed to determine the ideal dose for the joint administration of KA and NMDA in our model [12]. In the current experiments, the abovementioned doses were tried (NMDA/KA at 1:0.3/3:1/10:3 mM). In a series of electrophysiological experiments, only 10:3 mM NMDA/KA was chosen as a dose able to induce clear excitotoxicity. Immunohistochemistry Before enucleation, a small signal was made on the upper pole of the eye in order to ensure anatomical retinal orientation. After making an incision in the cornea, the whole eye was fixed in freshly made 4% (w/v) paraformaldehyde in 0.1 M PBS (pH 7.4) for 1 h at room temperature, and subsequently rinsed three times with PBS. Then, the anterior pole of the eye was removed, and the posterior pole was washed again in PBS. The eyeball was cryoprotected in growing concentrations of sucrose (10, 20, and 30%) diluted in 0.1M PBS (1 h for 10% and 20% and overnight for 30%). 
Afterward, the eyes were included in an appropriate medium for freezing (Optimal Cutting Temperature media, Sakura Finetek, CA 90501, USA) and cross sections of 14-µm thickness were made using a cryostat (Leica CM1900; Leica Microsystems, Wetzlar, Germany). The sections were mounted on Superfrost Plus glass slides (ThermoFisher Scientific, Rockford, USA) and were stored at −20 °C until they were used for immunohistochemistry. Four combinations of double immunostaining were performed as follows. A rabbit monoclonal anti-calbindin antibody (1:1000; CB-38a, Swant, Marly, Switzerland) and a mouse monoclonal anti-syt2b (1:50; ZNP-1, Zebrafish International Research Council, University of Oregon, Eugene, USA) were used to stain different cells in the inner retinal layers. Calbindin stains horizontal cells [50,51], some wide-field amacrine cells, and some large ganglion cells [52,53]. In rodents, the mouse monoclonal anti-syt2b recognizes cone bipolar cells (OFF type) of type 2 [54,55] and type 6 [55], especially at their axon terminals (presynaptic areas). In addition, these antibodies label the presynaptic areas of horizontal cells in the mouse retina [54]. The retinal sections were incubated with the primary antibodies overnight at room temperature in a humid chamber. The next day, after three PB rinses, the samples were incubated for 1 h at RT in darkness with their matching combination of secondary antibodies at a 1:100 dilution in PB + 0.5% Triton X-100. The secondary antibodies used were donkey anti-mouse conjugated to Alexa-488 (A21202), donkey anti-mouse conjugated to Alexa-555 (A31570), donkey anti-rabbit conjugated to Alexa-488 (A21206), donkey anti-rabbit conjugated to Alexa-555 (A31572), donkey anti-goat conjugated to Alexa-488 (A11055), and donkey anti-sheep (A11015; all from ThermoFisher Scientific). Then, after three PB washes, the slides were cover-slipped with an anti-fading mounting medium (Citifluor Ltd., London, UK) and sealed with nail polish. Immunohistochemistry negative controls were conducted in parallel, omitting the primary antibody. The samples were observed in a Leica TCS SP2 confocal microscope (Leica, Wetzlar, Germany). To image the studied cells, we used 40× and 63× oil immersion lenses, and Z projections of the maximum amplitude were taken. The same region of the temporal retina, ca. 500 µm away from the optic disk, was used to take the images shown in the corresponding figures.
Extracellular Multielectrode Recording
Retinal ganglion cells were extracellularly recorded from the isolated mouse retina using an array of 100 electrodes that were 1.5 mm long (inter-electrode distance = 400 µm), as described previously [61,62]. After euthanasia, the mice's eyes were enucleated, eyeballs were hemisected, and the cornea and lens were removed under dim red illumination. Subsequently, the retinas with the pigment epithelium were carefully collected from the eyecup, mounted on a glass slide ganglion cell side up, and covered with a millipore filter. This preparation was mounted in a recording chamber; perfused with warm (36–37 °C) Ringer medium containing 124 mM NaCl, 26 mM NaHCO3, 22 mM glucose, 2.5 mM KCl, 2 mM MgCl2, 2 mM CaCl2, and 1.25 mM NaH2PO4; and dark-adapted for 30 min. A 16-bit ACER TFT monitor with a resolution of 1280 × 1024 pixels at a 60 Hz vertical refresh rate was used for the visual stimulation. Specifically, an area of 800 × 800 pixels was used for the visual stimulation.
The pictures drawn on this area were projected through a beam splitter and optical lenses to be focused onto the photoreceptor layer. The retinas were flashed (full-field white) periodically while the electrode array was lowered slowly into the retina with the help of a Leica micromanipulator. The electrode positioning ended when a significant number of electrodes detected light-evoked single- or multi-unit responses. The retinal recordings began with a series of flashes of different intensities and moving bars of various spatial frequencies at different temporal frequencies. All of the visual stimuli were programmed in Python using Vision Egg, an open-source library for real-time visual stimulus generation [63]. The electrode array was connected to a 100-channel amplifier (frequencies of 250 to 7500 Hz) and a digital signal processor-based data acquisition system. All of the selected channels of data, as well as the state of the visual stimulus, were digitized with a resolution of 16 bits at a sampling rate of 30 kHz with a commercial multiplexed A/D board data acquisition system (Bionic Eye Technologies, Inc., New York, USA), and were stored digitally. Neural spikes were detected by comparing the electrode signals to specific level thresholds set for each data channel through standard procedures previously reported [64]. The supra-threshold events recorded were analyzed offline. The classification of single units was performed through a principal component analysis (PCA) method, as previously described [61,64]. Later, the assignment of every individual wave to a given cell was confirmed by analyzing the corresponding spike trains. The action potentials of each sorted unit were labeled by timestamps to generate inter-spike interval (ISI) histograms, peristimulus time histograms, and peristimulus spike plots. The following types of stimuli were used for testing RGC functionality.
Flash stimuli: a period of 700 ms of light at maximum intensity, followed by another of 2300 ms of darkness; the entire process was repeated 30 times. This stimulus allowed us to determine the functional cell type (ON, OFF, or ON/OFF) and the latency value. The last second of darkness prior to each light period was used to determine the basal frequency of each cell.
Light intensity: a period of 700 ms of light, with 11 randomly presented lighting intensities, followed by another of 2300 ms of darkness; the whole process was repeated 12 times. The sensitivity of each cell was determined by analyzing the response of the ON cells to this stimulus.
Displacing light bars: a white bar at maximum light intensity (length of 250 µm and 0.5 Hz) crossed the screen in eight different directions to determine the receptive field size and position, as well as the preferred direction of each cell. The explored directions were 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°. The response obtained at 0° or 180° (column vector) was multiplied by the response obtained at 90° or 270° (row vector) so as to calculate the size of the receptive field. This allowed for creating a matrix, represented as an image through the ImageJ program, which revealed the receptive field of the cell and allowed its measurement. To determine the preferred direction, the sum vector was calculated based on the response to all directions. Once the angle of that vector was known (preferred angle), the response to the opposite (null) angle was subtracted from the preferred response and the difference was normalized by their sum ((pref − null)/(pref + null)).
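The receptive-field and directional-index computations just described can be summarized in a short numerical sketch. This is only an illustration of the stated procedure (outer product of the orthogonal bar responses for the receptive field; vector sum for the preferred direction; (pref − null)/(pref + null) for the index), not the authors' analysis code, and the response values are invented for the example.

```python
import numpy as np

# Mean spike counts to the moving bar in the eight tested directions (example values).
directions_deg = np.array([0, 45, 90, 135, 180, 225, 270, 315])
responses = np.array([5.0, 9.0, 22.0, 14.0, 6.0, 3.0, 2.0, 3.0])

# Preferred direction: angle of the vector sum of the responses.
angles = np.deg2rad(directions_deg)
vec = np.sum(responses * np.exp(1j * angles))
preferred_deg = np.rad2deg(np.angle(vec)) % 360

# Directional index: (pref - null) / (pref + null), between 0 and 1.
diff_pref = np.abs((directions_deg - preferred_deg + 180) % 360 - 180)
pref_idx = int(np.argmin(diff_pref))            # tested direction closest to the preferred angle
null_target = (preferred_deg + 180) % 360
diff_null = np.abs((directions_deg - null_target + 180) % 360 - 180)
null_idx = int(np.argmin(diff_null))            # tested direction closest to the null angle
pref, null = responses[pref_idx], responses[null_idx]
ds_index = (pref - null) / (pref + null)

# Receptive field map: outer product of hypothetical spike-count profiles along the
# vertical (0/180 deg) and horizontal (90/270 deg) bar sweeps.
col_profile = np.array([1.0, 3.0, 8.0, 3.0, 1.0])
row_profile = np.array([0.5, 2.0, 7.0, 2.5, 0.5])
receptive_field = np.outer(col_profile, row_profile)

print(f"preferred direction ~ {preferred_deg:.0f} deg, DS index = {ds_index:.2f}")
print("receptive field map shape:", receptive_field.shape)
```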
Thus, an index between 0 and 1 was obtained for each cell (directional index), with 1 indicating a complete preference for a particular direction and 0 indicating any direction preferred. All of these calculations are based on the work by Elstrott and colleagues [65]. Pattern Electroretinography Pattern electroretinography (pERG) was used to measure the central retinal response to a constant luminance checkerboard alternating black and white [66]. A total of 18 adult mice were used in these experiments. Three experimental groups were made depending on the dose. A single dose of 1/0.3, 3/1, or 10/3 mM NMDA/KA was administered to the first (n = 7), second (n = 7), and third groups (n = 4), respectively. The stimulation equipment (Roland Consult, Brandenburg, Germany) consisted of a stimulator and two screens. The animals were anesthetized following the protocol previously described. After checking its state of unconsciousness through the foot reflex, its vibrissae were cut so as to avoid interference in the registry. The animals were placed on a platform raised 15 cm high and at a distance of 25 cm from both screens. The temperature of the mice was kept constant at 37 • C using a closed-circuit water thermal blanket placed on the platform. A reference electrode was placed on the animal's tongue, a ground electrode (a needle) was placed to the base of the tail, and two gold band electrodes were placed on the corneas. A few drops of methylcellulose were added (2% Methocel, Omnivision, Neuhausen, Switzerland) to protect the cornea of animals and to improve conductivity. Impedance was measured using the recording software itself so that in all cases it was below 10 KΩ. The stimulation was performed using of checkerboards configured before registration. The spatial frequencies were 0.088, 0.12, 0.17, and 0.31 cycles per degree (cpd), a frequency of change of 1 Hz, and a contrast of 100%. The stimuli were presented on two screens (919Pz, AOC, Taipei, Taiwan), with a luminance of 80 cd/m 2 for white and 0.25 cd/m 2 for black. A total of 400 signals were averaged to obtain an optimal result. After registration, the animals were allowed to recover in an external cage on a thermal blanket to promote awakening. Optomotor Response The optomotor test is a non-invasive analysis of spatial visual acuity (or spatial frequency threshold; [23] that has proven to be efficient and reproducible [67][68][69]. The same 18 adult mice used in the pattern electroretinography were used in these experiments. A homemade Prusky style optomotor device was made [23] with four screens (FLATRON, LG, Seoul, South Korea) facing each other, forming a closed space. Inside, the awake mice were placed in the center, in a vertical transparent methacrylate cylinder. The cubicle was covered with an opaque table to prevent entering of external light, and a closed-circuit infrared camera (AVC-D5CE, SONY, Tokyo, Japan) was placed at the top of the enclosure formed by the screens, allowing the experimenter to observe the animals throughout the full experimental protocol. The four screens were connected to a computer from which the stimulus was configured. A sequence of vertical bars with a gradient from white to black and black to white were displaced on the screens to stimulate predominantly the ON and OFF pathways, respectively. 
Different spatial frequencies were used (0.011, 0.022, 0.044, 0.088, 0.177, and 0.35 cpd) and they were presented in both a clockwise and anti-clockwise sense to study the effect on the left and the right eyes, respectively. In addition, these frequencies were presented at different contrasts (100, 50, 25, 10, and 5%). The stimulus was presented for 20 s, in both directions, randomly, to make the study as objective as possible. The luminance inside the optomotor device was measured to set the maximum contrast, resulting in 120 cd/m 2 for white and 0.25 cd/m 2 for black. Statistical Analysis A statistical comparison of the mean in two groups was performed using the Student's t test for normal distributions, and the Mann-Whitney U test for non-parametric distributions. The proportions between the two groups were analyzed using Chi-square. The comparison of more than one variable in two groups was carried out using two-way ANOVA and Bonferroni post hoc. They were performed using GraphPad Prism version 5.00 for Windows (www.graphpad.com, GraphPad Software, San Diego, CA, USA). Accessed May 2020.
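As an illustration of the comparisons described in the Statistical Analysis paragraph, the sketch below runs a two-way ANOVA (eye × spatial frequency) with a Bonferroni-style post-test in Python using statsmodels and SciPy. The analyses in the paper were performed in GraphPad Prism; this is only an equivalent-in-spirit example on fabricated amplitude values, and the variable names are ours.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
freqs = [0.08, 0.12, 0.17, 0.31]

# Fabricated N95 amplitudes (microvolts), seven animals per eye and frequency.
rows = []
for eye, mean_amp in [("control", 8.0), ("NMDA_KA", 3.0)]:
    for f in freqs:
        for amp in rng.normal(mean_amp, 1.0, size=7):
            rows.append({"eye": eye, "freq": f, "amplitude": amp})
df = pd.DataFrame(rows)

# Two-way ANOVA: eye, spatial frequency, and their interaction.
model = ols("amplitude ~ C(eye) * C(freq)", data=df).fit()
print(anova_lm(model, typ=2))

# Bonferroni-corrected post tests: eye effect at each spatial frequency.
n_tests = len(freqs)
for f in freqs:
    a = df.query("eye == 'control' and freq == @f")["amplitude"]
    b = df.query("eye == 'NMDA_KA' and freq == @f")["amplitude"]
    t, p = stats.ttest_ind(a, b)
    print(f"{f} cpd: t = {t:.2f}, Bonferroni-corrected p = {min(p * n_tests, 1.0):.4f}")
```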
2021-06-27T05:23:26.956Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "95497f509adc3abb82c8ac4e8d7a8f0f4a92134f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/12/6245/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "95497f509adc3abb82c8ac4e8d7a8f0f4a92134f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
38065124
pes2o/s2orc
v3-fos-license
Correlation between Cognitive Functions , Fatigue , Depression and Disability Status in a Cohort of Multiple Sclerosis Patients Objective: To investigate the relationship between depression, fatigue, disability and cognitive skills of patients with multiple sclerosis in a cohort of patients with multiple sclerosis in a single center in Tehran, Iran. Methods: One hundred and forty-seven patients with multiple sclerosis with mean age of 33 years, mean disease duration of 20.20 months, mean EDSS of 2.13, and F to M ratio of 76.5% over 23% were recruited for the purpose of this study. Cognitive function was compared with healthy control subjects (n = 100). Depression was measured by Beck Depression Inventory (BDI), fatigue was assessed using Fatigue Severity Scale (FSS) and Modified Fatigue Impact Scale (MFIS), disability was evaluated by Expanded Disability Status Scale (EDSS), and cognitive function was assessed by Brief Repeatable Battery of Neuropsychological tests (BRB-N). All data were analysed using Pearson correlation. Results: Age and disability level generally correlated negatively and significantly with task performance, whereas a higher level of education was associated with better task performance. While the correlation between BDI, FSS, and MIFS was significantly positive, BDI was negatively correlated with the two subscales of BRB, namely PASAT and WLG. Higher levels of depression in patients with MS are associated with lower cognitive performance in tasks requiring higher-order working memory (WM) processes. FSS showed the Corresponding author. Introduction Fatigue, depression, cognitive problem, and disability usually complicate the course of multiple sclerosis.Fatigue has been identified in a broad range of neuroimmunological disorders like multiple sclerosis (MS) [1].Fatigue is considered an important symptom in MS patients because it affects patients' social lives, occupation, overall quality of life, and mood.Between 60% and 90% of all MS patients occasionally complain of overpowering fatigue [2].Several areas of the CNS (e.g. the premotor cortex, the limbic system, the basal ganglia, and the brainstem) are believed to be involved in the pathophysiology of MS fatigue [1].According to fMRI fatigue may be related to impaired interactions between functionally related cortical and subcortical areas [3]. Moreover, a considerable incidence and prevalence of psychological and psychiatric symptoms in patients with multiple sclerosis, compared to individuals with similar degrees of disability, has been reported in the literature [4]- [8].Depression is by far the most common psychological disturbance in MS, though other mood disorders can occur.Several studies have reported high rates of depressi symptoms in MS patients compared to controls with other chronic neurological conditions, with an overall lifetime frequency of major depression reaching 50% [9]- [12] and an annual prevalence around 20% [13].In spite of depression, neuropsychological deficits occur in about 40% -65% of individuals with MS [14].These symptoms often impair quality of life and social participation [15].Neuropsychological impairment, known as "soft symptom", is usually manifested in the following areas of cognition: memory, attention, information processing, abstract reasoning, and visuospatial skills, while primary language skills, immediate and implicit memory, and verbal intelligence appear to be unaffected [16]. 
Considering all the above facts, some of these symptoms can influence and even exacerbate each other.For example, fatigue or depression can influence cognitive performance of a patient in neurocognitive tests due to their psychomotor retardation and other cognitive outcomes.In this study we aim to investigate the relationship between fatigue, depression, cognitive functions, and disability status according to EDSS in MS patients. Participants One hundred and forty-seven consecutive MS patients (female = 112 [76.19%], male = 35 [23.81%]; mean age = 33 years) followed up at Mostafa MS research center in Tehran, Iran, were enrolled in this study.After obtaining a written informed consent, patients were asked to take part in a number of assessments (see below) and then a neurpsycological test (BRB-N) was conducted by a psychologist. Measurements Expanded Disability Status Scale (EDSS).Physical disability of the patients was scored using EDSS. Beck Depression Inventory-Second Edition (BDI-II).The BDI-II is a 21-item self-report measure designed to assess DSM-IV depressive symptomatology in adolescents and adults.It is a revised version of the amended DI [1].Respondents are asked to rate each of the depression symptoms, ranging from 0 (not present) to 3 (severe), in terms of how they have been feeling during the past two weeks, recording the date of completing the questionnaire.The BDI-II is designed to provide a single overall score that can range from 0 to 63.The following cut-score guidelines are suggested for patients diagnosed with major depression: minimal (0 -13), mild (14 -19), moderate (20 -28), and severe (29 -63).Authors have reported convergent validity (e.g., r = 0.93 with the BDI-IA, r = 0.71 with the Hamilton Psychiatric Rating Scale for Depression), and excellent internal consistency (α = 0.91 among psychiatric outpatients, α = 0.93 among undergraduate students) [2]. Fatigue Severity Scale (FSS).The primary measurement of subjective fatigue in this study was done by FSS.This is one of the best known and most used fatigue scales.The FSS principally measures the impact of fatigue on specific types of functioning rather than the intensity of fatigue-related symptoms [3].It has high internal consistency, has good test-retest reliability and is sensitive to change with time and after treatment.It also has good concurrent validity and is able to distinguish patients with different diagnoses (between systemic lupus erythematosus (SLE) and MS, and between CFS, MS, and primary depression) [3].This scale shows high sensitivity, reliability and internal consistency in the assessment of fatigue.The internal consistency was found to be highly satisfactory (Cronbach's alpha = 0.96 in patients and 0.88 in controls) [4].The FFS is a nine-item selfreport scale developed for use among patients with chronic illnesses.In this questionnaire, people have to rate their agreement (range 1 -7) with nine statements concerning the severity, frequency and impact of fatigue on their daily life style.Scores can range between 9 (no fatigue) and 63 (maximum fatigue).This scale was chosen for this study because it is a short questionnaire, and hence a convenient method for the patients, which provides a simple unitary measure of global fatigue severity. 
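Since the BDI-II cut-scores and the FSS scoring rule quoted above are simple arithmetic, a small sketch can make them explicit. The category boundaries and score ranges come from the text; the function names are, of course, our own and not part of either instrument.

```python
def bdi_ii_category(total: int) -> str:
    """Map a BDI-II total (0-63) to the severity bands quoted in the text."""
    if not 0 <= total <= 63:
        raise ValueError("BDI-II totals range from 0 to 63")
    if total <= 13:
        return "minimal"
    if total <= 19:
        return "mild"
    if total <= 28:
        return "moderate"
    return "severe"


def fss_total(item_ratings):
    """Sum the nine FSS items (each rated 1-7); totals range from 9 to 63."""
    if len(item_ratings) != 9 or not all(1 <= r <= 7 for r in item_ratings):
        raise ValueError("FSS expects nine ratings between 1 and 7")
    return sum(item_ratings)


print(bdi_ii_category(22))                       # -> "moderate"
print(fss_total([4, 5, 6, 5, 4, 6, 5, 5, 6]))    # -> 46
```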
Modified Fatigue Impact Scale (MFIS). The MFIS is a component of the Multiple Sclerosis Quality of Life Inventory and evaluates the impact of fatigue on physical, cognitive and psychosocial functioning [5]. It was developed to assess the perceived impact of fatigue on a variety of daily activities. These scores are designed to measure the disability associated with fatigue (the extent to which fatigue limits activities), not the severity of symptoms [6]. Patients are asked to rate on a Likert scale (range 0 - 4) how often they have experienced 21 problems due to fatigue during the last month. The MFIS total score ranges from 0 to 20, with the following ranges reflecting how often the person is limited in activities by fatigue: 0 - 5 (never), 6 - 9 (rarely), 10 - 14 (sometimes), 15 - 19 (often) and 20 (almost always).

Brief Repeatable Battery of Neuropsychological Tests (BRB-N). The BRB-N is used as a research tool for evaluating short-term changes in cognitive function in patients with MS and was designed to be brief [7]. The battery consists of five subtests, as follows:

Selective Reminding Test (SRT). The SRT is a measure of verbal learning and delayed recall of a 12-word list and uses six consecutive learning trials and a delayed trial [7]. The Long-Term Storage (LTS) score represents the sum of words recalled on two consecutive trials without reminding; the total sum of the words in LTS over all six trials is recorded (SRT-LTS). The Consistent Long-Term Retrieval (CLTR) score is the sum of words recalled on all the subsequent trials without reminding; the total sum of the words in CLTR over all six trials is taken (SRT-CLTR). The Total Delay score is the number of words recalled after a delay of 10 minutes [8].

Spatial Recall Test. The 10/36 SPART measures visuospatial learning and memory [8]. It requires subjects to recall the placement of 10 checkers that are randomly placed on a checkerboard. One score is the sum of correct responses in the three immediate recall trials (10/36 SRT). The other score is delayed recall after 15 minutes (10/36 SRT Delay).

Symbol Digit Modalities Test. The SDMT investigates sustained and complex attention, information processing speed and working memory [9]. It presents a series of nine symbols, each paired with a single digit labeled 1 - 9, in a key at the top of a sheet. Within 90 seconds, the subject substitutes as many symbols as possible with the corresponding number and responds verbally. The score is the number of correct substitutions.

Paced Auditory Serial Addition Test. The PASAT requires cognitive abilities such as mental calculation, interference suppression, and information-processing speed. Subjects must be able to rapidly refresh WM content and resist interference from a previous response. The subject is instructed to add 60 pairs of digits such that each number is added to the one that immediately precedes it, and to report the outcome verbally. The digits are presented by tape, first at a rate of one digit every 3 seconds and then, in a second trial, one digit every 2 seconds. The score is the number of correct responses per trial (PASAT_3, PASAT_2) [8].

Word List Generation. The WLG explores verbal fluency on a semantic stimulus by asking the subject to produce as many words as possible belonging to a semantic category (vegetables and fruits in version A, animals in version B) within 90 seconds [10]. The score is the number of correct words.
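The PASAT scoring rule described above (each presented digit is added to the digit immediately preceding it, and correct oral responses are counted) can be written out as a short sketch. The digit sequence and responses below are made-up examples, and recording omitted responses as None is an assumption about how such data might be coded, not a statement about the study's own procedure.

```python
from typing import Optional, Sequence


def pasat_score(digits: Sequence[int], responses: Sequence[Optional[int]]) -> int:
    """Count correct PASAT responses.

    digits    : the presented digit series (e.g. 61 digits give 60 pairs).
    responses : the subject's answer after each digit from the second onward;
                None marks an omitted response.
    """
    expected = [a + b for a, b in zip(digits, digits[1:])]
    if len(responses) != len(expected):
        raise ValueError("one response is expected per presented pair")
    return sum(1 for r, e in zip(responses, expected) if r == e)


# Toy example: digits 3, 7, 2, 9 -> expected sums 10, 9, 11
print(pasat_score([3, 7, 2, 9], [10, 9, 12]))   # -> 2 correct responses
```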
Statistical Analysis

All results were analyzed using the software package SPSS for Windows, Standard Version 18. The relationships between depression, fatigue, expanded disability, and cognitive performance in patients with MS were analyzed by Pearson correlation.

Results

Descriptive statistics and correlations for each of the research variables are given in Table 1. As the results show, BDI has a statistically significant positive correlation with FSS and MFIS, but is negatively correlated with two subscales of the BRB (PASAT and WLG). This means that higher levels of depression in patients with MS are associated with higher levels of fatigue and poorer functioning in concentration and semantic retrieval. FSS is negatively associated with all subscales of the BRB (except CLTR) and, conversely, displayed a significant positive correlation with MFIS. Compared with the BDI, the FSS was more closely related to cognitive disturbances (such as verbal learning, delayed recall, and visuospatial learning and delayed recall).

The MFIS, the other scale measuring fatigue among the patients, correlated with EDSS, PASAT, 10/36 SRT, WLG, and the Total Delay score of the Selective Reminding Test. Most relationships between EDSS and the subscales of the BRB (except PASAT-3 and WLG) were again significant and negative. These results confirm that higher expanded disability is associated with poorer cognitive function.

According to the results, the Brief Repeatable Battery proved to be internally reliable in this Iranian sample (α = 0.70). As the correlations in Table 1 show, the subscales of the BRB display a desirable pattern of correlations, since they are associated with each other.

Discussion

In this study, there was a significant correlation between depression, fatigue and cognitive dysfunction in MS patients using different assessment tools. The associations found between the studied variables may imply that there is a causal factor responsible for these events. In line with previous research, the present study showed that EDSS and BDI are significantly disturbed in patients with MS [11] [12]. However, the mean BDI score, and therefore the rate of depression, found in this study was higher than that observed in similar studies.

At baseline, the evolution of depression in MS patients has been shown to be independent of disability. Moreover, depression appeared to be endogenous and predictive of a poor EDSS score, suggesting that depression could be an early predictive factor for the progression of disability. These concepts are supported in the present study: a higher BDI score was observed with more fatigue symptoms and poorer EDSS scores, although we did not follow up the patients to see whether an increased BDI score is accompanied by a worsening EDSS score [13].
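The Pearson correlation analysis described in the Statistical Analysis section can be reproduced on any comparable data set with standard open-source tools instead of SPSS. The sketch below uses pandas and scipy; the CSV file name and the column names are assumptions for illustration only and do not correspond to the study's actual data files.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file and column names; the study itself used SPSS 18.
df = pd.read_csv("ms_cohort.csv")

pairs = [("BDI", "FSS"), ("BDI", "PASAT_3"), ("EDSS", "SDMT")]
for x, y in pairs:
    valid = df[[x, y]].dropna()          # pairwise deletion of missing values
    r, p = pearsonr(valid[x], valid[y])  # Pearson's r and its two-sided p-value
    print(f"{x} vs {y}: r = {r:.2f}, p = {p:.3f}")
```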
Little is known about pathophysiology of fatigue and psychological dysfunction in MS.Many factors are said to influence fatigue in MS, such as medications, sleep disorders, body temperature, and depression.However, even after correction for common causes, the association between fatigue and depression persists [14].Also, this phenomenon has been related to MS lesions and neurological involvements, albeit not supported with enough evidence [15].Meanwhile, the current findings support the theory that inflammation and immune deregulation can influence neurotransmitter metabolism, neuroendocrine function, synaptic plasticity, and growth factor production, thus altering neural circuitry and contributing to depressive symptomatology [16].On this basis, the presence of such significant correlation between different factors including fatigue, depression, and psychological status may support the universal nature of neurocognitive involvement in MS. Moreover, the quality of life in MS patients has been found to be associated with physical disability, disease-related fatigue, and depression.Furthermore, the impact of fatigue and depression on quality of life is independent of physical disability and the other factors associated with MS [17].Therefore, early recognition of fatigue and depression, and rapid intervention in order to correct these problems can certainly increase the quality of life in MS patients. This study has some limitations.First, it is a cross sectional study and does not evaluate each patient through time.Therefore, there is a probability that some patients might appear weaker or stronger in their measurements due to environmental situations.Secondly, it is a single center study and patients are not in the same stage of the disease as they were chosen consecutively.We are aware that recent exacerbation of the disease or occurrence of MS complications may influence our results reported here.The strength of this study is investigating different psychological variables in MS patients at the same time which can provide a good overview of this group of patients within a short time. Conclusion In conclusion, we believe that fatigue, depression and cognitive dysfunction may be various features of a broad neurological dysfunction related to MS which needs to be fully identified.Further studies are needed to establish the pathophysiology of fatigue and cognitive dysfunction in MS in order to provide enough information for developing treatment modalities. Table 1 . Mean, standard deviation, and correlation of main study variables.
Genetic Dissection of New Genotypes of Drumstick Tree (Moringa oleifera Lam.) Using Random Amplified Polymorphic DNA Marker The knowledge of genetic diversity of tree crop is very important for breeding and improvement program for the purpose of improving the yield and quality of its produce. Genetic diversity study and analysis of genetic relationship among 20 Moringa oleifera were carried out with the aid of twelve primers from, random amplified polymorphic DNA marker. The seeds of twenty M. oleifera genotypes from various origins were collected and germinated and raised in nursery before transplanting to the field at University Agricultural Park (TPU). Genetic diversity parameter, such as Shannon's information index and expected heterozygosity, revealed the presence of high genetic divergence with value of 1.80 and 0.13 for Malaysian population and 0.30 and 0.19 for the international population, respectively. Mean of Nei's gene diversity index for the two populations was estimated to be 0.20. In addition, a dendrogram constructed, using UPGMA cluster analysis based on Nei's genetic distance, grouped the twenty M. oleifera into five distinct clusters. The study revealed a great extent of variation which is essential for successful breeding and improvement program. From this study, M. oleifera genotypes of wide genetic origin, such as T-01, T-06, M-01, and M-02, are recommended to be used as parent in future breeding program. Introduction Drumstick tree (Moringa oleifera Lam.), a short to medium height tree with luxurious evergreen leaves, was said to have originated from Himalayan tract in northwestern part of India [1][2][3][4]. The tree has a true diploid chromosome 2n = 28 with a distinguished tripinnate leaves having yellow or white petiole streaks [5,6]. Moringa is potentially one of the planet's most valuable plants, at least in humanitarian terms [7] and has been regarded as a wonder tree due to its great economic importance and uses [3,7]. Its pods were reported to have a protein content ranging from 20 to 30%, with a high vitamin C content. The moringa seeds were found to exhibit the property of natural coagulants/flocculants, which allows for growing of the tree for the purpose of usage by water and sewage treatment plant to clear turbidity in drinking water and sludge in sewage [8]. Similarly, the nutritive value of this plant for animals has been documented by Mendieta-Araica et al. [1], who reported that moringa contains large amount of crude protein, iron, zinc, and high concentration of vitamins A, B, and C in its foliage sample which makes it a very good feed and fodder for animals to browse and graze upon [9]. With respect to oil quality, M. oleifera seed concentrate contains about 35-45% seed oil, having odourless and colourless physical properties [10]. The edible oil is highly nutritious and is extracted by boiling the seeds with water and collecting the oil from the surface of the water [9,11]. The seed oil has high concentration of oleic acid (>73%) coupled with low polyunsaturated fatty acid, which gives the oil an outstanding and remarkable oxidative stability properties. The suitability of M. oleifera seed oil as biodiesel feed source has been tested and recommended by Da Silva et al. [12], who reported that the oil could be used as pure biodiesel or petrodiesel mixture on engine after converting it to fatty acid methyl esters (FAME) through the process of transesterification in the presence of sodium hydroxide (NaOH) as catalyst. 
Moreover, despite the great economic importance of this plant in terms of nutritional, social, and environmental benefits, the genetic diversity pattern, genetic makeup, and agronomic requirements needed for successful breeding, improvement, domestication, and large-scale cultivation are yet to be established. This obstacle is an impediment to the successful production and commercialization of moringa and its related products [6]. Also, knowledge of the genetic diversity of a tree crop is very important for the rational planning of conventional and modern breeding and improvement programs aimed at improving the yield and quality of its produce [9,13]. In this regard, the use of molecular markers, such as inter-simple sequence repeats (ISSR), random amplified polymorphic DNA (RAPD), and simple sequence repeats (SSR), has gained popularity as a method for assessing the genetic diversity of tree and oil seed crops [14][15][16]. Molecular methods of genetic diversity study are a fast, efficient, reliable, and simple means of establishing genetic diversity patterns in plants [17]. RAPD, as one of the numerous molecular markers, has been reported to be a reliable, reproducible, cost-effective, fast, and less tedious marker, which is widely used in the fields of plant breeding and molecular genetics due to its outstanding qualities [18]. Therefore, this research work studies the genetic diversity of twenty new genotypes of Moringa oleifera from two populations, with the aim of characterizing the genetic diversity pattern in relation to their geographical origin and dissecting the germplasm as a means of initiating a breeding programme in the near future.

Plant Materials. These were made up of seeds of twenty new genotypes of M. oleifera collected from six different countries (Table 1). The moringa genotypes, prior to their collection, were found growing in the wild in their natural form. The countries of origin are the Virgin Islands (USA), Thailand, India, Tanzania, Taiwan, and Malaysia. The collection was principally made by the Asian Vegetable Research and Development Center (AVRDC), or World Vegetable Center, Taiwan (15 accessions classified as the international population), and the Institute of Tropical Agriculture, Universiti Putra Malaysia (5 accessions classified as the Malaysian population). The collected seeds were germinated and raised in the nursery of the Universiti Putra Malaysia Agricultural Park (TPU) for two months, exposed to the hardening process in the last ten days of the nursery period, and then transplanted out to the University's agricultural experimental farm in Puchong (02°59.035′ N, 101°38.913′ E), Selangor, Malaysia. Young and disease-free leaves of M. oleifera were collected for each of the genotypes during the early hours of the day; the leaf samples were wrapped in aluminum foil, labeled, and kept in the freezer at −10 °C.

RAPD Polymerase Chain Reaction Procedure. According to the manufacturer's instructions (Promega), 5 µL of 5X Green GoTaq Flexi Buffer, 3 µL of MgCl2 solution (25 mM), 0.5 µL of PCR nucleotide mix (10 mM each), 0.2 µL of primer (0.4 µM), and 1.0 U of Taq DNA polymerase were used for a 25 µL PCR reaction, including 1 µL of DNA template used directly after extraction [19]. In the RAPD analysis, the following conditions were used: initial denaturation at 94 °C for 1 min, followed by 45 cycles of denaturation at 94 °C for 1 min, annealing at 34 °C for 1.5 min, and extension at 72 °C for 2 min, with a final extension at 72 °C for 5 min [20].
The amplified PCR products were subjected to electrophoresis on a 3% (w/v) MetaPhor agarose gel at 75 V for 70 minutes. The gel was stained with ethidium bromide and visualized under ultraviolet (UV) light.

Band Scoring. The image of the gel, acquired in JPEG format, was imported into UVIdoc 99.02 for band scoring. The band sizes were estimated based on a DNA ladder (Promega Inc.). The absence and presence of a band were scored in a binary model of 0 and 1, respectively. Band scoring was carried out only on those bands that were clear and reproducible and larger than 50 bp. The data obtained at the end of the scoring were transferred to and saved in a Microsoft Excel sheet.

Data Analysis. Data from the twelve primers were analyzed to obtain information on the genetic diversity of the 20 moringa accessions (Table 2). Genetic similarity among the genotypes and principal component analysis (PCA) were calculated using NTSYS-pc 2.1. Cluster analysis was also carried out using the unweighted pair-group method with arithmetic average (UPGMA) based on Nei's genetic distance matrix, and a dendrogram was drawn to show the clustering pattern of the different genotypes using NTSYS-pc. The percentage polymorphism of the bands (PPB), effective number of alleles (ne), genetic diversity index (h), Shannon's information index (I), and Nei's gene diversity were calculated using the POPGEN 1.31 software. Analysis of molecular variance (AMOVA) was conducted using GenAlEx 6.5 to partition the variation present in the germplasm and, at the same time, test the variance components for the RAPD phenotypes.

Screening of Primers. A total of 24 RAPD primers were used to study the genetic diversity of the twenty genotypes of M. oleifera (Figure 1). Out of these primers, only 12 gave distinct, reproducible polymorphic bands. A total of 108 polymorphic fragments were generated by these 12 primers, with an average of 9.0 bands per primer.

Genetic Diversity within the Two Populations. The mean percentages of polymorphic loci in the two populations (international and Malaysian) were calculated to be 75.73 and 32.70, respectively (Table 3). The observed numbers of alleles in the international and Malaysian populations were 1.50 and 0.71, respectively, with 1.26 as the mean value for the effective number of alleles (ne: effective number of alleles [21]; I: Shannon's information index [22]; He: expected heterozygosity).

Furthermore, in order to know the source of genetic variation for these Moringa genotypes, the RAPD profiles were analyzed using AMOVA. This was aimed at partitioning all the sources of variation existing in the germplasm into two major groups. The result revealed that 95% of the total genetic variation occurred as a result of variation within the populations, while variation among the populations accounted for the remaining 5% of the total genetic variance (Table 4). Also, the genetic variance among the populations indicated by the result (Φst = 0.16) was significant at the 5% probability level when a permutation test was conducted.

Cluster Analysis. Cluster analysis based on Jaccard's genetic similarity coefficient showed a high level of genetic variation among the genotypes from the two populations. The similarity coefficient ranged from 0.38 to 0.89, with the T-11 and T-15 genotypes found to have the highest genetic similarity (0.89), while T-06 together with T-07 possessed the lowest similarity coefficient (Figure 2). In addition, a dendrogram was constructed using UPGMA cluster analysis to show the genetic relationship among the twenty genotypes from different geographical backgrounds.
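To make the POPGEN-style summary statistics concrete, the sketch below computes percentage polymorphic bands, Nei's gene diversity h = 1 − p² − q², and Shannon's information index I = −(p ln p + q ln q) per band from a 0/1 scoring matrix, then averages over loci. Treating band frequencies directly as allele frequencies (rather than applying the dominant-marker correction used by the actual software) is a simplifying assumption, and the small matrix is invented for illustration.

```python
import numpy as np

# Rows = genotypes, columns = RAPD bands; 1 = band present, 0 = absent (toy data).
bands = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
])

p = bands.mean(axis=0)   # band (phenotype) frequency per locus
q = 1.0 - p

polymorphic = (p > 0) & (p < 1)
ppb = 100.0 * polymorphic.mean()

# Nei's gene diversity and Shannon's index per locus, then the mean over loci.
h = 1.0 - p**2 - q**2
with np.errstate(divide="ignore", invalid="ignore"):
    i = -(np.where(p > 0, p * np.log(p), 0.0) +
          np.where(q > 0, q * np.log(q), 0.0))

print(f"polymorphic bands: {ppb:.1f}%")
print(f"mean h = {h.mean():.3f}, mean I = {i.mean():.3f}")
```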
From this dendrogram, the twenty genotypes were grouped into five major clusters at a coefficient of 0.63. Cluster III, with the highest number of members, had 14 genotypes, followed by cluster I (T-01 and T-03) and cluster IV (M-01 and M-04) with two genotypes each. Clusters II (T-07) and V (T-06) had one genotype each and were the least populated clusters (Table 5).

Discussion

Effective and efficient genotyping of any plant species through RAPD requires careful selection of suitable primer combinations in order to obtain detailed and informative results. The high level of genetic polymorphism detected by these markers is in agreement with the assumption that outcrossing plant species from natural populations have a higher level of genetic diversity than inbreeding plant species. This finding agrees with earlier reports on similar outcrossing plant species, such as Jatropha [23], and other oil plant species. The higher value of Shannon's information index (0.295) for the international population as compared to the Malaysian population (0.184) suggests that the members of this population are more diverse. This is also obvious from the way the accessions cluster together. Additionally, the high level of genetic differentiation in these two populations, as reflected by the genetic diversity parameters such as Shannon's information index, expected heterozygosity, percentage polymorphism, and others, points to the fact that there is wide variability in these populations of Moringa, and this is very important for successful crossing and improvement programs in the future. This observation follows a trend similar to the result of a genetic diversity study on 75 accessions from the Sudan and Guinea savanna zones of Nigeria, where six polymorphic RAPD primers gave a total of 42 polymorphic bands [9].

Furthermore, the interaction between various ecological and biological factors, such as genetic drift, gene flow, selection, and mating system, affects the genetic structure of any plant population [24]. The overall genetic variability and differentiation pattern observed in these M. oleifera populations are in agreement with those of other outcrossing plant species [14,25,26], with low but significant genetic differentiation among the populations. However, higher genetic differentiation and diversity were observed within the populations of M. oleifera, and this indicates relatively restricted variability among the populations, as expected. This pattern of population structure has been previously reported in other outcrossing plant species [27,28].

Moreover, the cluster analysis, showing a wide range of similarity coefficients, indicated a high level of genetic variation within the two populations. M. oleifera genotypes from the two populations were seen clustering together in the same groups. This shows that there is no distinct relationship between geographical origin and genetic distance, as shown in the dendrogram. This finding implies that the genetic divergence within and between these two populations cannot be explained by their geographical distance. It also means that isolation by distance cannot be said to have been responsible for the divergence observed in these populations [24].
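The clustering step described in the Data Analysis section can be sketched with SciPy: Jaccard distances between the binary band profiles, UPGMA (average linkage) on that distance matrix, and a cut of the tree to recover groups. The toy matrix and the cut height are illustrative assumptions; the study itself used NTSYS-pc with Nei's genetic distance.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Same kind of 0/1 band matrix as above (toy data, one row per genotype).
bands = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
])

dist = pdist(bands.astype(bool), metric="jaccard")    # 1 - Jaccard similarity
tree = linkage(dist, method="average")                # UPGMA linkage
groups = fcluster(tree, t=0.6, criterion="distance")  # cut at an assumed height
print(groups)                                         # e.g. [1, 1, 2, 2]
```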
In conclusion, these findings show that genetic divergence is very high in these populations, and it can therefore be inferred from the data that the studied Moringa populations will be good germplasm material for the future breeding and improvement of this economically important tree crop. Genotypes that are genetically distant based on their similarity coefficients (such as T-01, T-06, M-01, and M-02) should be selected as parents in future breeding.
Self-Management Educational Program for Improving Asthmatic Older Adults' Behaviors Self-management can be used to live a more effective and efficient daily life. This study was designed to evaluate the effect of self-management educational program on improving the asthmatic older adults' behaviors. Design: A quasi experimental design was used. Setting: This study was conducted at chest outpatient clinics in Benha University Hospital and Benha Health Insurance Hospital. Sample: A purposive sample of 53 asthmatic older adults was chosen from total 229. Tool: A structured interviewing questionnaire to assess asthmatic older adults' socio-demographic characteristics, history of asthma, characteristics of asthma, and their knowledge related to asthma, also their selfmanagement behaviors to manage asthma. Results: Findings of this study revealed that less than three quarters of the studied subjects were females and had asthma duration more than 10 years, their knowledge and selfmanagement behaviors regarding asthma were improved post program with highly statistically significant differences between total knowledge and total selfmanagement behaviors pre and post program implementation (p<0.001). Conclusions: This study concluded that the asthma selfmanagement program is effective for improving older adults' selfmanagement behaviors regarding asthma. Recommendation: Continues of self-management program for asthmatic older adults focusing on management behaviors especially asthmatic triggers prevention, and asthma attack management. Also, the study recommended that, periodic refreshing course and training for nurses in chest outpatient clinics about chest diseases especially asthma is needed to take active role in educating asthmatic older adults how to manage and control asthma. Introduction Asthma is a major public health problem worldwide with wide differences in prevalence and severity throughout the world. Significant increases in the prevalence and the severity have been noticed globally over the past few decades in certain geographical regions [1]. Asthma is a common chronic inflammatory condition of the airways which presents as episodes of wheezing, breathlessness and chest tightness due to wide spread narrowing of the airways symptoms can be triggered by viral infections, exercise, air pollutants, tobacco smoke or specific allergens. Several factors that influence the prevalence of asthma include obesity, allergic rhinitis, genetic, family history, exposure to allergens at an early age, and smoking history [2], [ 3], [ 4], [ 5]. Asthma is characterized by periods of inactivity punctuated by acute flares. Exacerbations may lead to utilization of urgent care services such as emergency department visits and hospitalization, and occasionally death. Controlling asthma and preventing exacerbations requires meticulous attention to self-management, including avoidance of triggers, such as cigarette smoke and allergens, regular monitoring by a healthcare provider, and proper use of daily anti-inflammatory controller medications. Unfortunately, many patients fail to maintain adequate self-management behaviors [6]. Self-management is defined as the personal application of behavior change tactics that produces a desired change in behavior. The term self-control is also used to refer to this type of behavior change program. Self-management can be used to live a more effective and efficient daily life, break bad habits and acquire new ones, accomplish difficult tasks, and achieve personal goals. 
Learning and teaching selfmanagement skills have many advantages and benefits to the individual actually learning or implementing the skills, those teaching it, and others who may benefit from the individual's use of the skills [7]. Nurses play a vital role in asthma management. The public relies on nurses to be accessible, well informed and reliable. It is the nurse's duty to give correct and current information and remove barriers to care. Nurses have a responsibility to assess symptom control, safe medication use and correct any erroneous information [8]. Magnitude of the Problem Globally, asthma is one of the most common chronic diseases affecting 300 million people world-wide and by 2025, another 100 million will have been affected. It estimates about 250.000 deaths from asthma every year, mainly in low-and middle-income countries. Asthma occurs at high frequency in young and older adults [5], [9], [10]. Self-management is important for any patient with a chronic disease. Asthma self-management education is essential to the control of asthma. If asthma symptoms are controlled, the patient should have fewer exacerbations, a higher quality of life, lower costs, and slower progression of airway remodelling from inflammation, less morbidity, and lower risk of death from asthma. Education directed toward asthma self-management behaviors emphasizes patient participation in symptoms monitoring and control. The asthma educator should use a collaborative education that encourages the patient to take responsibility for his or her own care [11]. Asthmatic older adults should know how to prevent and manage these episodes. Optimal selfmanagement includes self-monitoring (symptoms or symptoms and peak flow), regular medical review [12]. Aim of the Study This study aimed to evaluate the effect of self-management educational program on improving the asthmatic older adults' behaviors through: 1) Assessing older adults' knowledge and selfmanagement behaviors needs regarding asthma. 2) Developing a self-management educational program according to asthmatic older adults' needs 3) Evaluating the improvement degree of asthmatic older adults' knowledge and self-management behaviors regarding asthma. Research Hypothesis Asthmatic older adults' knowledge and self-management behaviors regarding asthma will improve after implementation of educational program. Research Design A quasi experimental research design was used to carry out this study. Setting The study was conducted at Chest Outpatient Clinics at Benha University Hospital and Benha Health Insurance Hospital. Sample A purposive sample of all older adults who attended to the previously mentioned setting within three months, from beginning of December 2014 to the end of February 2015, accounting for a total number of 53 asthmatic older adults were chosen. 33 asthmatic older adults (out from120) from Chest Outpatient Clinic at Benha University Hospital and 20 asthmatic older adults (out from109) from Chest Outpatient Clinic at Benha Health Insurance Hospital. They were chosen according to the inclusion criteria; the older adults were already diagnosed as asthmatic patients, their age over 60 years and able to provide care for themselves, and free from other chronic diseases. Tool of Data Collection A structured interviewing questionnaire developed by the researchers after reviewing of related literature and experts opinions. 
It was written in simple Arabic language and composed of five parts:

Part I: Socio-demographic characteristics of the older adults, which included age, sex, level of education, marital status, occupation, and source of information.

Part II: Older adults' history of asthma, which included duration of asthma, additional allergic diseases, family history, and degree of kinship (if there is a family history).

Part III: Asthma attack characteristics, which included the signs and symptoms felt by the older adults, the average occurrence of attacks during a month, the time of attacks during the day, and the seasonal occurrence of attacks.

Part IV: Older adults' knowledge regarding asthma, such as its meaning, causes, signs and symptoms, complications, the impact of asthma on older adults' health, asthma triggering factors (nutritional, psychological, environmental, respiratory infection, activities or additional efforts, and house animals), and treatment.

Knowledge Scoring System

The knowledge score was divided into three levels: good knowledge, 75% or more; average knowledge, 50% to less than 75%; and poor knowledge, less than 50%.

Part V: Asthmatic older adults' self-management behaviors, assessed by asking questions on avoiding attack occurrence and on self-care during an asthma attack. It included:

A: Asthmatic older adults' self-management behaviors to protect themselves from attack occurrence, such as compliance with taking prescribed medication, avoiding exposure to air drafts, avoiding exposure to environmental triggering factors (such as dust), avoiding excessive muscular effort, avoiding certain kinds of food that lead to an asthma attack, performing breathing exercises, and following up regularly.

B: Older adults' self-management behaviors during attack occurrence, such as taking prescribed medication immediately, sitting in a good position during the attack (semi-sitting), taking warm fluids during the attack, providing complete relaxation and complete bed rest, taking medication again if asthma persists, and going to the hospital or doctor for treatment if it still persists.

Self-Management Behaviors Scoring System

The self-management behavior scores were divided into two levels: 60% or more was considered a satisfactory level, and less than 60% an unsatisfactory level.

Statistical Design

Collected data were categorized, coded, entered, analyzed and tabulated using the Statistical Package for Social Sciences (SPSS), version 18. The analyses carried out included descriptive statistics. The level of statistical significance was set at a p-value < 0.05.

Ethical Consideration

The researchers emphasized to the asthmatic older adults that the study was voluntary and anonymous. Each asthmatic older adult had the full right to refuse to participate in the study or to withdraw at any time without giving any reasons.

Pilot Study

A pilot study was carried out on 10% of asthmatic older adults attending the outpatient clinics at Benha University Hospital and Benha Health Insurance Hospital in order to test the applicability of the tools and the clarity of the included questions, as well as to estimate the average time needed to fill in the sheets. Those who took part in the pilot study were excluded from the main study sample.

Field Work

Official letters from the Faculty of Nursing, Benha University, were forwarded to the Director of each hospital. Each Director was informed about the time and date of data collection.
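The knowledge and self-management scoring rules described in the Measurements section above can be expressed as a small helper. The function names and the use of percentage inputs are illustrative assumptions; only the thresholds follow the levels stated in the text.

```python
def knowledge_level(percent: float) -> str:
    """Good >= 75%, average 50% to <75%, poor < 50% of the maximum knowledge score."""
    if percent >= 75:
        return "good"
    if percent >= 50:
        return "average"
    return "poor"


def behavior_level(percent: float) -> str:
    """Satisfactory >= 60% of the maximum behavior score, otherwise unsatisfactory."""
    return "satisfactory" if percent >= 60 else "unsatisfactory"


print(knowledge_level(68.0))   # -> "average"
print(behavior_level(55.0))    # -> "unsatisfactory"
```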
Each older adult was interviewed individually after explaining the purpose and method of the study and obtaining his/ her approval to participate in the study with confidentiality. Content validity of the tool was tested by a panel of five experts in Community Health Nursing field and corrections were done accordingly based on their responses. A pilot study was conducted on 10% of older adults, who were excluded from the main study sample, to test the applicability and clarity of the tool. The self -management educational program was developed based on review of related literature and assessment tool (pretest). Data were collected during the period from the beginning of December 2014 to the end of May 2015.Time plan was established and the older adults were organized into 10 groups (5-7 older adults). The program in a hospital's day started from 9 .00 a.m. to 12.30 p.m. Each older adult attended 5 sessions (2 sessions for knowledge and 3 sessions for selfmanagement behaviors). The duration of each session was 30-45 minutes according to the presented items. Each session was followed by a summary of the essential asthmatic items discussed. 1. A pre-program assessment tool using an interview for data collection during attending to the outpatient clinics. A review of current and past, local and International related literature on different aspects of problems facing asthmatic older adults was done using textbooks, articles, periodicals, internet, and magazines. 2. The asthma self management program was designed by the researchers based on results obtained from pre-program assessment tool; the content was revised and modified according to the related literature, it included The General Objective of the Self-management Educational Program Was to Improve the asthmatic older adults' knowledge, and selfmanagement behavior to control asthma. Contents of the Asthma Self Management Program Meaning of asthma and asthma attack. Meaning of self-management Classification of asthma. Types of asthma categories. The triggering factors for asthma attack. Signs and symptoms of asthma. Symptoms that refers to the worsen asthma. Asthma treatment. Types and doses of asthma medication. The nutritional requirements Older adult's self-management behaviors to avoid and manage asthma attack Measures to avoid or limit exposure to asthma triggers factors Measures to avoid excessive muscle efforts, Avoid certain kinds of food that lead to asthma attack, Importance of regular follow-up How to handle signs and symptoms of worsening asthma When and where to seek care Breathing exercise Deep breathing Coughing exercises 3. Implementation of the asthma self-management educational program was done in the outpatient clinics in the waiting area before the older adults have been examined by physician. The program was applied in five sessions, two sessions for knowledge and three sessions for selfmanagement behaviors; using the educational methods of discussions, role play, followed by demonstration and redemonstration. As well, audiovisual aids were used such as posters, a simplified and comprehensive booklet with illustrated pictures including information about asthma and its' management was written in a simple Arabic language to suit understanding level of studied subjects. 4. Evaluation of the self management program was done immediately after the implementation of the program by using the same pre-program format. 
Results Table (1): This table shows the socio-demographic characteristics of studied subjects, 73.6% of the studied subjects were females and 39.6% were aged 65 years or more with mean age 65.72±5.74, and 71.7% of them were illiterate. As regards marital status, 67.9 % of the studied subjects were married. Regarding to occupation, 67.9% of the studied subjects didn't work. Fig. (1): Illustrates that 62.3% studied subjects their source of information regarding asthma were doctors while, only 5.7% of them acquired their information from nurses. Table (2): Reveals that 73.6% of studied subjects had history of asthma duration more than 10 years and 81.1% of them had nasal allergy. As regards family history of asthma, 62.3% of the studied subjects had positive family history involving first degree relative. Table (3): Indicates that 64.2% of studied subjects had once attack per month while 39.6% of them asthma attack lasted more than 30 minutes, 49.1% of them asthma attack occurred at the evening and 41.5% of them attacked by asthma at the spring. Fig. (2): Illustrates that signs and symptoms felt by studied subjects during attack, 83.1% of older adults suffered from cough, while 37.7% of them suffered from chest pain during attack. According to the research hypothesis: The findings revealed a significant improving in studied subjects' knowledge and self-management behaviors regarding asthma after implementation of self-management educational program (table 4, 5, 6, and 7) Table (4): Shows that studied subjects' knowledge regarding asthma was improved post program implementation. The obvious improvement was observed regarding causes of asthma (34.0 % versus 58.5%) followed by signs and symptoms of asthma (32.1% versus 62.3) respectively post program. Over all, the results had highly statistically significant difference in relation to basic items of knowledge between pre and post program implementation (p<0.001). Table (5): Reveals that studied subjects' self-management behaviors to protect themselves from asthma occurrence were improved post program implementation. The obvious improvement was observed regarding to avoid environmental triggers factors (as dust) 84.9% compared by 56.6% pre program and 73.6% of them performed breathing exercises post program compared by 30.2% pre program. Also this table shows there was statistically significant difference between pre and post program implementation for basic items of self-management behaviors to protect themselves from asthma occurrence except the compliance with taking prescribed medications Table (6): Shows that the studied subjects' selfmanagement behaviors to control asthma during attack were improved post program. There was statistically significant difference in relation to if asthma persists the studied subjects take medications again (9.4 % versus 73.6%), sit in semi sitting position during asthma (58.5% versus 73.6%) and go to the doctor or hospital immediately if asthma persists (56.6 % versus 84.9%) between pre and post program implementation. However, there was no statistically significant difference in relation to take medication immediately, take warm fluids and rest in bed between pre and post program implementation. 
Table (7): Reveals that studied subjects' total knowledge scores were improved post the program implementation as 8.35±2.52 compared by 3.00±3.000 preprogram while the self-management behaviors improved as 10.16±1.94 compared by 6.73±2.23 preprogram, there was highly statistically significant relation between total knowledge and total self-management behaviors pre and post program implementation. Table 5. Frequency distribution of studied subjects regarding their self-management behaviors to protect themselves from asthma occurrence pre and post program. Discussion Asthma is the most common long term chronic disease, around the globe; it was found that 100 to 150 million people of all ages suffer from asthma. Asthma is an important cause of morbidity and mortality in the elderly nowadays. In addition, the burden of asthma is more significant in the elderly than in their younger counterparts, particularly with regard to mortality, hospitalization, medical costs or healthrelated quality of life. It also lowers productivity and reduces participation in family life [13], [14], [15]. The results of this study revealed that less than three quarters of the studied subjects were females, illiterate and married. Regarding studied subjects' occupation, more than two thirds of studied subjects didn't work, while less than one third was working. This may be due to the female stayed at their home most of time which exposed her to indoor pollution during cooking and other home activities rather than men. These findings consisted with Tageldin et al. [5] who found that 60% of the studied cases were females and mainly they didn't work and it is accepted that asthma is more common in women than in men. Also the same results consisted with Taha and Ali [16] who found that, two thirds of the patients were females, more than three quarters were married and illiterate, and only less than one third of the patients were employed. Regarding source of information, less than two thirds of older adults gained their information regarding asthma from doctors while few of them gained their information from nurses. This may reflect a deficiency in nursing educational role especially in the outpatient clinics. This finding agreed with Ozturk et al. [10] who reported that, the main source of asthma knowledge was from physicians. Moreover, Qureshi, [17] said that, the majority of the studied sample learn about asthma from physician. The results of the current study revealed that, less than three quarters of the studied subjects had duration of asthma Improving Asthmatic Older Adults' Behaviors more than ten years and more than three quarters of them suffered from nasal allergy. These findings consisted with Ozturk et al. [10] who found that, the mean duration of asthma was 13.7 ±15.4 years. These findings were congruent with Salem [18] who reported that the majority of the studied subjects with asthma had nose sensitivity. As regards family history approximately more than three fifths of the studied subjects had family history of asthma in first degree relative. This finding agreed with Mendoza [19] who found that 87% of the studied sample had a significant positive association between asthma and family history. Also the same finding supported with Shoeib [20] who found that the majority of the studied sample had positive family history of asthma in first class relative. In this respect Liu et al. [21] emphasized that a family history of asthma is an important risk factor for asthma. 
In the present study, result showed that less than two thirds of the studied subjects had once attack per month and less than one half was attacked by asthma in the evening. As regards signs and symptoms felt by asthmatic older adults during attack, more than three quarters of older adults suffered from cough, while more than one third of them suffered from chest pain during attack. These findings agreed with Mansour et al. [1] who stated that, asthma symptoms can differ from person to person, but most people experience a worsening of symptoms at night and the most common symptoms of asthma were; wheeze, cough, dyspnea, and chest tightness. In addition Refaat and Aref [22] reported that asthma clinical features include recurrent episodes of dyspnea wheezing and cough which occur more nocturnally and accompanied by chest pain On investigating knowledge of the studied subjects about asthma, the older adults' knowledge about asthma improved after implementation of self-management educational program compared by pre program. There was highly statistically significant differences in relation to basic items of knowledge between pre and post program implementation (p<0.001). This may be point out to a deficiency in the educational healthcare professional role. On the other hand the needs of older adults to gain knowledge on how to deal with asthma in simplified way forced them to acquiring the knowledge about asthma from the self-management program in spite of their low educational level. In this respect Taha and Ali [16] found that after implementation of the guidelines, patients' knowledge demonstrated significant improvement, which was confirmed through multivariate analysis. Also, Williams [23] concluded that asthma educational program is needed to increase patient's information. According the current study, the self-management behaviors of the studied subjects to protect themselves from asthma occurrence improved post program. The obvious improvement was observed regarding to avoid air draft, environmental triggering factors and excessive muscle efforts. This may be attributing to positive effect of selfmanagement programs which result in changing in studied sample behaviors. These findings were congruent with Pinnock, [24] who emphasized that, people living with asthma have to accommodate their long-term condition within the context of their daily life and they may need to avoid their triggers where possible. On the same line Yoo, et al. [25] mentioned that, it is necessary to avoid or reduce stimuli that may cause the acute aggravation of asthma. In this respect Mohammed, [26] found that, most of asthmatic patients had totally limited strenuous activities Moreover, the asthmatic older adults' self-management behaviors were improved post program regarding avoiding certain kinds of food which lead to asthma attack. This may be due to that older adults knew the different types of food which may aggravate asthma attack from self-management program so they avoid it. Concerning breathing exercises, more than two thirds of studied subjects had unsatisfactory behaviors pre program. This may reflect the unavailability of instruction guided resources about importance of breathing exercises for asthmatic older adults. This result corroborated with Mohammed, [26] who found that more than two thirds of studied subjects had unsatisfactory practice regarding breathing exercises. 
However, post self-management program implementation this numbers decreased to about one quarter of studied subjects who had unsatisfactory behaviors. Additionally, the results also confirmed that, the asthmatic older adults' self-management behaviors regarding follow up regularly improved post program. This result supported by Taha and Ali [16] who documented that patients' compliance to follow-up improved post program. There was highly statistically significant differences in relation to basic items of asthmatic older adults' selfmanagement behaviors to protect themselves from asthma occurrence between pre and post program (p<0.001)). In this respect Temple [27] highlighted that asthma selfmanagement education is essential to provide patients with the skills necessary to manage asthma and improve their health. Also this result supported with Taha and Ali [16] who proved that, patients' practices and compliance related to asthma have also improved after implementation of the study guidelines. After implementation of educational self-management program, older adults' self-management behaviors to control asthma were improved post program. The obvious improvement was observed regarding to sit in semi sitting position, if asthma persist take medication again and go to the doctor or hospital immediately if asthma persist. In this respect Burns [28] clarified that patients with asthma exacerbation should be given initial treatment and considered for hospital admission if unresponsive to initial treatment or if they have any features of acute, severe or life-threatening asthma. Moreover Riffat et al. [29] highlighted that, achieving asthma control is central in optimizing patient clinical outcome. In addition these findings were on the same line with Pinnock [24]who mentioned that, patients have to recognize when their asthma is deteriorating, and make decisions about when to adjust their medication, when to use emergency treatment and when to seek professional help. In this respect Janson et al. [30] concluded that the educational selfmanagement intervention significantly improved adherence with medical treatment and perceived control of asthma. There was highly statistically significant relation between studied subjects' total knowledge and total self-management behaviors pre and post program. This may be due to studied subjects who had lack of knowledge about asthma unable to performed essential skills of self-management behaviors and vice versa. In this respect Abd El-Rahman [31] proved that there were highly statistically significant differences between elderly people total knowledge scores and total practices score. Conclusions According to the results and research hypothesis, this study concluded that, the studied subjects' knowledge regarding asthma was improved after the program and there was highly statistically significant difference between pre and post program implementation, also their selfmanagement behaviors regarding asthma were improved after the program implementation. Moreover, there was statistically significant difference between pre and post program implementation. Findings from this study suggest that asthma self-management educational program was effective in improving asthmatic older adults' behaviors. Recommendations Based on the findings of the current study recommendations are suggested as follows Continues of self-management program for asthmatic older adults focusing on management behaviors especially asthmatic triggers prevention, and asthma attack management. 
Periodic refresher courses and training for nurses in chest outpatient clinics about chest diseases, especially asthma, are needed so that nurses can take an active role in educating asthmatic older adults on how to manage and control asthma. A simplified and comprehensive booklet with illustrated pictures, including information about asthma and its management, should be disseminated to asthmatic older adults to improve asthma management and control behaviors.
Lamb Waves Propagation Characteristics in Functionally Graded Sandwich Plates Functionally graded materials (FGM) have received extensive attention in recent years due to their excellent mechanical properties. In this research, the theoretical process of calculating the propagation characteristics of Lamb waves in FGM sandwich plates is deduced by combining the FGM volume fraction curve and Legendre polynomial series expansion method. In this proposed method, the FGM plate does not have to be sliced into multiple layers. Numerical results are given in detail, and the Lamb wave dispersion curves are extracted. For comparison, the Lamb wave dispersion curve of the sliced layer model for the FGM sandwich plate is obtained by the global matrix method. Meanwhile, the FGM sandwich plate was subjected to finite element simulation, also based on the layered-plate model. The acoustic characteristics detection experiment was performed by simulation through a defocusing measurement. Thus, the Lamb wave dispersion curves were obtained by V(f, z) analysis. Finally, the influence of the change in the gradient function on the Lamb wave dispersion curves will be discussed. Introduction Functionally graded materials (FGM) are based on computer-aided material design, using advanced material compounding technology to make the elements (composition, structure, etc.) of the constituent materials continuously change from one side to the other along the thickness direction. Thus, the properties and functions of the material also vary in gradient. Functional gradient materials of metal-ceramics were proposed and prepared in 1984 [1]. Since the volume content of the FGM components is continuously changed in the spatial position, and there is no sudden change in physical properties, the interlayer stress problem can be avoided and the stress concentration phenomenon can be reduced. At the same time, FGM is a good devisable material, in which one can change the spatial distribution of composition and content of the material by a target function, so as to achieve the purpose of optimizing the internal stress distribution of the structure [2]. The FGM sandwich plate consists of three layers: the top layer, the middle layer and the bottom layer. Generally, FGM sandwich plates are divided into two categories. One is FGM as the top and bottom layers of the sandwich plate, and the homogeneous isotropic materials as the intermediate layer. The other type is FGM as the middle layer of the sandwich plate, and homogeneous isotropic materials as the top and bottom layers. FGM sandwich plates have excellent overall performance, and have been used in optical, biomedical, electromagnetic and mechanical engineering, etc. [3]. The elastic waves in the FGM sandwich plate contain ultrasound guided waves and body waves. Ultrasonic guided waves cover Lamb waves, surface waves, Love waves, etc. Ultrasonic guided waves provide unique capabilities for the structural health monitoring of plate-like structures [4]. However, the guided waves have multi-mode and dispersion characteristics during propagation, and the dispersion appears to be a unique physical property. It mainly indicates that the propagation characteristics of the guided waves are affected by frequency. That is to say, the propagation velocity of a guided wave will change by frequency, which is called dispersion [5]. In addition, most guided wave modes have strong dispersion characteristics. 
Therefore, studying the relationship between the dispersion curve of FGM sandwich plates and material property parameters is an important part of theoretical research. Zhu et al. [6] used the matrix recursion method to establish the characteristic equations of Lamb waves of multi-layer free plates, and analyzed the dispersion characteristics of double-layer plates and sandwich plates. Wu et al. [7] studied the propagation dispersion characteristics of Lamb waves from single-layer plates to multi-layer FGM plates, and obtained the relationship between the continuous change in material properties and the Lamb wave velocity and displacement. Bruck [8] analyzed the propagation of stress waves in FGM by establishing a one-dimensional FGM model, and transitioned the FGM layered model to a continuously changing gradient model. Chen et al. [9] used a layered plate model to analyze the dispersion characteristics of FGM plates under large frequencies and thick product conditions. In all the above research, the FGMs were divided into many homogeneous or inhomogeneous layers, in order to solve the wave propagation problem. However, the layer number of FGMs plays a vital role in the numerical accuracy of the calculations. In addition, Lefebvre et al. [10] proposed the Legendre orthogonal polynomial series expansion (LOPSE) method to study the propagation properties of waves in layered-plate structures. Yu et al. [11] further introduced the Legendre series expansion method into the dispersion curve calculation of an anisotropic multilayer piezoelectric material plate with a greater difference in mechanical parameters. Compared with the rotation matrix method, a good calculation result is obtained. Dong et al. [12] studied the SH surface wave in the piezoelectric gradient half space, considering the horizontal shear direction displacement by using Laguerre orthogonal polynomials. Salah et al. [13] proposed a layered model to analyze the Love wave over a half space of an elastic substrate covered by a functionally gradient piezoelectric material plate. As mentioned above, the studies treated the FGM structures as a continuously gradient medium, and they effectively calculated the propagation characteristics of acoustic waves in FGMs without separating them into multilayer plates. However, there are few reports on the numerical simulation of Lamb wave propagation in FGM sandwich structures. Likewise, the finite element method is a numerical method with both a theoretical basis and practical significance. It was originally used by Zienkiewicz [14] to simulate wave propagation and scattering, but then Finnveden [15] successively used the spectral finite element method to study the periodic waveguide structure and the guided wave in the viscoelastic damped waveguide structure. Cheng et al. [16] studied the propagation of surface acoustic waves excited by lasers in functionally graded materials, and simulated the gradients of various mechanical and thermal parameters in functionally graded materials. Kim and Paulino [17] proposed an isoparametric gradient element model, and applied the shape function of the model to obtain the material properties of the attribute of the element node to the inner difference. Zhang and Xiao [18] applied this method to prove that the finite element model based on isoparametric gradient elements can better reflect the gradient variation in material properties. Wang and Gross [19] proposed a layered model of FGM. 
The material parameters of each layer change according to a continuous function and are continuous at the interface. Such a layered model achieved good results in the crack analysis of FGM structures. Nevertheless, little research has paid attention to the complex multi-mode dispersion characteristics of functionally graded materials, which can provide more abundant information for non-destructive testing and the evaluation of the characteristics of FGM plates. In this research, we use the Legendre polynomial series expansion method to study the propagation of Lamb waves in functionally graded material sandwich plates, and discuss its convergence problem. The influence of gradient layer parameter changes on the Lamb wave dispersion curves is also given. In addition, a finite element model of the FGM sandwich plate was established in PZFlex (Division of Applied Science, Mountain View, United States), and the experimental process of the defocusing measurement with a line-focused ultrasonic transducer based on acoustic microscopy, also known as the V(f, z) measurement, was simulated.
Modeling For a functionally graded sandwich plate, as shown in Figure 1, the propagation direction of the Lamb wave is along the x1 axis. The thickness of the sandwich plate is h1 + h2 + h3, in which h2 is the thickness of the FGM layer, and h1 and h3 are the thicknesses of steel and copper, respectively. The material parameters of the FGM layer vary continuously in the thickness direction. Here, we are referring to the density and the elastic constants, which are functions of x3. Assuming that the displacement components of the Lamb wave are the following: then the equations of motion will be given as follows: The geometric relationship under the assumption of small deformation is as follows: Free harmonics of the particle displacement can be written as follows: where σij and εij represent stress and strain, respectively, U(x3) and W(x3) are the amplitudes of particle vibration in the x1 and x3 directions, k is the wave number, and ω is the angular frequency. Considering the boundary problem of isotropic plates, the rectangular window function can be introduced as follows: The elastic constants and density of the material are expressed as a function of position, as follows: where N is the total number of layers, and here, N = 3.
Therefore, the elastic constants and density in the sandwich plate can be expressed as follows: The middle layer of the sandwich plate is the FGM layer; the volume fraction of copper in this layer is represented as VCu, which can be written as a power function, as follows: where n is the exponent of the power function. The propagation characteristics of Lamb waves in the FGM layer under different gradient distributions can be obtained by changing the power exponent n. Then, in the FGM layer, the relationships between the elastic constants/density and the volume fraction are as follows: Substituting Equation (8) into Equation (9) yields the functions of the elastic constants and density in the FGM layer with respect to x3: Thus, the constitutive relationship is given as follows:
Legendre Orthogonal Polynomial Expansion Substituting Equations (3), (4), (7), (10) and (11) into Equation (2) yields the wave control equations in the x1-x3 plane. The wave control equation in the x1 direction is as follows: The wave control equation in the x3 direction is as follows: The amplitudes U(x3) and W(x3) of the displacements are expanded into the form of a summation of Legendre orthogonal polynomials, which can be written as follows: where p_i^m (i = 1, 3; m = 1, 2, . . . , M) are the expansion coefficients of Qm(x3). Theoretically, m runs from zero to infinity, but, in fact, m takes a finite value M. Higher-order terms can be considered as infinitesimal quantities, and M is the cutoff order of the Legendre orthogonal polynomial series. It should be noted that Qm(x3) is an orthogonally normalized polynomial group, as follows: Substituting the displacement amplitude expansion of Equation (14) into the wave control Equations (12) and (13) gives the final form of the Legendre polynomial expansion equations. Multiplying both sides of the expanded equations by Qj(x3), integrating x3 from zero to h1 + h2 + h3, and using the orthogonal properties of the Legendre polynomials, a matrix form of the equations can be given, as follows: where the matrix elements A_ij^(j,m) and M_m^(j) can be obtained from the wave control equations after expansion, and are shown in Appendix A. According to the matrix Equation (16), the relationship between the wave number k and the angular frequency ω can be obtained by solving the eigenvalues. That is how the dispersion curves of the Lamb wave in the FGM sandwich plate are extracted.
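A minimal numerical sketch of this last step is given below, under stated assumptions: for each trial wave number k, the assembled matrices of Equation (16) define a generalized eigenvalue problem whose eigenvalues give ω², and the phase velocities follow as c = ω/k. The matrices used in the usage example are random placeholders standing in for the A and M matrices of Appendix A, which are not reproduced in this text.

```python
import numpy as np
from scipy.linalg import eig

def phase_velocities(A, M, k):
    """Solve the generalized eigenvalue problem A p = omega^2 M p for one
    wave number k and return the real, positive phase velocities c = omega/k.
    A and M would come from the Legendre-expanded wave equations."""
    w2, _ = eig(A, M)                       # eigenvalues are omega^2
    w2 = np.real(w2[np.isfinite(w2)])
    w = np.sqrt(w2[w2 > 0.0])
    return np.sort(w / k)

# Hypothetical usage: placeholder matrices stand in for the assembled A and M
# of Equation (16); the real matrices depend on the material profile and the
# cutoff order M of the Legendre expansion.
M_cut = 8
rng = np.random.default_rng(0)
A = rng.standard_normal((2 * M_cut, 2 * M_cut))
A = A @ A.T                                  # symmetric positive placeholder
Mmat = np.eye(2 * M_cut)
print(phase_velocities(A, Mmat, k=2.0e3)[:3])
```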
Convergence Analysis of Cutoff Order M The material selected was a copper-FGM-steel sandwich plate, and the mechanical performance parameters of copper and steel are shown in Table 1. In total, the thickness of the plate is 0.4 mm, in which the thicknesses of copper and steel are both 0.1 mm, and the thickness of the FGM layer is 0.2 mm. According to Equation (8), the volume fraction of copper in the FGM layer along the thickness direction takes the indices n = 0.2, 0.5, 1, 2, 10, respectively, as illustrated in Figure 2. From the LOPSE method, it can be concluded that, in the process of solving the Lamb wave dispersion curves, once the number of polynomials exceeds a certain threshold, the computed phase velocity converges toward the exact eigenvalue. Calculations of the Lamb wave dispersion curves in the frequency range of 0-10 MHz under seven cutoff orders (M = 3, 4, 5, 6, 7, 8, 9) are shown in Figure 3a-g, where the volume fraction index is n = 0.2. It can be observed that as the cutoff order M increases, the Lamb wave dispersion curve shows a convergence trend, which is consistent with the characteristics of the LOPSE method. This verifies the feasibility of the theoretical method. As can be observed from Figure 3h, the dispersion curves have essentially converged once the cutoff order reaches M = 8.
Taking the volume fraction curve at n = 0.2 as an example, according to Equation (10), the elastic constants CIJ and density ρ of the FGM layer can be sliced into 10 equal minor sub-layers. Meanwhile, when N = 1, the corresponding material layer is Cu; and when N = 10, the corresponding material layer is steel, and the material parameters can be obtained from Table 1. The thickness of each layer is 0.02 mm, and the material parameters vary in the same step. The corresponding parameters of all layers can be obtained from Equation (10), and are shown in Table 2. The parameters from Table 2 were then used with the global matrix method to calculate the Lamb wave dispersion curves of the sliced-layer model, for comparison with the LOPSE results.
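The slicing described above can be illustrated with the short sketch below, which evaluates the copper volume fraction at the mid-plane of each of the 10 sub-layers of the 0.2 mm FGM layer and converts it to a layer property. The explicit power-law form and the linear rule of mixtures used here are assumptions of this sketch (the paper's Equations (8) and (9) are not reproduced in this text), and the end-member values are placeholders for the data of Table 1.

```python
import numpy as np

def vcu(x3, h_fgm=0.2e-3, n=0.2):
    """Copper volume fraction across the FGM layer, assuming a power-law
    profile in the spirit of Equation (8): V_Cu = 1 at the copper side
    (x3 = 0) and V_Cu = 0 at the steel side (x3 = h_fgm)."""
    return (1.0 - x3 / h_fgm) ** n

def layer_properties(prop_cu, prop_steel, n=0.2, n_layers=10, h_fgm=0.2e-3):
    """Slice the FGM layer into equal sub-layers and assign each one a
    property via a linear rule of mixtures (an assumption of this sketch)."""
    mid = (np.arange(n_layers) + 0.5) * h_fgm / n_layers   # mid-plane of each layer
    v = vcu(mid, h_fgm, n)
    return v * prop_cu + (1.0 - v) * prop_steel

# Hypothetical end-member densities (kg/m^3); real values come from Table 1.
print(layer_properties(prop_cu=8960.0, prop_steel=7850.0))
```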
Effect of Volume Fraction n on Dispersion Curves Under different power exponents, the gradient distribution of material parameters in the FGM layer is different, which has a certain influence on the Lamb wave dispersion curves. The dispersion curves of Lamb waves in the sandwich plate under different gradient distributions are calculated, as shown in Figure 5. The cutoff order of the Legendre orthogonal polynomial is also M = 8. Figure 5a-e show Lamb wave dispersion curves in five different sandwich plates, with n = 0.5, 1, 5, 10, 20, respectively. It can be observed that as the power exponent increases, the phase velocity of the S0 mode at the low frequency range gradually increases. Meanwhile, the same phenomenon shows up in the higher-order modes. According to Figure 5, when the power index is gradually increased to infinity, the copper content in the FGM layer almost reduces to zero, and the sandwich plates can be regarded as double-layered plates with a top layer of 0.1 mm copper and a bottom layer of 0.3 mm steel. The Lamb wave dispersion curve in the copper-steel double-layered plate calculated by the Legendre orthogonal polynomial method is shown in Figure 5f.
Displacement and Stress Distribution The amplitude distribution of displacements and stress components along the thickness direction is the wave structure. According to the calculation result of the dispersion curve at n = 1 in Figure 5b, the eigenvector and its corresponding eigenvalue are calculated. Then, the displacement distribution of the different Lamb wave modes at different frequencies can be obtained. An arbitrary frequency f = 2 MHz is selected, and the Lamb wave velocities corresponding to the A0 (anti-symmetric zero-order mode) and S0 (symmetrical zero-order mode) modes at this frequency are 1933 m/s and 4469 m/s, respectively. The matrix eigenvectors p_1^m and p_3^m are inversely obtained by using the angular frequency ω corresponding to the two wave velocities as the eigenvalues. Substituting p_1^m and p_3^m into Equation (16), the displacement and stress distribution in the FGM sandwich plate can be obtained, as shown in Figures 6 and 7. So, the displacement and stress distribution curves corresponding to arbitrary modes of the Lamb wave at any frequency can be obtained. It can be observed from Figure 6 that, with the gradual change in the material composition in the FGM sandwich plate along the thickness direction, the LOPSE method can ensure that the displacement variation in the plate is continuous. Additionally, due to the gradual change in the material composition, its displacement distribution no longer has a strict "symmetric" or "asymmetric" distribution with respect to the center position of the plate.
The advantage of the LOPSE method is that the sandwich plate can be calculated globally without delamination, thus solving the problem of stress discontinuity at the boundary. In the calculation, the stress distribution of the Lamb wave can be obtained by simply substituting the obtained displacement solution into the constitutive equation and the geometric equation. As can be observed from Figure 7, the stress components σ31 and σ33 are continuously distributed in the FGM sandwich plate, and the stress components at the top and bottom boundaries are zero.
Simulation Model Based on ultrasonic microscope technology, an acoustic measurement simulation model with an FGM sandwich plate was established, and the corresponding Lamb wave dispersion curve was extracted. For the functionally graded material sandwich panel, the thickness of the sandwich plate is h1 + h2 + h3, in which h2 is the thickness of the FGM layer, and h1 and h3 are the thicknesses of steel and copper, respectively.
In order to simulate the structural characteristics of nonhomogeneous materials (the FGM layer), the corresponding material properties should vary between those of homogeneous steel and copper. Meanwhile, it is assumed that the material properties of each element layer are constant, and the material properties mesh uniformly along the thickness direction [22,23]. A number of subdivisions can approximate the continuous property variation; the corresponding propagation characteristics of acoustic waves are then close to those of the graded type [24]. On the other hand, when using the commercial finite element package PZFlex to simulate the distribution of the sound field in materials, it is very important to assign mechanical property parameters to the corresponding layers of the FGM sandwich plate. In this problem, uniform elements with a thickness of 0.02 mm are sufficient for the numerical simulation of the sound field distribution in a functionally graded material layer with a thickness of 0.2 mm. In this section, a two-dimensional finite element model of a line-focusing ultrasound transducer was built in PZFlex. The dimensional parameters and material properties of the finite element model of the line-focusing ultrasonic transducer were taken from the ultrasonic transducer used in the experiment. In the model, a piezoelectric polymer polyvinylidene fluoride (PVDF) film was selected as the excitation/receiving element, and the polarization direction is directed toward the center of the circle. The upper surface of the film is the positive electrode and the lower surface is the negative electrode. Back10 (tungsten-loaded epoxy, 10% VF, 5.8 MRayl) was used as the backing. Water was selected as the coupling medium for detection, and a copper-FGM-steel sandwich plate was used as the specimen. Table 3 shows the material parameters of the model. The top layer of the specimen is copper, the middle is the layered FGM, and the bottom is steel. The transverse/longitudinal wave velocities and densities of copper and steel are known. The material parameters of the FGM layered model are obtained from the volume fraction curve (n = 0.2). The parameters of each layer are shown in Table 3. The thickness, focus radius, and full opening angle of the PVDF film were set to 40 µm, 20 mm, and 80°, respectively. This finite element model can then be simplified to a two-dimensional model, as shown in Figure 8. The signal excited by the line-focusing ultrasonic transducer is a transient wide-band signal. Therefore, the excitation signal in the simulation is a sine-impulse broadband signal with a central frequency of 7 MHz. The simulation started at the focusing plane. Generally, at around 28 µs, the PVDF film receives the reflected echo from the bottom surface of the specimen for the first time. Thus, in this simulation, the propagating time of the acoustic waves was set to 35 µs. The finite element model is discretized by a rectangular grid, and a unit wavelength is divided by 20 grid nodes in water. It should be noted that the bottom surface of the model is set as a free boundary. In order to prevent reflection, the other boundaries of the model are set as absorbing boundaries.
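A quick check of the meshing rule stated above (one wavelength in water divided into 20 grid nodes at the 7 MHz centre frequency) gives the grid spacing implied by that rule; the water sound speed used below is a nominal value assumed for this sketch.

```python
# Grid-resolution check for the PZFlex model: 20 nodes per wavelength
# in water at the 7 MHz centre frequency of the excitation signal.
c_water = 1480.0          # m/s, nominal sound speed in water (assumed)
f_center = 7.0e6          # Hz
wavelength = c_water / f_center       # roughly 0.21 mm
element_size = wavelength / 20.0      # roughly 0.011 mm per element
print(f"wavelength = {wavelength*1e3:.3f} mm, element = {element_size*1e3:.4f} mm")
```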
Simulation Results By changing the relative position of the ultrasonic transducer to achieve equal-interval defocusing, a defocusing measurement based on an ultrasonic microscopy technique was simulated, which is called V(f, z) analysis [25]. The defocus distance was 15 mm and the step was 0.025 mm. The finite element simulation was performed at each defocus position, and, in total, 600 sets of simulation data were obtained. The Lamb wave dispersion curve can be extracted by performing a 2D Fourier transform over the time and space domains, as shown in Figure 9.
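The extraction step just described can be sketched as follows: the 600 defocused waveforms are stacked into a V(z, t) matrix and a 2D FFT maps them into the frequency-wavenumber domain, where the ridges of the amplitude spectrum trace out the dispersion curves. The array sizes and sampling values in the usage example are placeholders, not the settings of the actual simulation.

```python
import numpy as np

def dispersion_map(v_zt, dt, dz):
    """2D FFT of defocus-position x time data V(z, t); returns the amplitude
    spectrum together with its frequency and wavenumber axes. Ridges of the
    spectrum correspond to the Lamb wave dispersion curves."""
    n_z, n_t = v_zt.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(v_zt)))
    freq = np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))            # Hz
    k = np.fft.fftshift(np.fft.fftfreq(n_z, d=dz)) * 2 * np.pi   # rad/m
    return spec, freq, k

# Hypothetical usage: 600 defocus steps of 0.025 mm, waveforms of 4096 samples.
v_zt = np.zeros((600, 4096))
spec, freq, k = dispersion_map(v_zt, dt=5e-9, dz=0.025e-3)
```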
The Lamb wave dispersion curves from the simulation were superimposed with the dispersion curves from the LOPSE method, as shown in Figure 10. It can be observed from the figure that the theoretical results solved by the LOPSE method using the volume fraction index are consistent with the finite element simulation results using the layered model. Therefore, this result lays the theoretical foundation for FGM characterization by acoustic microscopy.
Conclusions In this research, the problem of Lamb wave propagation in the FGM sandwich plate without discretizing the gradient structure into a homogeneous multilayered model is solved numerically. (1) The LOPSE method is employed for solving the Lamb wave dispersion curves and their displacement and stress distributions, even when the material parameters vary continuously along the thickness direction. The convergence of the results of the polynomial method is analyzed, and the converged solution is obtained. Moreover, the converged solution is basically consistent with the results calculated using the global matrix method. (2) The middle layer of the sandwich plate is FGM, in which the material parameters change gradually along the thickness direction. By solving the Lamb wave dispersion curves of the sandwich plate under different gradient distributions, it is obvious that the volume fraction of the top-layer material in the FGM layer decreases and the volume fraction of the underlying-layer material increases as the power exponent increases, and the dispersion relation of the Lamb wave then gradually approaches that of a double-layer plate.
(3) The finite element model of the FGM sandwich plate was established by slicing the FGM into layers, and a defocus measurement simulation with a line-focusing ultrasonic transducer was carried out based on an acoustic microscopy technique. The extracted Lamb wave dispersion curves are basically consistent with the theoretical calculation results, which further verifies the LOPSE method. Thus, this research provides an approach for FGM characterization based on acoustic microscopy.
Effect of Hibiscus sabdariffa calyx extract on reproductive hormones in normal rats Medicinal plants contain physiologically active principles that over the years have been exploited in traditional medicine for the treatment of various ailments. The present study was undertaken to investigate the effects of ethanolic extract of Hibiscus sabdariffa calyx on rat reproductive hormones. The effects on the basal levels of estradiol, testosterone, prolactin and follicle stimulating hormone were assessed in experimental animals. H. sabdariffa calyx extract at a dose of 250 mg/kg produced minor effects on rat reproductive hormones, namely testosterone and estradiol, while no change was observed in either prolactin or follicle stimulating hormone levels. Moreover, no histological changes were detected in the testes or ovaries of the experimental animals after 28 days of administration. It can be concluded that H. sabdariffa calyx extract at a dose of 250 mg/kg caused mild effects on rat reproductive hormones.
INTRODUCTION Endocrine disrupting compounds (EDCs) are natural or synthetic compounds that have the ability within the body to alter endocrine functions, often through mimicking or blocking endogenous hormones (James et al., 2013). These actions on the endocrine system have resulted in developmental deficits in various invertebrate and aquatic species (Crain et al., 2007; Elango et al., 2006) and mammals (Christopher et al., 2012). Exposures in adulthood have consequences, but fetal and early life exposures appear to have more severe effects that persist through life (Rubin and Soto, 2009). Among these classes of chemicals are phytoestrogens, which show effects suggestive of estrogenicity, such as binding to the estrogen receptors, induction of specific estrogen-responsive gene products, stimulation of estrogen receptor(s) and positive breast cancer cell growth (James et al., 2013). Through these interactions, by acting as agonists or antagonists, EDCs are able to alter the activity of response elements of genes, block natural hormones from binding to their receptors, or in some cases increase the perceived amount of endogenous hormone in the body by acting as a hormone mimic at its receptor (Ze-hua et al., 2010). Hibiscus sabdariffa Linn (Roselle) is an annual shrub commonly used to make jellies, jams and beverages. The brilliant red colour of its calyx makes it a valuable food product, apart from its multitude of traditional medicinal uses. Infusions of the calyces are considered diuretic, choleretic, febrifugal and hypotensive, decreasing the viscosity of the blood and stimulating intestinal peristalsis (Salleh et al., 2002). Roselle calyx extract is a good source of antioxidants from its anthocyanins and is associated with antitumor and inhibitory effects on the growth of several cancer cells (Ajiboye et al., 2011). Extracts of H. sabdariffa calyces have been reported to be rich in phytoestrogens (Adigun et al., 2006; Orisakwe et al., 2004; Brian et al., 2009; Omotuyi et al., 2011), and some reports indicated that H. sabdariffa calyces have estrogenic effects, although the exact estrogen-like ingredient has not been determined (Ali et al., 1989). This study was undertaken to determine to what extent H. sabdariffa calyx extract alters the basal levels of selected reproductive hormones (estradiol, testosterone, prolactin and follicle stimulating hormone) as well as the histological features of the testes and ovaries of rats.
Plants The dried calyces of H.
sabdariffa were purchased from the local market in Wad-Medani, Sudan. The plant material was identified by the Department of Pharmacognosy, Faculty of Pharmacy, University of Gezira, Sudan.
Extraction of plant material One hundred grams of coarsely powdered calyces of H. sabdariffa were extracted by maceration using ethanol (70%) in a conical flask for 72 h, kept away from light throughout the extraction period, then filtered and evaporated with a rotary evaporator at 60°C. The resulting solution was freeze dried and placed in a refrigerator until use.
Experimental animals The effect of the ethanolic extract of H. sabdariffa calyx on rat reproductive hormones was evaluated based on the method described by Omotuyi et al. (2011). A total of 20 rats (10 each of males and females) were housed in a clean animal house and subjected to an intensive nutritional program. Rats were acclimatized for a period of 14 days under standard environmental conditions. The ethical committees of the University of Gezira and the Ministry of Health, Gezira State, approved the experimental protocol.
Experimental design Albino Wistar rats were divided into four groups of five animals each. Water control groups were maintained for both males and females, and the other two groups (males or females) received 250 mg/kg of the plant extract via gastric tube daily for 28 days.
Collection of blood samples Blood samples were collected from the conjunctival veins using capillary tubes at 7-day intervals for 28 days.
Hormonal assay The hormones were estimated using the standard protocols of enzyme-linked immunosorbent assay (ELISA) kits (Roche, Switzerland) for determination of estradiol, testosterone, prolactin and follicle stimulating hormone (FSH) levels.
Histopathological examination Twenty-eight days after oral administration of the extract, all experimental animals were anaesthetized using chloroform vapour and dissected. The ovaries and testes were collected and immediately fixed in Bouin's fluid for 6 h and transferred to 70% alcohol for histological processing according to Drunny and Wallington (1990). Briefly, following fixation of the right-side testes and ovaries from both control and test animals, tissue sections were processed by dehydration in 95% and absolute alcohol, cleared in xylene and embedded in pure clean molten paraffin wax, from which blocks of tissue were made for sectioning. Ribbon slices of about 5.0 μm in thickness were made with the aid of a microtome, and the sections were picked up on slides, which were dried in an oven. The slices were stained with haematoxylin and eosin, mounted using DPX and examined under a light microscope (magnification 40× for testes and 10× for ovaries) for histopathological and morphological changes.
Data analysis All the obtained data were expressed as means ± standard deviation and analyzed using analysis of variance (ANOVA). Comparisons with the control groups were made using one-way ANOVA. Differences were considered significant if the P-value was < 0.05.
Effect of ethanolic extract of H. sabdariffa calyx on estradiol levels in female rats The study revealed that the ethanolic extract of H. sabdariffa calyx at a dose of 250 mg/kg produced a mild increase (P-value < 0.05) in estradiol levels in female rats in a time-dependent manner (Table 1). On day 28, estradiol reached more than twice the value observed on day 0, relative to the water control group.
Effect of ethanolic extract of H. sabdariffa calyx on testosterone levels in male rats Following intragastric administration of 250 mg/kg of ethanolic H.
sabdariffa calyx extract for 28 days, serum levels of testosterone were significantly reduced (P-value < 0.05) in male rats throughout the experimental period compared with those of the water control group (Table 2).
Effect of ethanolic extract of H. sabdariffa calyx on prolactin and FSH levels in female rats The ethanolic extract of H. sabdariffa calyx at a dose of 250 mg/kg did not cause changes in the serum levels of prolactin or FSH throughout the experimental period in female rats administered the plant extract for 28 days.
Histological effects of H. sabdariffa calyx extract on rat testes and ovaries The extract did not cause histological changes in either the testes or the ovaries of the experimental animals when the plant extract was administered for 28 days. The reduction of the serum level of testosterone in male rats produced by the ethanolic extract of H. sabdariffa calyx may be explained by the estrogenic activity of the plant, evidence for which was raised by Orisakwe et al. (2004). Furthermore, other studies have reported a statistically significant decrease in testosterone levels in laboratory animals treated with phytoestrogens (Sharpe et al., 2002; Cline et al., 2004). The precise role that oestrogens play in male reproductive development is unclear, but generally, oestrogens tend to have 'demasculinising' or antiandrogenic effects. In foetal and neonatal life, this probably results from suppression of testosterone production (Haavisto et al., 2001) or loss of androgen receptors (McKinnell et al., 2001). Oestrogens are synthesised from androgens via the action of a single enzyme (aromatase), and there is a close relationship between the actions of these two hormones. Moreover, testosterone may be converted to estrogen by aromatase (Benassayag et al., 2002). Although dietary phytoestrogens have been implicated in adverse effects upon fertility in various animals, there are few published reports of such effects in human populations consuming large amounts of these substances (Benassayag et al., 2002). In male rats, reduction of the testosterone level might impair spermatogenesis and cause male infertility (Orisakwe et al., 2004). It should be noted that, in the study of herbal extracts, one cannot attribute the observed biological effects to a particular constituent, because many other compounds are present in the plant extracts (Saied-Karblay et al., 2010). Factors such as species, age, gender, diet, dose, route of administration and metabolism strongly influence the ultimate biological response to phytoestrogen exposure (Shweta, 2009).
Conclusion H. sabdariffa calyx extract caused mild effects on rat reproductive hormones and, due to the pleiotropic effects of phytoestrogens in vivo, a broad panel of in vitro assays covering not only estrogenic action but also other regulating processes has to be used to assess the potential of plant-derived compounds to beneficially or adversely affect human health.
Table 1. Serum estradiol levels (pg/ml) in female rats (n = 5) after oral administration of ethanolic extract of Hibiscus sabdariffa calyx and water (SD ± SEM).
Table 2. Serum testosterone levels (ng/ml) in male rats (n = 5) after oral administration of ethanolic extract of Hibiscus sabdariffa calyx and water (SD ± SEM).
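As a minimal illustration of the statistical procedure described in the Data analysis section above (one-way ANOVA against the water control, with significance taken at P < 0.05), the sketch below uses hypothetical hormone values for a single time point; the numbers are illustrative only and are not taken from Tables 1 or 2.

```python
from scipy import stats

# Hypothetical serum estradiol values (pg/ml) for n = 5 rats per group;
# these numbers are illustrative and do not come from the study's tables.
control = [21.0, 19.5, 22.3, 20.1, 21.7]
extract = [28.4, 30.1, 27.5, 29.0, 31.2]

f_stat, p_value = stats.f_oneway(control, extract)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference from the water control is considered significant.")
```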
CircMiMi: a stand-alone software for constructing circular RNA-microRNA-mRNA interactions across species Background Circular RNAs (circRNAs) are a class of non-coding RNAs formed by pre-mRNA back-splicing, which are widely expressed in animal/plant cells and often play an important role in regulating microRNA (miRNA) activities. While numerous databases have collected a large amount of predicted circRNA candidates and provided the corresponding circRNA-regulated interactions, a stand-alone package for constructing circRNA-miRNA-mRNA interactions based on user-identified circRNAs across species is lacking. Results We present CircMiMi (circRNA-miRNA-mRNA interactions), a modular, Python-based software to identify circRNA-miRNA-mRNA interactions across 18 species (including 16 animals and 2 plants) with the given coordinates of circRNA junctions. The CircMiMi-constructed circRNA-miRNA-mRNA interactions are derived from circRNA-miRNA and miRNA-mRNA axes with the support of computational predictions and/or experimental data. CircMiMi also allows users to examine alignment ambiguity of back-splice junctions for checking circRNA reliability and to examine reverse complementary sequences residing in the sequences flanking the circularized exons for investigating circRNA formation. We further employ CircMiMi to identify circRNA-miRNA-mRNA interactions based on the circRNAs collected in NeuroCirc, a large-scale database of circRNAs in the human brain. We construct circRNA-miRNA-mRNA interactions comprising differentially expressed circRNAs and miRNAs in autism spectrum disorder (ASD) and analyze the relevance of the targets to ASD across species. We thus provide a rich set of ASD-associated circRNA-miRNA-mRNA axes and a useful starting point for investigation of regulatory mechanisms in ASD pathophysiology. Conclusions CircMiMi allows users to identify circRNA-mediated interactions in multiple species, shedding light on regulatory roles of circRNAs. The software package and web interface are freely available at https://github.com/TreesLab/CircMiMi and http://circmimi.genomics.sinica.edu.tw/, respectively. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-022-04692-0.
Background Circular RNAs (circRNAs) are a class of long non-coding RNAs produced by pre-mRNA back-splicing with a distinct single-strand, non-polyadenylated circular loop [1]. They were observed to be more stably expressed than their corresponding co-linear mRNA isoforms [2][3][4]. Genome-wide analyses of high-throughput RNA sequencing (RNA-seq) revealed that circRNAs were abundant in animals [3,5,6] and plants [7]. Some circRNAs are evolutionarily conserved in terms of both circle sequence and expression across mammals [5,8,9]. The best understood function of circRNAs is their role in regulating microRNA (miRNA) activities, acting as either miRNA sponges or scaffolds [10]. Accumulating evidence shows that circRNA-miRNA regulatory axes can be involved in cancer-related [11] and neurobiological [12,13] pathways, suggesting the potential implications of circRNA-miRNA-mRNA regulatory pathways in the pathophysiology of human diseases. Nowadays, numerous tools [14] and databases [15] have been developed for identification and analysis of circRNAs, providing a large amount of publicly accessible circRNA resources. However, there are great discrepancies among the circRNA candidates identified by different circRNA detectors, implying the uncertainty of detected circRNAs [16,17].
Indeed, a considerable number of circRNAs detected by many currently available tools were still derived from ambiguous alignments with an alternative co-linear explanation or multiple hits [18,19]. It is worthwhile to reexamine the alignment ambiguity of the circRNAs for further analyses. In terms of circularization, previous studies demonstrated that back-splicing can be promoted by reverse complementary sequences (RCSs) residing in the introns flanking the circularized exons [3,4,20] and affected by the competition of RCSs across flanking introns (RCS_across) or within individual flanking introns (RCS_within) [20]. Genome-wide analyses of circRNA-flanking introns further revealed that the number of RCS_across was generally larger than that of RCS_within [4,20], suggesting an association between RCSs and circularization. It is helpful to examine the existence of RCSs for further investigation of circRNA formation. While several circRNA databases or web-based tools [14,15,21] also provide predictions of circRNA-miRNA interactions, they are often hampered by one or more of the following limitations: (1) the provided circRNA-regulated axes are based only on the circRNA candidates identified/collected by the known circRNA databases; (2) the examined circRNAs focus on human or a limited set of species; (3) the number of query circRNAs is limited; or (4) the corresponding circRNA-miRNA or miRNA-mRNA axes are derived from computational predictions only. To address all the above limitations, we present CircMiMi (circRNA-miRNA-mRNA interactions), Python-based software to identify circRNA-miRNA-mRNA interactions across 18 species (including 16 animals and 2 plants) according to user-provided coordinates of circRNA junctions. It is noteworthy that the CircMiMi-identified circRNA-miRNA-mRNA interactions are derived from circRNA-miRNA and miRNA-mRNA axes with the support of computational predictions and/or experimental data (e.g., CLIP or microarray data). Executable files for visualizing the constructed circRNA-miRNA-mRNA regulatory axes are provided. CircMiMi also provides optional functions for examining alignment ambiguity of circRNAs and RCSs across/within the flanking sequences of back-splice junctions (BSJs). We further utilize CircMiMi to construct circRNA-miRNA-mRNA interactions based on the circRNAs collected in NeuroCirc [22], which deposits more than 26,000 circRNAs derived from human brain tissues or neuronal cells. According to differentially expressed circRNAs (DE-circRNAs) and miRNAs (DE-miRNAs) in autism spectrum disorder (ASD), a rich set of ASD-associated circRNA-miRNA-mRNA axes is also provided. With the ability to identify circRNA-miRNA-mRNA axes across species, CircMiMi also identifies mouse circRNA-miRNA-mRNA axes based on human-mouse orthologous circRNAs. Enrichment analysis further shows that the targets of the DE-circRNA-associated axes are enriched for ASD risk genes. CircMiMi is highly automated and modularized, which makes it convenient to expand to include new experimental data in the future.
Alignment ambiguity and RCS checking The workflow of CircMiMi is illustrated in Fig. 1a. CircMiMi can automatically collect the newest version of the annotation information if users do not specify the version. By inputting the coordinates of the BSJs, CircMiMi can automatically determine the donor and acceptor sites of the circRNA candidates according to the input coordinates and strands.
Since previous studies suggested that BSJs required canonical splice signals and tended to be located at well-annotated exon boundaries [23,24], CircMiMi offers users an optional function for checking whether the input BSJs are located at well-annotated exon boundaries and whether both the donor and acceptor splice junctions of a circRNA event are located at annotated boundaries from the same annotated co-linear transcript. For accuracy, this optional module also checks whether the input BSJs of circRNAs are potential false positives derived from ambiguous alignments with an alternative co-linear explanation or multiple hits. The exonic circle sequences flanking the BSJs (100 bp upstream and downstream sequences of the junctions; see Fig. 1b) are concatenated using bedtools [25]. The concatenated sequences are then BLAT-aligned [26] against the reference genome and well-annotated transcripts, with the default parameter set (-tileSize = 11 -stepSize = 11 -repMatch = 1024). A retained concatenated sequence should not map to an alternative co-linear matched sequence with > 80% similarity to the concatenated sequence, or to multiple hits with BLAT-mapping scores < 3 (Fig. 1b). Since different BLAT parameters may result in different alignment results, we realigned the concatenated sequences against the reference genome and well-annotated transcripts with a new BLAT-parameter set (-tileSize = 9 -stepSize = 9 -repMatch = 32,768), which is quite different from the default set, and the alignment ambiguity checking was performed again. Only the concatenated sequences that pass the alignment ambiguity checking based on both BLAT-parameter sets are retained. Such processes were demonstrated to effectively detect potentially false circRNA candidates arising from alignment ambiguity [18,19,27].
Fig. 1 Overview of CircMiMi. a Flowchart of the overall pipeline. b Schematic illustration of back-splicing events arising from ambiguous alignments with an alternative co-linear explanation (left) and multiple hits (right). For the left panel, the concatenated sequence of the back-splicing event has an alternative co-linear explanation on another chromosome. For the right panel, the concatenated sequence also non-co-linearly maps to another genomic region. c Schematic illustration of sequences flanking circularized exons and the corresponding RCS_across and RCS_within. In this case, RCS_across = 5, RCS_within = 4, and RCS_across - RCS_within = 1. d The four main CircMiMi command lines: (1) collecting all required resources; (2) checking BSJ, alignment ambiguity, and RCS; (3) identifying circRNA-miRNA-mRNA interactions; and (4) generating a Cytoscape-executable file. For (2), "-dist" represents the considered length of the flanking sequences (± N nucleotides of the back-splice site; default value = 10,000). For (3), "-miranda-sc" represents the miRanda score threshold (default value = 155). For (2) and (3), "-p" represents the number of processor cores (default value = 1).
This module further provides a function to examine RCSs across the flanking sequences (RCS_across) or within individual flanking sequences (RCS_within) of the BSJs (see Fig. 1c). For examining RCS_across of a circRNA, both flanking sequences (± N nucleotides of the back-splice site; N is a parameter representing the number of nucleotides) are aligned against each other using BLAST [28] with the parameters -task blastn -word_size 11 -strand minus. For examining RCS_within of a circRNA, each individual flanking sequence is aligned against itself using BLAST with the same parameters stated above. Of note, N is a user-defined parameter. The potential RCSs should simultaneously satisfy the following rules: bitscore > 100, alignment length > 50 bp, and identity > 80%. The BLAST parameters are set according to a previous study [29].
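A minimal sketch of this RCS screening step is given below: the two flanking sequences are written to FASTA files, blastn is invoked with the parameters stated above, and tabular hits are kept only if they pass the thresholds just described. The helper function, file names, and output parsing are illustrative assumptions, not the actual CircMiMi implementation.

```python
import subprocess

def rcs_hits(query_fa, subject_fa):
    """Align one flanking sequence against the other on the minus strand
    (as in the RCS_across check) and keep hits with bitscore > 100,
    alignment length > 50 bp, and identity > 80%."""
    out = subprocess.run(
        ["blastn", "-task", "blastn", "-word_size", "11", "-strand", "minus",
         "-query", query_fa, "-subject", subject_fa,
         "-outfmt", "6 qseqid sseqid pident length bitscore"],
        capture_output=True, text=True, check=True).stdout
    hits = []
    for line in out.splitlines():
        qid, sid, pident, length, bitscore = line.split("\t")
        if float(bitscore) > 100 and int(length) > 50 and float(pident) > 80:
            hits.append((qid, sid, float(pident), int(length), float(bitscore)))
    return hits

# Hypothetical usage: FASTA files holding the upstream and downstream flanks.
print(len(rcs_hits("flank_up.fa", "flank_down.fa")))
```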
Identification of circRNA-miRNA interactions CircMiMi first generates a putative exonic circle sequence for each circRNA event based on the user-specified species, gene annotations and versions (Ensembl, Ensembl Metazoa, Ensembl Plants, or GENCODE) (Table 1). According to the mature miRNA sequences extracted from miRBase [30], two procedures were utilized to identify miRNA binding sites in the predicted circle sequences of circRNAs and construct potential circRNA-miRNA interactions. The first procedure screens potential miRNA binding sites on circRNAs using miRanda 3.3a (https://bioconda.github.io/recipes/miranda/README.html) [31]. Here we use a stringent parameter set with pairing score > 155 and energy score < -20, as recommended by a previous study [32]. For each predicted miRNA, the number of predicted binding sites and the highest miRanda score of these binding site(s) are shown. Binding sites spanning the BSJ were also considered and represented. For human and mouse, CircMiMi provides the second procedure, which screens the miRanda-predicted miRNA binding sites and represents the binding sites supported by at least one Argonaute (Ago) CLIP-seq experiment. The Ago CLIP-seq data were downloaded from ENCORI [33] at http://starbase.sysu.edu.cn/. The liftOver tool [34] was employed to obtain the genomic coordinates of binding sites on the GRCh38 assembly.
Construction of circRNA-miRNA-mRNA interactions After that, the miRNA-mRNA interactions were extracted from miRDB (version 6) [35] and miRTarBase (version 7.0) [36]. The former collected miRNA-mRNA axes predicted by MirTarget (version 4) [37] across five species; the latter collected experimentally supported miRNA-mRNA axes across 23 species. Of note, for the miRTarBase-collected miRNA-mRNA axes, we only considered the axes from the 18 species with Ensembl-based annotations (Table 1). For human and mouse, CircMiMi also extracted miRNA-mRNA interactions from ENCORI, in which the miRNA binding sites were predicted by one or more miRNA-binding prediction tools and supported by at least one Ago CLIP-seq experiment [33]. By integrating the circRNA-miRNA interactions with the miRNA-mRNA interactions, CircMiMi then generates circRNA-miRNA-mRNA interactions based on the common target miRNAs of the circRNAs and mRNAs. For each input circRNA event, CircMiMi employs a hypergeometric test to examine whether the identified circRNA-mRNA pairs are significantly co-regulated by miRNAs [38]. The statistical significance (P value) is determined as P = Σ_{i=s}^{min(t, c)} C(t, i) C(N - t, c - i) / C(N, c), where C(·,·) denotes the binomial coefficient, N is the total number of miRNAs used to infer targets (circRNAs/mRNAs), t is the number of miRNAs that target the mRNA, c is the number of miRNAs that target the circRNA, and s is the number of miRNAs that target both the mRNA and the circRNA. The P values are then adjusted across all circRNA-mRNA pairs using false discovery rate (FDR) correction with the Benjamini-Hochberg (BH) procedure [39]. A circRNA-miRNA-mRNA axis is retained if its circRNA-mRNA pair is significantly co-regulated by miRNAs at FDR < 0.05.
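This test can be written down directly with the hypergeometric survival function. The sketch below computes the P value for a single circRNA-mRNA pair and applies Benjamini-Hochberg correction across pairs; it is a sketch of the procedure described above (using scipy and statsmodels), not the CircMiMi source code, and the counts in the usage example are hypothetical.

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def coregulation_p(N, t, c, s):
    """P(X >= s) for the number of shared miRNAs X under a hypergeometric
    null: N miRNAs in total, t targeting the mRNA, c targeting the circRNA,
    s targeting both."""
    return hypergeom.sf(s - 1, N, t, c)

# Hypothetical counts for a few circRNA-mRNA pairs.
pvals = [coregulation_p(2000, 60, 35, 8), coregulation_p(2000, 120, 35, 3)]
rejected, fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(list(zip(pvals, fdr, rejected)))
```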
Enrichment analysis ASD risk genes (and genes from mouse models) were downloaded from the Simons Foundation Autism Research Initiative (SFARI) database (09-02-2021 release) at https://gene.sfari.org/ [41]. The lists of genes encoding postsynaptic density (PSD) proteins and targets of FMR1, RBFOX1, and ELAVL1 were downloaded from Lee et al.'s study [42]. We assessed each ASD-relevant gene list for the targets of the axes using steps similar to those stated in our previous study [13]. For example, regarding the analysis of PSD gene enrichment for the target genes of the axes, we created a two-way contingency table with rows containing the numbers of PSD and non-PSD genes and columns containing the numbers of target genes and non-target genes. Here we used 20,070 protein-coding genes as the background set. We evaluated the statistical significance and odds ratio using a one-tailed Fisher's exact test with the fisher.test R function. P values were then FDR adjusted using BH correction. Human-mouse orthologous circRNAs were extracted from CircAtlas 2.0 [43] at http://159.226.67.237:8080/new/links.php. For the empirical gene enrichment analysis [13] in Fig. 3b, e, we also took the analysis of PSD gene enrichment for the target genes of the axes in Fig. 2b as an example. We examined whether the targets of the CircMiMi-identified axes (1764 genes; Fig. 3a) had a higher proportion (p_obs) of the PSD genes compared to a null distribution of the proportion observed in 10,000 rounds of random sampling. For each round, the same number (1764) of genes was randomly selected from the background set, and the proportions (p_i) for the 5 target groups and each ASD-relevant gene list were calculated. We then calculated the empirical P value (empP) for each gene list as empP = [1 + Σ_{i=1}^{10,000} I(p_i > p_obs)] / 10,001, where I(p_i > p_obs) equals 1 if p_i > p_obs and 0 otherwise. After that, empP values were also FDR adjusted using BH correction.
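The empirical test above can be sketched as follows: random gene sets of the same size as the target set are drawn from the background, the overlap proportion with the ASD-relevant list is computed for each draw, and empP is formed as defined above. The gene identifiers in the usage example are placeholders, not the actual target, ASD-relevant, or background gene lists.

```python
import numpy as np

def empirical_p(targets, relevant, background, n_perm=10_000, seed=1):
    """Empirical enrichment P value: compare the observed proportion of
    relevant genes among the targets with proportions obtained from random
    gene sets of the same size drawn from the background."""
    rng = np.random.default_rng(seed)
    relevant = set(relevant)
    p_obs = len(set(targets) & relevant) / len(targets)
    background = np.asarray(list(background))
    exceed = 0
    for _ in range(n_perm):
        draw = rng.choice(background, size=len(targets), replace=False)
        exceed += (len(set(draw) & relevant) / len(targets)) > p_obs
    return (1 + exceed) / (n_perm + 1)

# Hypothetical gene identifiers; real lists would be the 1764 target genes,
# an ASD-relevant gene set, and the 20,070 protein-coding background genes.
bg = [f"g{i}" for i in range(1000)]
print(empirical_p(bg[:50], bg[25:150], bg))
```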
Implementation CircMiMi provides four main functions: (1) collecting all required resources; (2) checking BSJ, alignment ambiguity, and RCS; (3) identifying circRNA-miRNA-mRNA interactions; and (4) generating a Cytoscape-executable file (Fig. 1d). CircMiMi is implemented in Python 3 (tested with 3.6, 3.7, 3.8, and 3.9) and tested on major Linux distributions. CircMiMi is straightforward to install via "pip install circmimi". The bedtools, BLAT, BLAST, and miRanda packages can be installed via "conda install -c bioconda bedtools=2.29.0 blat blast miranda". For user convenience, we also provide a program to automatically collect all required resources (genomic sequences, Ensembl- or GENCODE-based annotation/version, miRBase, ENCORI, miRTarBase, and miRDB) in a specified folder via the command line "circmimi_tools genref" (Fig. 1d). If users do not specify the genome/annotation versions, CircMiMi automatically accesses the newest versions from the corresponding web sites. Moreover, users can upload user-defined miRNA sequences or miRNA-target binding information into the specified folder (i.e., refs/) to determine circRNA-miRNA-mRNA interactions. The CircMiMi command lines also include the miRanda parameters for further screening circRNA-miRNA axes (Fig. 1d). The output tables include "summary_list" and "all_interactions". The former sums up the results through the CircMiMi screening processes, and the latter represents all identified circRNA-miRNA-mRNA interactions.
Result and discussion We employed the circRNA candidates collected in NeuroCirc [22] as an example for analyzing circRNA-mediated interactions based on CircMiMi. Of note, NeuroCirc encompassed 26,136 circRNA candidates, providing an integrative view of circRNA expression in human brain tissues. After alignment ambiguity checking, we found that 1531 out of the 26,136 (5.9%) circRNA candidates were likely to be derived from ambiguous alignments with an alternative co-linear explanation (480 events) or multiple hits (1051 events) (Additional file 1: Table S1). Ambiguous alignments may originate from repetitive sequences or paralogous genes, which often result in false positive circRNAs [19,27,44]. Compared with five other well-known circRNA databases, including circRNAdb [45], CircBase [46], CIRCpedia [47], CircFunBase [48], and CircAtlas [43], we found that the percentages of circRNAs derived from alignment ambiguity remarkably decreased with increasing numbers of supporting circRNA databases, regardless of the type of alignment ambiguity (circRNA candidates with an alternative co-linear explanation or multiple hits; Fig. 2a). The percentage was significantly reduced from 65% (the circRNAs were detected in NeuroCirc only) to 0% (the circRNAs were detected in NeuroCirc and all the five databases examined) (Fig. 2a), supporting the association of circRNA reliability with alignment ambiguity. This result suggests that the circRNA candidates passing the alignment ambiguity checking may be relatively reliable for further investigation. Since back-splicing can be facilitated by RCSs residing in the sequences flanking circularized exons [3,4,20] and affected by the competition of RCSs across flanking regions (RCS_across) or within individual regions (RCS_within) [20], RCSs have often been used to investigate circRNA formation (e.g., [49] and [29]). We found that the majority (76%; 19,865 out of the 26,136 circRNAs) of the NeuroCirc-identified circRNAs were observed to have RCSs (RCS_across) in the flanking sequences of their back-splice sites (± 10 k nucleotides of the back-splice site) (Additional file 1: Table S1). Furthermore, 3847 out of the 19,865 circRNAs exhibited (RCS_across - RCS_within) ≥ 1 (Additional file 1: Table S1).
The RCS information may provide a starting point for further analysis of circularization, although the existence of RCS is not the absolutely necessary factor for circRNA formation in non-mammalian species [10] such as Drosophila melanogaster [50] and Oryza sativa [51]. Both the checks of alignment ambiguity and RCS are optional in the CircMiMi pipeline (Fig. 1d). Regarding the 26,136 NeuroCirc circRNAs, we proceeded to construct potential circRNA-miRNA-mRNA interactions (Fig. 2b). We first identified potential circRNA-miRNA axes using miRanda and experimental data (Ago CLIP-seq data from ENCORI), respectively. As shown in Fig. 2b, a total of 416,103 circRNA-miRNA axes were identified. According to the miRNAs of the 416,103 circRNA-miRNA axes, we extracted miRNA-mRNA interactions from one database (miRDB) that contained bioinformatically predicted miRNA-mRNA axes and two databases (miRTarBase and ENCORI) that contained experimentally-supported miRNA-mRNA axes. A total of 2,849,904 miRNA-mRNA interactions were extracted. After that, the 468,117,154 potential circRNA-miRNA-mRNA interactions were constructed according to the common target miRNAs of the circRNAs and mRNAs. In terms of the experimental evidence of circRNA-miRNA axes and miRNA-mRNA axes, the identified circRNA-miRNA-mRNA interactions can be classified into three categories as follows (Fig. 2b). Category 1 108,816,445 axes; both circRNA-miRNA axes and miRNA-mRNA axes were supported by experimental data. Category 2 230,384,315 axes; either circRNA-miRNA axes or miRNA-mRNA axes was supported by experimental data. Category 3 128,916,394 axes; other. Since the circRNA candidates in NeuroCirc were derived from human brain tissue samples from neuronal differentiation datasets or individuals with neurodevelopmental diseases [22], it is of interest to investigate the circRNAs that were perturbed in neurodevelopmental diseases (e.g., Autism spectrum disorder (ASD) and schizophrenia) and the corresponding circRNA-miRNA-mRNA interactions. In terms of ASD, our previously identified DE-circRNAs (60 circRNAs) in ASD [13] were all included in NeuroCirc (Additional file 1: Table S1). On the basis of the 60 DE-circRNAs, CircMiMi identified 79,552 circRNA-miRNA-mRNA axes ( Fig. 3a and Additional file 2: Table S2). Of the 79,552 axes, we further extracted 1777 circRNA-miRNA-mRNA axes that involved DE-circRNAs and DE-miRNAs simultaneously ( Fig. 3a and Additional file 2: Table S2). Of note, the extracted DE-miRNAs [52] were derived from the same postmortem brain samples used for identification of the 60 DE-circRNAs. We then examined whether the target genes of the 1777 circRNA-miRNA-mRNA axes were implicated in ASD. We performed enrichment analyses (see Methods) for the gene sets previously implicated in ASD from SFARI [41] and other classes of ASD-relevant genes, including genes encoding postsynaptic density (PSD) proteins [53] and genes whose transcripts were bound by the three RNA binding proteins: FMR1 [54], RBFOX1 [55], and ELAVL1 [56]. Indeed, these target genes showed significant enrichment (all FDR < 0.05 by one-sided Fisher's exact test and empirical gene enrichment analysis) for each class of ASD-relevant genes (Fig. 3b). The 70 circRNA-miRNA-mRNA axes that simultaneously involved DE-circR-NAs, DE-miRNAs, and SFARI genes were illustrated in Fig. 3c (the detailed information of the identified interactions and ASD relevance were given in Additional file 2: Table S2). 
These data may provide a useful resource for further investigating regulatory mechanisms in ASD pathophysiology. Moreover, considering the 5 DE-circRNAs examined above (Fig. 3a), 3 were orthologous to mouse circRNAs according to the CircAtlas annotation [43]. On the basis of the 3 mouse circRNAs, CircMiMi identified 20,613 circRNA-miRNA-mRNA axes, which were associated with 62 miRNAs and 1741 target genes in mouse (Fig. 3d and Additional file 2: Table S2). Intriguingly, we found that these target genes were significantly enriched for the SFARI genes based on mouse models (Fig. 3e). This implies that the mouse circRNA-miRNA-mRNA axes derived from DE-circRNAs in human ASD brains are helpful for further investigation of regulatory mechanisms underlying ASD. Conclusion In this work, we describe a well-tested stand-alone software called CircMiMi, which allows users to examine alignment ambiguity and RCSs of the input circRNA candidates, construct potential circRNA-miRNA-mRNA interactions, and visualize the identified circRNA-miRNA-mRNA axes. We utilized CircMiMi to identify circRNA-miRNA-mRNA interactions for all the circRNAs collected in NeuroCirc, a large-scale resource of circRNAs in the human brain. We further constructed circRNA-miRNA-mRNA interactions comprising DE-circRNAs and DE-miRNAs in human ASD and found that the targets of the axes were enriched for ASD-relevant genes, providing important insights into the underlying molecular mechanisms in ASD etiology. Since CircMiMi can be applied to circRNA candidates derived from multiple species, we also constructed mouse circRNA-miRNA-mRNA interactions based on human-mouse orthologous circRNAs that were previously identified as DE-circRNAs in human ASD brains. Our results revealed that the targets of such constructed axes were enriched for ASD risk genes based on mouse models. Taken together, this user-friendly tool may contribute to the evaluation of circRNA reliability, investigation of circRNA formation, and cross-species functional analyses of circRNA-associated regulatory interactions, expanding our understanding of this important but understudied class of transcripts. CircMiMi will be continually updated as new experimental data on circRNA-miRNA and miRNA-mRNA interactions become available. Availability and requirements Project name: CircMiMi.
2022-05-07T13:11:33.155Z
2022-05-06T00:00:00.000
{ "year": 2022, "sha1": "837d904438ce7a015e7d5fea46d83fb2e0cecb71", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ea777b0d4eb540d8119321d63892a0634266213f", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
249127033
pes2o/s2orc
v3-fos-license
Metagenomics Reveals the Diversity and Taxonomy of Carbohydrate-Active Enzymes and Antibiotic Resistance Genes in Suancai Bacterial Communities Suancai, as a traditional fermented food in China with reputed health benefits, has piqued global attention for many years. In some circumstances, the microbial-driven fermentation may confer health (e.g., probiotics) or harm (e.g., antibiotic resistance genes) to the consumers. To better utilize beneficial traits, a deeper comprehension of the composition and functionality of the bacterial species harboring catalytically active enzymes is required. On the other hand, ingestion of fermented food increases the likelihood of microbial antibiotic resistance genes (ARGs) spreading in the human gastrointestinal tract. Besides, the diversity and taxonomic origin of ARGs in suancai are little known. In our study, a metagenomic approach was employed to investigate the distribution structures of CAZymes and ARGs in the main bacterial species in suancai. Functional annotation using the CAZy database identified a total of 8796 CAZymes in the metagenomic data. A total of 83 ARGs were detected against the CARD database. The most predominant ARG category is multidrug-resistance genes. The ARGs of the antibiotic efflux mechanism are mostly in Proteobacteria. The resistance mechanism of ARGs in Firmicutes is primarily antibiotic inactivation, followed by antibiotic efflux. Due to the abundance of species with different ARGs, strict quality control of microbial species, particularly those carrying many ARGs, is vital for decreasing the risk of ARG acquisition through consumption. Ultimately, we significantly widen the understanding of suancai microbiomes by using metagenomic sequencing to offer comprehensive information on the microbial functional potential (including CAZyme and ARG content) of household suancai. Introduction The consumption of traditional fermented food is very widespread, with renowned health benefits [1]. The metabolic activities of microbiota cause fermentation, which converts natural ingredients in food into a diverse range of molecules that constitute the unique composition of the eventual fermented food [2]. The microbial diversity is unique to each food type and influenced by the ingredients in the manufacturing process [3]. In homemade fermented food, various microorganisms that contribute to traditional fermentation come mostly from the environment, and especially from the raw materials of fermented food [4]. Suancai is a traditional fermented food prepared using traditional approaches in the northeast of China, where it is one of the most significant fundamental foodstuffs. During the preparation of traditional Chinese suancai, spontaneous fermentation without the use of starter cultures or sterilization results in the proliferation of numerous microorganisms. On account of the crucial role of the microorganisms in the fermentation process, a thorough comprehension of the functional potential of the suancai microbiota is essential for improving the flavor and safety of traditional fermented food. Metagenomic sequencing has been shown to be an effective approach for defining the microbiota in fermented foods, obtaining species-level taxonomic resolution and predicting the functional potential [5,6]. Metagenome sequencing aids in the study of biocatalyst biodiversity in nature.
That is to say, metagenomics has propelled synthetic biology study forward by discovering expression systems, proteins and bioactive compounds with a wide range of industrial applications [7,8]. It is desirable to investigate traditional homemade suancai for uncovering the microbial communities harboring biotechnologically important enzymes that are catalytically active during fermentation. Due to the misuse or overuse of antibiotics in agricultural, animal husbandry and human medical settings, ARGs have received a lot of attention around the world as an emergent environmental genetic contaminant [9][10][11]. ARGs have been found in a large number of microbial genomes [12]. Concerns about ARGs in fermented products should be a priority, given the possibility that certain microbes could shape the gut microbiome via fermented food supplements [13,14]. That would mean that food (meat and vegetables) not only acts as a reservoir for ARGs and antibiotic resistance (AR) bacteria, but also as a mediator for the transmission of ARGs and AR bacteria from the surroundings to humans via food consumption [15,16]. As a result, it is critical to improve our understanding of the existence and transmission of ARGs through food consumption [17]. A variety of ARGs encoding resistance to a wide range of antibiotics have been discovered in foodborne bacteria [18,19]. Many species-centric studies focused on the relationships between AR bacteria and the ARGs they hold [20]. The findings revealed that species belonging to the genera Enterococcus, Lactobacillus, Streptococcus, Lactococcus, Pediococcus and Weissella harbor genes conferring resistance to tetracycline, vancomycin, macrolide, erythromycin and streptomycin [21][22][23]. The ARGs of AR bacteria could be transmitted to bacteria in the gastrointestinal system through horizontal gene transfer, so it is critical to reduce the spread of AR through consumption. Current research on foodborne bacteria primarily focuses on individual pathogens and identifier microbes [24]. ARGs allow bacteria to survive in the face of antibiotics, and resistance to antibiotics is reliably evaluated by phenotypic testing of isolates against a variety of antibiotics in food microbiology labs [25]. Nevertheless, the time required for this method, which relies on bacterial growth rates, can vary from one day to several weeks, and a high proportion of microbiota cannot be isolated in standard culture media [26,27]. Furthermore, horizontal gene transfer (HGT) mechanisms have shown that commensal and beneficial bacteria can acquire antibiotic resistance from pathogenic strains, highlighting the importance of studying the full complement of ARGs from the entire bacterial community (resistome) rather than single isolates [28,29]. Food microbiology is being revolutionized by metagenomics, which has resulted in a huge change from phenotype-based to genotype-based antibiotic resistance identification [30]. Nevertheless, the ARG distribution may not fully represent the actual antibiotic resistance phenotypes of the microbial taxa, especially in the case of dead bacteria [31]. Nonetheless, ARG profiles do reveal the resistance potentials of microbial species in varying circumstances and with various antibiotic types. More importantly, a high-throughput metagenomic approach can comprehensively provide insights into the complex community of microbial species (microbiome) as well as the pattern of antibiotic resistomes carried by those species [24,32].
In this study, three samples were collected at different stages in the suancai fermentation process, and the distribution and phylogenetic patterns of carbohydrate-active enzymes and ARGs were determined by a metagenomic approach. Our research will provide a foundation for future function mining of the suancai microbiome. Sample Collection and Sequencing In this study, Chinese northeast suancai was processed and samples were collected at different time points following the procedure we previously described [33]. Briefly, the suancai brine was thoroughly mixed before being collected from the upper, middle and lower layers of the jar, respectively. The samples were collected every day during the fermentation process for physicochemical index measurement in triplicate. The nitrite content showed an increasing trend at the beginning of fermentation (before day 3), which accumulated to a nitrite peak at day 3. Afterwards the nitrite content sharply decreased, finally reaching a stable value at day 7. Based on the nitrite concentration, samples A (day 3), B (day 5) and C (day 7) were selected for sequencing. Metagenomic DNA was extracted using the QIAamp DNA Microbiome kit following the manufacturer's protocol (QIAGEN Inc., Germany). Sequencing libraries were generated from metagenomic DNA (1 µg) using the NEBNext® Ultra™ DNA Library Prep Kit for Illumina (NEB, USA) according to the manufacturer's protocol. Index codes were added to attribute sequences to each sample. In brief, DNA libraries of fragments (size of 350 bp) were prepared for each sample. The samples were sequenced on the Illumina NovaSeq 6000 platform at Novogene Bioinformatics Technology Co., Ltd. (Tianjin, China). Metagenome Assembly and Taxonomic Assignment Raw data were preprocessed in order to obtain clean data for subsequent analysis. The detailed processing steps for quality control are provided in the Supporting Information. The clean data were assembled and analyzed with SOAPdenovo software V2.04 [34]. The assembled scaftigs were then broken at the N connections, leaving scaftigs without Ns. The samples' clean data were mapped to each scaffold separately by Bowtie software V2.2.4 to obtain the unused reads, which were then combined and processed as mentioned above for mixed assembly. The abundance of each gene in each sample was calculated from the number of reads matched to the gene (r) and the gene length (L) [35][36][37]. To obtain the taxonomic annotation, the amino acid sequences of the predicted genes were aligned against the NCBI nr database with DIAMOND (blastp, cut-off E-value of 1 × 10−5) [38]. Taxonomic abundances were normalized by dividing the number of reads of a specific taxon by the total number of reads assigned to bacterial 16S rRNA in the sample. Functional Annotation To gain knowledge of the main functional and metabolic pathways, the Kyoto Encyclopedia of Genes and Genomes (KEGG) [39,40], Evolutionary Genealogy of Genes: Non-supervised Orthologous Groups (eggNOG) [41] and Carbohydrate-Active enzymes (CAZy) [42] databases were used for functional annotation of genes. Unigenes were aligned against these databases using BLASTP, and the mapped contigs were screened with an e-value threshold of 1 × 10−5. For each sequence, the best blast hit was used for further analysis [43,44].
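The length-normalized gene abundance referred to above is commonly computed as the reads-per-length value of a gene divided by the sum of reads-per-length values over all genes in the sample, in line with the cited references; the exact published formula may differ, so the sketch below should be read as an assumed form with made-up inputs.

```python
def relative_abundance(read_counts, gene_lengths):
    """Relative abundance per gene in one sample, assuming the common
    normalization G_k = (r_k / L_k) / sum_i(r_i / L_i), with r the number of
    mapped reads and L the gene length."""
    per_base = {g: read_counts[g] / gene_lengths[g] for g in read_counts}
    total = sum(per_base.values())
    return {g: v / total for g, v in per_base.items()}

# toy example: two genes with equal per-base coverage get equal abundance
print(relative_abundance({"geneA": 100, "geneB": 200},
                         {"geneA": 1000, "geneB": 2000}))
```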
ARGs Identification Antimicrobial resistome analysis was carried out by aligning unigenes to the CARD database v2.0.1 [45] using blastp (e-value ≤ 1 × 10−30). The ARG abundance was expressed as fragments per kilobase per million fragments of contigs containing ARGs. Based on the alignment results from the Resistance Gene Identifier (RGI) tool, the abundance distribution of resistance genes in each sample, the taxonomic attribution analysis and the analysis of ARG resistance mechanisms were performed. Statistical Analysis R-3.5.1 was used for statistical analysis. The heatmaps were transformed into Z values on the basis of relative abundance and were drawn with the "pheatmap" package. A dissimilarity matrix was generated on the basis of the abundance of unigenes using the Bray-Curtis index [46] with the package vegan. To identify the number of shared ARG subtypes across the three samples, a Venn diagram was created with jvenn (a Venn diagram tool). Metagenomic Assembly Revealed CAZymes Both eggNOG-based and KEGG-based results revealed the richness of functional capabilities in relation to carbohydrate transport and metabolism and amino acid metabolism in the suancai metagenomic data (Figure S1). Functional domains for synthesis, degradation and modification of complex carbohydrates are regarded as CAZymes (Carbohydrate-Active enzymes). The CAZy database is used to annotate CAZyme-encoding genes belonging to the six CAZy families: glycoside hydrolases (GHs), glycosyltransferases (GTs), polysaccharide lyases (PLs), carbohydrate esterases (CEs), auxiliary activities (AAs) and carbohydrate-binding modules (CBMs). The metagenomic contigs of the suancai samples were queried against the CAZy database, which revealed the highest number of CAZyme-encoding genes in sample A for each family (Figure 1). In line with the Bray-Curtis distance based on CAZy relative abundance, the CAZyme-encoding genes belonging to the six CAZy families were closer between samples B and C (Figure 2), which is in agreement with the eggNOG and KEGG analyses (Figure S2). This reflects that changes in microbiota composition cause different genes to function at varying time points in suancai fermentation. A total of 8796 putative CAZymes were discovered in the metagenomic results (Figure 3). To be specific, the maximum number of contigs was matched to GHs (4306), followed by GTs (2770), across the three metagenomes. The remaining putative CAZyme hits were assigned to CBMs (994), CEs (415), PLs (157) and AAs (154). GTs, present at high percentages in the metagenomic data, are acknowledged to catalyze glycosidic linkage synthesis by transferring a sugar moiety from phospho-activated sugar donors to saccharide or non-saccharide acceptors. The biosynthesis of disaccharides, polysaccharides and oligosaccharides is conducted by glycosyltransferase reactions [47].
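The sample-level clustering described above (Figure 2) rests on Bray-Curtis dissimilarities; the published analysis used the R package vegan, but the computation can be illustrated with a minimal sketch in which the abundance matrix is invented purely for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# rows = samples A, B, C; columns = (invented) CAZy family abundances
abundance = np.array([
    [120.0, 30.0, 5.0, 0.0],
    [ 80.0, 25.0, 4.0, 1.0],
    [ 75.0, 27.0, 3.0, 1.0],
])

# Bray-Curtis: sum|u_i - v_i| / sum(u_i + v_i) for each pair of samples
bc = squareform(pdist(abundance, metric="braycurtis"))
print(np.round(bc, 3))  # with these toy numbers, B and C are closest, mirroring Figure 2
```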
Phylogenetic Distribution of CAZymes Despite CAZymes being distributed throughout the suancai microbiome, phylogenetic analysis of CAZyme-encoding contigs demonstrated that a substantial proportion of CAZymes was contributed by bacteria belonging to the orders Pseudomonadales, Enterobacterales, Lactobacillales and Sphingobacteriales. The top 10 CAZymes in our metagenomic data are shown in Figure 4a. CBMs, with carbohydrate-binding activity, enhance the catalytic functions of CAZymes by making the carbohydrate-active modules more accessible to target substrates [48]. The CBM50 family, which comprises various enzymes belonging to the GH18, GH19, GH23, GH24, GH25 and GH73 families, i.e., enzymes that cleave peptidoglycan or chitin, was most abundantly present among the CBM modules. The presence of CBMs involved in binding to polysaccharides suggested efficient recognition of a wide spectrum of carbohydrate polymers by GH family enzymes.
Catabolic enzymes that catalyze the cleavage of O-glycosidic bonds in carbohydrates are known as glycoside hydrolases (GHs). These are high-efficiency catalysts for the hydrolysis of the most dominant and prevalent carbohydrates. Metagenome sequences encoding β-galactosidases (GH1), β-glucosidase (GH3), lytic transglycosylases (GH23) and other abundant enzymes were discovered. The heatmap depicted the variations in relative abundance of the top 35 CAZymes (Figure 4b). One of the dominant GH families was GH13, which is subdivided into ~40 subfamilies. Among the key enzymes of the GH13 family are α-amylase, α-glucosidase, oligo-α-glucosidase, sucrose phosphorylase and branching enzyme. GH70 enzymes are transglucosylases produced by lactic acid bacteria (LAB). Many LAB strains from fermented vegetables are considered to be potential probiotics with immunomodulatory activity in vitro and in vivo [49]. CAZymes in Lactobacillus are known to be important in probiotic function, biomass transformation and vegetable tissue softening. GH70 enzymes are very interesting biocatalysts with strong applications in the food, pharmaceutical and cosmetic sectors. Here we unveiled the microbiological distribution of GH family enzymes in suancai. Notably, GH70 enzymes were all mapped to Leuconostoc, belonging to the species L. mesenteroides, L. fallax, L. citreum, L. gelidum and L. carnosum (File S1). Some of these species were reported to be able to produce large amounts of extracellular polysaccharides, which can be employed as prebiotics or for other purposes in the food industry [50]. The aforementioned species were also frequently found in fermented vegetables. As a result, we assumed that these predominant LAB species might play vital roles in determining the functional and sensorial properties of suancai products. This provides a reference for the identification and characterization of GH70 enzymes in LAB. Occurrence and Characteristics of ARGs during Suancai Fermentation According to the results based on strict matches, the study characterized ARG occurrence and abundance during the suancai fermentation process. Using the CARD database and the RGI tool, a total of 83 ARGs were detected across the three samples, of which 65 were shared by all samples (Figure 5).
Compared with samples B and C, sample A contained a lower diversity of resistance genes. The diversity and abundance of ARGs increased obviously during suancai fermentation.
The top 20 abundant ARGs accounted for over 80% of all the annotated ARGs and were considered to be representative ARGs (Figure 6). This suggested that the distribution of ARGs was concentrated in suancai. Using KEGG, the representative ARG subtypes identified were mainly annotated to different classes (Table 1): multidrug resistance genes (adeF, OXA-141, Erm43, MexS, ErmD, OXA-50, mdsC, MexB, OXA-388, OXA-351, MexW), lincosamide (lnuA, lmrC, lmrD), aminoglycoside (APH3-Vla, APH3-VI), peptide (arnA), fosfomycin (fosB), fusidic acid (fusD) and tetracycline (tetS) resistance genes. Each sample matched these representative resistance genes. Across the three suancai samples, multidrug resistance was the most frequently assigned gene category. Microorganisms tend to develop multidrug resistance to counter environmental pressures. The multidrug resistance genes adeF and OXA-141 were the most prevalent in distribution; to be specific, their relative abundance in each sample exceeded 10% (Table S1). For sample B, the lincosamide antibiotic gene lnuA was the most abundant ARG: it was significantly increased compared to those in sample A or sample C. The genes discovered in the suancai samples encoded resistance against lincosamides, aminoglycosides, macrolides, phenicols, fluoroquinolones and tetracyclines, etc., whereas the ARGs of the product itself might reduce the efficacy of these antibiotics. Figure 7 demonstrates that the relative abundance of most ARGs became lower during fermentation, being lowest in sample C. This phenomenon, together with the fact that no additives were used during traditional household suancai fermentation, suggests that the primary source of ARGs might mainly be the raw materials. Correlation of ARGs and Their Potential Hosts To confirm and compare the microbial origin of ARGs with that of the total microbial genes, the ARGs and total microbial genes were assigned to different taxa using the Resistance Gene Identifier (RGI) in the CARD Resistance Database. The species attribution analysis of resistance genes was conducted (File S2). Taxonomic annotation revealed that most of the dominant species that matched with ARGs were assigned to Pseudomonas (P. fluorescens, P. taetrolens and P. fragi), Serratia (Serratia sp. Leaf51), Erwinia (E. amylovora, E. pesicina), Stenotrophomonas (S. maltophilia), Rahnella and some LABs, such as Leuconostoc (L. gelidum, L.
carnosum), Lactobacillus (Lactobacillus versmoldensis and Lactobacillus sakei), Lactococcus (Lactococcus lactis) and Weissella (W. soli). The majority of the ARG-carrying species belonged to the Pseudomonas genus. They are common inhabitants of fermented vegetables due to the cold storage and their flexibility in nutritional requirements, which makes suancai a suitable substrate for them to grow. Figure 8 shows the taxonomic attribution results at the phylum level. In sample A, the distribution of ARGs and total microbial genes at the phylum level was 67% and 84% for Proteobacteria, and 15% and 10% for Firmicutes, respectively (Figure 8a). In sample B, the assignment of ARGs and total microbial genes at the phylum level was 65% and 72% for Proteobacteria, and 16% and 19% for Firmicutes (Figure 8b). In sample C, the distribution of ARGs and total microbial genes at the phylum level was 66% and 80% for Proteobacteria, and 14% and 9% for Firmicutes (Figure 8c). According to the findings, the majority of ARGs in homemade northeast suancai are found in Proteobacteria and Firmicutes. Somewhat differently, a previous study characterizing the profiles of ARGs in ready-to-eat vegetables showed that the phylum-level assignment of ARGs and total microbial genes was 62% and 39% for Proteobacteria, and 17% and 31% for Firmicutes, respectively [14]. Its result showed that, compared to other genes, ARGs were more likely to be found in Proteobacteria. However, in our homemade suancai, ARGs were more prone to exist in Firmicutes. The reason might be that most industrial ready-to-eat vegetable foods were produced by using starter cultures to initiate the fermentation, which probably contributes to their reduced diversity compared to spontaneously fermented foods [51]. This is unsurprising, given that homemade spontaneously fermented suancai has not been sterilized or treated with food additives that kill pathogenic as well as health-promoting/probiotic organisms. Furthermore, because homemade raw suancai is more vulnerable to the environment and to contamination during handling, its bacterial diversity is likely to be higher. This is in line with previous research which demonstrated that ARGs varied across food substrates and between starter-type and spontaneous fermentations [51].
In our study, APH3-Vla and APH3-VI, belonging to the APH gene family, originated from Gammaproteobacteria (including Yersiniaceae and Pseudomonadales). The presence of APH3-Vla and APH3-VI in fermented suancai is particularly worrying, as aminoglycoside 3'-phosphotransferases can mediate high-level resistance against a few aminoglycosides. These genes can be carried on plasmids or encoded on chromosomes; APH3 is the latter, but a transposon-mediated mechanism for spreading resistance genes has been proposed [52,53]. Because the gene had previously only been described in P. aeruginosa, and was recently reported to have allegedly originated from L. mesenteroides in yogurt, the pathways of resistance gene transfer associated with this gene should be evaluated further. The results show that the abundance of APH3-Vla and APH3-VI is highest in sample A. This phenomenon, together with the absence of additives during traditional household suancai fermentation, raises the suspicion that the source of the APH genes may be a direct result of the raw materials. With regard to the analytical data obtained in this study, some of the recognized ARG hosts were reported previously. For example, the resistance gene MexVW is commonly carried by Pseudomonas [54], and the resistance gene emrD has been determined in Enterobacter [55]. In our species attribution results, the gene adeF, whose CARD ontology classifies it as a gene conferring resistance to tetracycline and fluoroquinolone antibiotics, is only attributed to the phylum Proteobacteria (class Gammaproteobacteria). The gene OXA-141, a broad-spectrum β-lactamase previously detected in P. aeruginosa, is also only attributed to the phylum Proteobacteria (class Gammaproteobacteria). The gene lnuA, conferring resistance to the lincomycin antibiotic, mapped to the phylum Firmicutes (class Bacilli). Resistance Mechanisms The percentage of resistance mechanisms was calculated for each sample based on the ARG abundances. In our suancai samples, the most dominant mechanism of the detected ARGs was antibiotic efflux, which included 36 genes, followed by antibiotic inactivation, which included 30 genes. The remaining resistance mechanisms, such as antibiotic target alteration and antibiotic target protection, only included 17 genes. Because results at lower taxonomic levels lack reliability, visual results for the resistance mechanisms of the microbiome are presented only at the phylum level (Figure 9). The results show that the ARGs with an antibiotic efflux mechanism are mostly in Proteobacteria. The resistance mechanisms of ARGs in Firmicutes are mostly antibiotic inactivation, followed by antibiotic efflux. Notably, ARGs involving both the antibiotic target alteration and antibiotic efflux mechanisms were found only in P. syringae at the species level (Table S2), which is known as a plant pathogen. The ARG mechanism of antibiotic target protection only involves tetracycline (tetS, tetL, tet32) resistance genes, and is only detectable in Pseudomonas and Weissella at the genus level (Table S2). By comparison, previous studies on kefir strains and yogurt products discovered antibiotic target protection as the only mechanism. In one yogurt grain sample, antibiotic target alteration, antibiotic target replacement (51.28%) and antibiotic target protection (48.72%) were the probable resistance mechanisms [56]. The mechanism differs markedly across fermented vegetables and dairy products.
Microbial diversity and functional changes (e.g., AR) are driven by the fermentation substrates in fermented foods, and the raw material has a significant influence on the resistance mechanisms of the microbiome in fermented foods. Compared to other fermented products, household fermented vegetables, with their more abundant microbes, require more attention to resistance mechanisms of antibiotic efflux. The metagenomic analyses in this study depend on shotgun DNA sequencing and cannot yet be directly linked to the phenotypes of antimicrobial resistance, especially those originating from dead bacteria. Nevertheless, published studies show that naturally competent bacteria can take up DNA released by dead microorganisms [57], implying their potential contribution to the transmission of ARGs. In terms of food security, quality control for microbial species with abundant and diverse ARGs is essential for minimizing the risk of ARG incorporation during the consumption of traditional suancai. The findings on resistance mechanisms will serve as a guide for further control measures for specific microbial species. Conclusions In this study, a metagenome sequencing method was used to investigate the metagenomics of suancai, a traditional fermented food in the northeast of China. KEGG-based and eggNOG-based analysis results revealed a significant potential for carbohydrate transport and metabolism and amino acid metabolism. The species encoded various kinds of CAZymes, notably GHs and GTs, implying their potential activities in carbohydrate metabolism. Phylogenetic analysis of CAZyme-encoding contigs showed that a large proportion of CAZymes was contributed by bacteria belonging to the orders Pseudomonadales, Enterobacterales, Lactobacillales and Sphingobacteriales. GH70 enzymes were present in L. mesenteroides, L. fallax, L. citreum, L. gelidum and L. carnosum. Taken together, the 8796 putative CAZymes discovered in the metagenomic data provide a thorough understanding of the presence of diverse CAZymes in the microbial species of suancai. Although ARGs have been found in a variety of environments, little is known about their distribution and phylogenetic information in fermented foods. The alignment results against the CARD database showed Pseudomonas to be the most abundant Gram-negative genus bearing ARGs in fermented suancai. Most ARGs exist in Proteobacteria and Firmicutes. The most predominant ARG category is multidrug-resistance genes.
The four main microbial resistance mechanisms in the suancai samples are antibiotic efflux, followed by antibiotic inactivation, antibiotic target alteration and antibiotic target protection. Therefore, it would be necessary to discreetly monitor the microbial subpopulation that holds ARGs and to optimize the sanitation conditions in suancai production processes to reduce the risk of drug-resistance gene transfer and to develop effective strategies to control AR. This study revealed a wealth of information about carbohydrate-active enzymes and antibiotic resistance genes in suancai. The knowledge presented here will provide significant opportunities for improving suancai production and harnessing its health-promoting potential in the future. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13050773/s1, Figure S1: Statistical map drawn from unigene annotation results indicating substantial representation of carbohydrate metabolism in the metagenomes; Figure S2: The cluster tree of CAZy family genes based on Bray-Curtis distance; Table S1: Percentage abundance of annotated ARGs in three samples; Table S2: Profile of resistance mechanism and taxonomic distribution of ARGs. File S1: Taxonomic Attribution Information of All Annotated CAZymes in the Metagenomes. File S2: Taxonomic Attribution Information of All Annotated ARGs in the Metagenomes.
2022-05-29T05:20:44.747Z
2022-04-27T00:00:00.000
{ "year": 2022, "sha1": "89ee07b3db50b3995c35db7b378487e16626edce", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "89ee07b3db50b3995c35db7b378487e16626edce", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
13696346
pes2o/s2orc
v3-fos-license
A novel ultrafast-low-dose computed tomography protocol allows concomitant coronary artery evaluation and lung cancer screening Background Cardiac computed tomography (CT) is often performed in patients who are at high risk for lung cancer in whom screening is currently recommended. We tested diagnostic ability and radiation exposure of a novel ultra-low-dose CT protocol that allows concomitant coronary artery evaluation and lung screening. Methods We studied 30 current or former heavy smoker subjects with suspected or known coronary artery disease who underwent CT assessment of both coronary arteries and thoracic area (Revolution CT, General Electric). A new ultrafast-low-dose single protocol was used for ECG-gated helical acquisition of the heart and the whole chest. A single IV iodine bolus (70–90 ml) was used. All patients with CT evidence of coronary stenosis underwent also invasive coronary angiography. Results All the coronary segments were assessable in 28/30 (93%) patients. Only 8 coronary segments were not assessable in 2 patients due to motion artefacts (assessability: 98%; 477/485 segments). In the assessable segments, 20/21 significant stenoses (> 70% reduction of vessel diameter) were correctly diagnosed. Pulmonary nodules were detected in 5 patients, thus requiring to schedule follow-up surveillance CT thorax. Effective dose was 1.3 ± 0.9 mSv (range: 0.8–3.2 mSv). Noteworthy, no contrast or radiation dose increment was required with the new protocol as compared to conventional coronary CT protocol. Conclusions The novel ultrafast-low-dose CT protocol allows lung cancer screening at time of coronary artery evaluation. The new approach might enhance the cost-effectiveness of coronary CT in heavy smokers with suspected or known coronary artery disease. Background Cardiac computed tomography (CT) scan is an ideal diagnostic tool for identifying coronary artery disease in patients with low or intermediate risk [1]. In recent years, cardiac CT is being often performed in patients who are at high risk either for coronary artery disease or lung cancer. The update edition of the National Institute for Health and Care Excellence (NICE) guidelines recommends cardiac CT as the first-line diagnostic tool for patients with new-onset chest pain due to suspected CAD [2]. Also, symptomatic patients with known coronary artery disease and previous percutaneous coronary intervention who have an unclear stress test but whose presentation suggests a high likelihood of having an instent restenosis or a 'de novo' stenosis might benefit from cardiac CT [3]. In 2014, the U.S. Preventive Services Task Force recommended annual lung cancer screening with ultra-low dose computed tomography for current and former heavy smokers aged 55 to 80 years [4]. Indeed, lung cancer screening in patients with suspected or known coronary artery disease undergoing cardiac CT may provide the opportunity to implement recommendation for lung cancer screening in clinical practice [5]. The aim of this pilot study was to test the diagnostic ability and radiation exposure of a novel ultra-low-dose CT protocol that along with coronary artery evaluation allows lung screening with no increase in contrast or radiation dose. The new technique overcomes the limitation of a double dose of contrast and radiation usually needed to assess cardiac and lung regions during two different examinations. Study population We studied 30 current or former heavy smokers aged 55 to 79 years. 
All were symptomatic subjects with effort-induced or typical chest pain and suspected or known coronary artery disease. Subjects were excluded in case of contraindications to iodinated contrast such as allergies and chronic kidney failure, or if there was any suspicion of pregnancy (Table 1). All cases underwent cardiac CT for assessment of the coronary arteries. Additionally, all individuals had CT scanning for early lung cancer detection. Invasive coronary angiography was performed subsequently in all patients who had evidence of ≥ 1 coronary stenosis (> 70% reduction of vessel diameter). The study conforms to the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the Institutional Board Review Committee of our Institution (ID Number: 671/2017/D). All participants gave their written informed consent for the entire study, including radiation exposure. The STARD (Standards for Reporting of Diagnostic Accuracy Studies) guidelines for reporting studies of diagnostic accuracy were followed [6]. Study procedures All subjects underwent simultaneous CT evaluation of the coronary arteries and the thoracic area (Revolution CT, General Electric, Boston, MA, US). The CT scanner operates in prospectively ECG-triggered sequential scanning mode, i.e. a tool adopted in spiral acquisitions in order to optimize radiation dose by adjusting the x-ray tube current. In case of heart rate > 70 beats/min, study subjects were given 50 mg of metoprolol orally 2 h before the CT examination. ECG-gated helical prospective acquisition started from the carina to the apex of the heart to evaluate the coronary arteries (100 kVp, variable mAs, thickness 0.625 mm, about 6 s apnea), followed by a fast, low-dose acquisition of the whole chest, from the pulmonary apices to the bases (100 to 120 kVp, auto mAs adapted to the patient BMI, thickness 1.25 mm, 3 s apnea) (Fig. 1). A single IV iodine bolus (70-90 ml) was used. A bolus of 1 mL/kg of body weight (minimum of 70 mL) of iodixanol (Ultravist 370, Bayer HealthCare Pharmaceuticals, Berlin, Germany) followed by 80 mL of saline solution was continuously injected into a right antecubital vein through a catheter using a 5 mL/s flow rate. The segmental analysis of the coronary arteries was performed using the classification proposed by the American Heart Association, which takes into consideration 16 segments [7]. When present, the intermediate branch (labeled as segment 17) was included in the analysis. Independently of reference vessel size, a coronary stenosis was considered significant if the diameter reduction was ≥ 70%. The evaluation of any coronary stenosis was carried out, independently, by two investigators (FP and GP) who were blinded to the patients' clinical characteristics. Coronary assessment was performed through a dedicated workstation (Vitrea2 FX, Vital Images, Plymouth, MN, USA) which allows the automatic identification of the coronary arterial borders [8]. When data analysis could not be performed in all coronary artery segments, the proportion of non-assessable segments was quantified. [Table 1 fragment recovered from interleaved text: exclusion criteria included microalbuminuria and lack of consent. Fig. 1 caption: Ultrafast single protocol; (a) field of view of the cardiac scan, (b) field of view of the thoracic scan.]
Assessment of thoracic images obtained by CT scanning was performed by two investigators (AB and MP) with documented expertise in radiologic lung imaging. Pulmonary nodules were evaluated following the guidelines for screening of lung cancer published by the National Comprehensive Cancer Network (NCCN) [9]. A nodule was defined as a rounded or irregular opacity in the lung parenchyma, well or poorly defined, with a diameter ≤ 3 cm. Also, pulmonary nodules were labeled as solid opacity, if there was homogeneous soft-tissue attenuation, or as ground-glass opacity, if there was an area of hazy increased lung opacity with indistinct margins of pulmonary vessels. A positive test result in CT screening for lung cancer was defined by the finding of a noncalcified solid nodule ≥ 6 mm or a ground-glass nodule > 5 mm [9]. Contrast-to-noise ratio and signal-to-noise ratio were measured for quantitative assessment. Radiation doses delivered during CT scans were collected from the patient CT acquisition protocols. The dose-length product (DLP) was recorded for each patient. Effective radiation dose (ED) was estimated using the formula "ED (mSv) ≈ DLP × k", where k is a conversion coefficient specific for adult chests (0.014 mSv/(mGy × cm)) [10]. Quantitative coronary angiography Invasive coronary angiography was accepted as the reference standard for the purpose of the study. In the week preceding CT scanning, all patients had left and right coronary angiography using the transfemoral or transradial approaches. In order to identify coronary lesions with a significant (> 70%) stenosis, quantitative coronary angiography was used. Briefly, two investigators (FP and GT), blinded to the patients' characteristics, performed all measurements independently. Coronary angiograms were evaluated off-line by means of a system that allows automated detection of the coronary artery edges (Cardiovascular Medical System, MEDIS Imaging Systems, Leiden, The Netherlands) [11]. Prior to coronary angiography, a bolus of intracoronary nitroglycerin (200 micrograms) was administered. Assessment of coronary wall morphology was done on angiographic views obtained after administration of nitroglycerin [12]. Of note, the investigators took into consideration all coronary lesions and irregularities that could be visually detected on the coronary angiograms. When multiple coronary lesions were present in a single artery, they were labeled as distinct if separated by a normal tract of the arterial wall. The percent diameter stenosis was measured in the angiographic view that showed the most significant narrowing. For calibration, the catheter tip filled with contrast was used. This allowed the reference diameter to be derived by interpolation. We measured all coronary segments that had a diameter > 2 mm showing a stenosis ranging between 20 and 100% [11]. Assessment of coronary stenosis was based on the formula: (reference diameter − minimal lumen diameter)/reference diameter × 100. Statistics Data analysis included descriptive statistics. All data are reported as mean ± standard deviation, range, or percentage as appropriate. Statistical analysis was computed using SPSS 18.0.2 (IBM Corporation). The significance level for differences was set at p ≤ 0.05. Demographics Thirty current or former heavy smoker subjects with chest pain and suspected CAD (20 men, mean age: 66 ± 9 years; range: 59-78 years) underwent simultaneous CT evaluation of the coronary arteries and the complete thoracic area (Table 2).
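The two simple calculations defined above, effective dose from the dose-length product and percent diameter stenosis, can be made explicit with a minimal sketch; the input numbers below are illustrative only and are not patient data from this study.

```python
K_CHEST = 0.014  # mSv per mGy*cm, adult-chest conversion coefficient quoted in the text

def effective_dose(dlp_mgy_cm: float) -> float:
    """Effective dose (mSv) estimated as ED = DLP x k."""
    return dlp_mgy_cm * K_CHEST

def percent_stenosis(reference_diameter_mm: float, minimal_lumen_diameter_mm: float) -> float:
    """Percent diameter stenosis = (reference - minimal lumen) / reference x 100."""
    return (reference_diameter_mm - minimal_lumen_diameter_mm) / reference_diameter_mm * 100

print(effective_dose(93))          # ~1.3 mSv, of the order of the mean dose reported below
print(percent_stenosis(3.0, 0.8))  # ~73%, i.e., above the 70% significance cut-off
```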
Coronary artery evaluation At CT scanning, coronary artery segments were judged to be assessable in 28/30 (93%) patients, as there were no step artifacts and motion artifacts were uncommon (3-point score: 0.59 ± 0.55) and did not affect coronary evaluation. In 2/30 (7%) patients, a total of 8 segments were judged to be non-assessable because of motion artefacts. Accordingly, per-segment analysis disclosed an overall 98% assessability (477/485 segments). Coronary angiography was carried out in 10/30 (33%) patients who were found to have ≥ 1 coronary stenosis ≥ 70% at CT scanning (Fig. 2). Of note, the invasive evaluation disclosed that CT scanning had correctly shown the majority (20/21) of significant (> 70%) coronary stenoses. In one patient only, coronary angiography found an 80% stenosis that had been defined as non-significant at CT scanning. Furthermore, cardiac CT showed significant in-stent restenosis in one of the patients who had had percutaneous coronary intervention (Fig. 3). Pulmonary CT evaluation Pulmonary nodules were detected in 5 patients. All cases presented with solid nodules ≥ 6 mm (range: 6-11 mm), thus requiring follow-up surveillance chest CT to be scheduled. Three other patients presented with solid nodules smaller than 6 mm, which were therefore considered negative according to National Comprehensive Cancer Network (NCCN) recommendations. No recurrence was found in a patient 5 years after right upper lobectomy (Fig. 2). Technical characteristics The mean contrast-to-noise ratio and mean signal-to-noise ratio were 12.5 ± 4.6 and 12.9 ± 3.3, respectively. Effective dose was 1.3 ± 0.9 mSv (range: 0.8-3.2 mSv). Of note, no contrast or radiation dose increment was required with the new protocol as compared to the conventional coronary CT protocol. Discussion Cardiac CT offers a detailed anatomical assessment of CAD comparable to invasive coronary angiography [1]. Accordingly, CT coronary angiography has rapidly become an effective alternative to traditional invasive angiography for screening and evaluating CAD. Indeed, the new generation of CT scanners has been shown to yield high sensitivity and specificity in detecting angiographically significant stenoses [3]. Cardiac CT is said to be the preferred diagnostic test for evaluating patients with stable angina because of its favorable cost/benefit ratio. According to the guidelines of the National Institute for Health and Care Excellence (NICE), cardiac CT should be offered to all chest pain patients in whom CAD is suspected [2]. Especially in case of a high pre-test cardiovascular risk profile, cardiac CT has been shown by randomized controlled trials to improve detection of CAD when incorporated in chest pain pathways [13,14]. Of note, subjects at high cardiac risk are often also current or former smokers and therefore also at high risk of lung cancer. Indeed, tobacco is a major risk factor for both CAD and lung cancer [15,16], and previous studies have already ascertained that patients with coronary or cerebrovascular atherosclerosis are more likely to develop lung cancer [17]. Lung cancer remains the most common cancer in men and the third most common in women [18]. Early diagnosis is an important tool to reduce morbidity and mortality, and CT screening demonstrated a 20% decrease in lung cancer mortality for high-risk populations such as heavy smokers (> 30 pack/year) aged 55 to 74 years [19,20].
On the basis of available findings, the US Preventive Services Task Force now recommends annual lung cancer screening with ultra-low dose computed tomography for current and former heavy smokers aged 55 to 80 years. With this background, there is an increased awareness that high-risk subjects undergoing imaging for cardiovascular conditions could also benefit from lung cancer screening. Radiation exposure has long been felt as a major limitation of CT screenings, but recent investigations have shown that ultra-low-dose CT is safer to screen high-risk patients [21][22][23]. Recently, it has been shown that associating a chest ultra-low-dose CT scan to the cardiac CT protocol for patients with suspected CAD is useful for lung cancer screening [5]. Our investigation confirms and extends this previous finding, as it shows that ultra-lo-dose CT is effective and safe for simultaneous CAD and lung cancer screening. Indeed, we report the first-in-man application of a novel ultra-low-dose CT protocol that allows simultaneous coronary artery and lung screening. The results obtained in the first series of 30 highrisk subjects that underwent the novel examination show that either coronary artery and lung evaluations were feasible. Noteworthy, no contrast or radiation dose increment was required as compared to conventional coronary CT protocol. The novel technique overcomes the limitation of a double dose of contrast and radiation usually needed to assess cardiac and lung regions during two different examinations. Our study has some limitations. The major limitation lies on sample size. Even with important preliminary results, larger studies following up more patients for longer periods are needed to confirm the role of the novel ultrafast single protocol for lung cancer screening especially for reduction in mortality. Some studies have shown a reduced diagnostic performance to detect pulmonary nodules for obese patients undergoing fastlow-dose CT protocol, as higher body mass indexes are associated with increased image noise [23,24]. A further limitation is constituted by the lack of a control group. As a consequence, we were unable to compare the novel ultra-low dose CT protocol with the standard CT scans. Conclusions The new ultrafast-low-dose CT protocol seems to be effective and safe for simultaneous coronary artery evaluation and lung cancer screening. Such an approach may enhance the cost-effectiveness of coronary CT in heavy smokers with suspected CAD. Further studies are needed to assess the potential of the novel protocol to reduce cardiovascular and pulmonary morbidity and mortality in clinical practice.
2018-05-08T20:34:16.109Z
2018-05-08T00:00:00.000
{ "year": 2018, "sha1": "577a2292cb74371275accac381d8a27fdf67c61b", "oa_license": "CCBY", "oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/s12872-018-0830-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "577a2292cb74371275accac381d8a27fdf67c61b", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16261576
pes2o/s2orc
v3-fos-license
Electrodynamic coupling of electric dipole emitters to a fluctuating mode density within a nano-cavity We investigate the impact of rotational diffusion on the electrodynamic coupling of fluorescent dye molecules (oscillating electric dipoles) to a tunable planar metallic nanocavity. Fast rotational diffusion of the molecules leads to a rapidly fluctuating mode density of the electromagnetic field along the molecules' dipole axis, which significantly changes their coupling to the field as compared to the opposite limit of fixed dipole orientation. We derive a theoretical treatment of the problem and present experimental results for rhodamine 6G molecules in cavities filled with low and high viscosity liquids. The derived theory and presented experimental method is a powerful tool for determining absolute quantum yield values of fluorescence. Introduction.-Fluorescing molecules located close to a metal surface (at sub-wavelength distance) or inside a metal nano-cavity, dramatically change their fluorescence emission properties such as fluorescence lifetime, fluorescence quantum yield, emission spectrum, or angular distribution of radiation [1][2][3][4]. This is due to the change local density of modes of the electromagnetic field caused by the presence of the metal surfaces [5]. Although a large amount of studies have dealt with the investigation of this effect, they all have considered fixed dipole orientations of the emitting molecules, so that each molecule exhibits a temporally constant mode density during its de-excitation from the excited to the ground state. However, when molecules are dissolved in a solvent such as water, their rotational diffusion leads to rapid changes of dipole orientation even on the time-scale of the average excited state lifetime. We will show here that this dramatically influences the coupling of the molecules to the local, strongly orientation-dependent density of modes and the resulting excited state lifetime. This is enormously important for applications of tunable nanocavities for fluorescence quantum yield measurements. Theory.-Let us consider an ensemble of molecules within a planar nano-cavity, which had been excited by a short laser pulse into their excited state. Due to the electrodynamic coupling to the cavity, these molecules will exhibit an emission rate K that depends on their vertical position within the cavity, and on the angle θ between their emission dipole axis and the vertical. In what follows, we assume that the excited state lifetime is so short that one can neglect any translational diffusion of a molecule within the cavity. However, this is in general not the case for its rotational diffusion time which can be on the same order as the excited state life-time. Then, for a given position within the cavity, the probability density p(θ, t) to find a molecule still in its excited state at time t with orientation angle θ obeys the following evolution equation where the first term on the right hand side is the rotational diffusion operator [6] multiplied with rotational diffusion coefficient D, and the second term accounts for de-excitation. For the sake of simplicity, we omit any explicit indication of the position dependence of the involved variables. The emission rate K itself is given by a weighted average of the wavelength dependent rates k(θ, λ), where F 0 (λ) is the free-space emission spectrum of the molecules as a function of wavelength λ. 
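The weighted average that defines K can be evaluated numerically once the cavity-modified rates k(θ, λ) and the free-space spectrum F0(λ) are tabulated. The sketch below assumes the usual normalized form K = ∫F0(λ) k(θ, λ) dλ / ∫F0(λ) dλ, since the displayed equation is not reproduced above; the spectrum and rate arrays are placeholders, not the rhodamine 6G data of the study.

```python
import numpy as np

# Wavelength grid over the emission band of the dye (placeholder values).
lam = np.linspace(500e-9, 650e-9, 301)          # m

# Placeholder free-space emission spectrum F0(lambda), arbitrary units.
f0 = np.exp(-0.5 * ((lam - 560e-9) / 25e-9) ** 2)

def spectrally_averaged_rate(k_of_lambda, spectrum=f0, wavelengths=lam):
    """<k>_lambda = int F0 k dlambda / int F0 dlambda (assumed normalization)."""
    num = np.trapz(spectrum * k_of_lambda, wavelengths)
    den = np.trapz(spectrum, wavelengths)
    return num / den

# Placeholder cavity-modified rates for vertical / horizontal dipole orientations,
# standing in for the output of an electromagnetic mode-density calculation.
k_perp = 2.0e8 * (1.0 + 0.5 * np.sin(2 * np.pi * lam / 80e-9))   # 1/s
k_par  = 1.0e8 * (1.0 + 0.2 * np.sin(2 * np.pi * lam / 80e-9))   # 1/s

K_perp = spectrally_averaged_rate(k_perp)
K_par = spectrally_averaged_rate(k_par)
print(K_perp, K_par, K_perp - K_par)   # K_perp, K_par and Delta-K as used further below
```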
For a planar cavity, the rate k(θ, λ) can be decomposed into the contributions of a vertically and a horizontally oriented emitter, with rates k_⊥(λ) and k_∥(λ), respectively. Within the semi-classical theory of dipole emission [7], these rates are expressed in terms of the free-space non-radiative and radiative transition rates k_nr and k_rad, the free-space excited-state lifetime τ_0, the intrinsic fluorescence quantum yield Φ, the wavelength-dependent emission rates S_µ(λ) of an oscillating electric dipole with orientation µ (either ⊥ or ∥) within the cavity, and the free-space emission rate S_0, which is independent of orientation and wavelength (thus neglecting optical dispersion of the solvent). The emission rates S_µ(λ) are calculated in a semi-classical way by first using a plane-wave representation of the electromagnetic field of an emitting electric dipole of given orientation (and position) [8]; second, calculating the interaction of each plane-wave component with the cavity; and finally finding the emission rate as the integral of the Poynting vector of the total field over two surfaces sandwiching the emitter on both sides. An exemplary result of such a calculation is shown in Fig. 1. The initial distribution p(θ, t = 0) right after excitation is set by the polarization and intensity of the focused excitation light. These can be found by again expanding the electromagnetic field of the focused laser beam into a plane-wave representation [9,10] and calculating the interaction of each plane wave with the cavity [11]. If one denotes the horizontal and vertical components of the excitation intensity at the position of the molecules by I_∥ and I_⊥, respectively, then p(θ, t = 0) is fully determined by I_∥ and I_⊥. Computational results for I_⊥ and I_∥ are shown in Fig. 2 for the same cavity geometry as in Fig. 1 (for better visualization of the cavity's geometry, the figure also shows the silver layers as gray-shaded areas; its left side gives the normalized excitation intensity for horizontally oriented molecules, its right side that for vertically oriented molecules). Next, the solution to Eq. (1) can be found by expanding p(θ, t) into a series of Legendre polynomials P_ℓ(cos θ), where the a_ℓ(t) denote time-dependent expansion coefficients. Inserting this expansion into Eq. (1) yields an infinite set of ordinary differential equations for the a_ℓ(t), with a transition matrix M defined by projection integrals over the Legendre polynomials and written with the abbreviations K_⊥,∥ = ⟨k_⊥,∥(λ)⟩_λ and ∆K = K_⊥ − K_∥. Carrying out the integration shows that only a few components of M are non-vanishing. From the initial condition, Eq. (5), one finds that the only non-vanishing initial values of the a_ℓ are those with ℓ = 0 and ℓ = 2, with a_0(t = 0) = 1/2. Although Eq. (7) represents an infinite set of differential equations, it turns out that for our experimental conditions (see below) truncating the series expansion of Eq. (6) at a maximum ℓ_max = 10 yields an accurate solution that does not change when this truncation value is increased further. It remains to find an expression for the observable fluorescence emission. This is given by an integral over the whole inner space of the cavity and over all wavelengths (denoted by ⟨·⟩_λ), where u(θ, λ) is the orientation- and wavelength-dependent fluorescence detection efficiency.
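A minimal numerical sketch of the truncated Legendre expansion just described. Because the displayed equations are not reproduced above, the angular dependence K(θ) = K_∥ + ∆K cos²θ is an assumption (the standard planar-cavity form, consistent with the ∆K abbreviation), the matrix is built by Gauss–Legendre projection rather than from the paper's closed-form elements, and all rate values and the initial a_2 coefficient are placeholders; only the truncation at ℓ_max = 10 follows the text.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval
from scipy.integrate import solve_ivp

# Assumed inputs (placeholders): rotational diffusion and orientation-averaged rates.
D = 5.0e7                      # rotational diffusion coefficient, 1/s
K_par, K_perp = 1.0e8, 2.0e8   # 1/s
dK = K_perp - K_par
l_max = 10                     # truncation value quoted in the text

def P(l, x):
    """Legendre polynomial P_l(x)."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legval(x, c)

# K(theta) = K_par + dK * cos^2(theta), with x = cos(theta) (assumed form).
x_q, w_q = leggauss(64)                      # Gauss-Legendre nodes/weights on [-1, 1]
K_x = K_par + dK * x_q ** 2

# Build the coupling matrix by projection; only l' = l, l +/- 2 couple for this K(theta).
ls = np.arange(0, l_max + 1)
M = np.zeros((l_max + 1, l_max + 1))
for m in ls:
    for l in ls:
        integral = np.sum(w_q * K_x * P(m, x_q) * P(l, x_q))
        M[m, l] = (2 * m + 1) / 2.0 * integral
M += np.diag(D * ls * (ls + 1))              # rotational-diffusion part (diagonal)

# Initial condition: only a_0 and a_2 are non-zero (the a_2 value here is a placeholder).
a0 = np.zeros(l_max + 1)
a0[0], a0[2] = 0.5, 0.1

sol = solve_ivp(lambda t, a: -M @ a, (0.0, 20e-9), a0,
                t_eval=np.linspace(0.0, 20e-9, 400), method="LSODA")
# sol.y[l] holds a_l(t); the detected signal is a weighted sum of a_0, a_2 and a_4 (see below).
```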
Due to the rapid fall-off of the excitation intensity when moving a few micrometers away from the center of the focused laser beam, the integration over space can be cut off accordingly. Similarly to the emission rate, the detection efficiency can be decomposed into the efficiencies u_⊥(λ) and u_∥(λ) for a vertically and a horizontally oriented emitter, respectively. The most significant cause making these detection efficiencies different is the strongly orientation-dependent angular distribution of radiation of the emitters, which is collected differently by detection optics with a finite aperture. The detection efficiencies are again calculated via a plane-wave representation of the emitted electromagnetic field; for details see [12,13]. It should be noted that the detection efficiency goes down to zero when approaching the silver mirrors, so that only fluorescence from molecules at least a few nanometers away from the cavity surfaces contributes to the detected signal. When inserting the expansion (6) into Eq. (11) and integrating over θ, one finds that only the amplitudes a_ℓ with ℓ ∈ {0, 2, 4} contribute to the final result, each weighted by a constant factor C_ℓ. Finally, the observable mean fluorescence lifetime τ is found from this detected decay. Experiment.-A homemade nano-cavity consists of two silver mirrors with sub-wavelength spacing. The bottom silver mirror (35 nm thick) was prepared by vapor deposition onto commercially available and cleaned microscope glass cover slides (thickness 170 µm) using an electron beam source (Leybold Univex 350) under high-vacuum conditions (∼10 −6 mbar). The top silver layer (85 nm thick) was prepared by vapor deposition of silver onto the surface of a plano-convex lens (focal length 150 mm) under the same conditions. Film thickness was monitored during vapor deposition using an oscillating quartz unit and verified by atomic force microscopy. The complex-valued, wavelength-dependent dielectric constants of the silver films were determined by ellipsometry (nanofilm ep3se, Accurion GmbH, Göttingen) and subsequently used for all theoretical calculations. The spherical shape of the upper mirror allowed us to reversibly tune the cavity length by retracting the upper mirror from, or approaching it towards, the cavity center. It should be noted that within the focal spot of the microscope objective lens the cavity can be considered a plane-parallel resonator [14]. For the lifetime measurements, a droplet of a micromolar solution of rhodamine 6G molecules in water or glycerol was embedded between the cavity mirrors. The cavity length was determined by measuring the white-light transmission spectrum [14,15] using a spectrograph (Andor SR 303i) and a CCD camera (Andor iXon DU897 BV), and by fitting the spectra with a standard Fresnel model of transmission through a stack of plane-parallel layers, where the cavity length (distance between the silver mirrors) was the only free fit parameter. Fluorescence lifetime measurements were performed with a home-built confocal microscope equipped with an objective lens of high numerical aperture (UPLSAPO, 60×, N.A. = 1.2 water immersion, Olympus). A white-light laser system (Fianium SC400-4-80) with a tunable filter (AOTFnC-400.650-TN) served as the excitation source (λexc = 488 nm). The light was reflected by a dichroic mirror (Semrock BrightLine FF484-FDi01) towards the objective, and back-scattered excitation light was blocked with a long-pass filter (Semrock EdgeBasic BLP01-488R).
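The cavity-length determination described above, fitting the white-light transmission with a Fresnel model of a plane-parallel layer stack, can be sketched with a standard characteristic-matrix calculation at normal incidence. The silver refractive index below is a rough illustrative constant, not the ellipsometry data used in the study; the layer thicknesses follow the values quoted above, and the fill medium is taken as water.

```python
import numpy as np

def stack_transmittance(lams, layers, n_in=1.52, n_out=1.52):
    """Normal-incidence transmittance of a plane-parallel layer stack.
    layers: list of (complex refractive index, thickness in m), ordered from the incidence side."""
    T = np.empty(len(lams), dtype=float)
    for i, lam in enumerate(lams):
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / lam
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        denom = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
        t = 2 * n_in / denom
        T[i] = (n_out / n_in) * np.abs(t) ** 2
    return T

lams = np.linspace(450e-9, 750e-9, 601)
n_ag = 0.13 + 3.2j      # illustrative silver index; the study used measured dispersion data
n_water = 1.33

def model(cavity_length_m):
    layers = [(n_ag, 35e-9), (n_water, cavity_length_m), (n_ag, 85e-9)]
    return stack_transmittance(lams, layers)

# A least-squares fit would scan cavity_length_m, the only free parameter,
# to minimize the residual against the measured transmission spectrum.
T = model(250e-9)
print(lams[np.argmax(T)])   # wavelength of maximum transmission for this cavity length
```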
Collected fluorescence was focused onto the active area of an avalanche photo diode (PicoQuant τ -SPAD). Data acquisition was accomplished with a multichannel picosecond event timer (PicoQuant HydraHarp 400). Photon arrival times were histogrammed (bin width of 50 ps) for obtaining fluorescence decay curves, and all curves were recorded until reaching 10 4 counts at the maximum . Finally, the fluorescence decay curves were fitted with a multi-exponential decay model, from which the average excited state lifetime was calculated according to Eq. (15). Fig. 3 shows the result of the measured average fluorescence lifetime of rhodamine 6G in water (blue dots) and glycerin (red dots) within the nano-cavity as a function of maximum transmission wavelength (which is lin- [16] and citations therein. The large fit value of the rotational diffusion time for rhodamine in glycerol, which is by nearly two orders of magnitude larger than the fluorescence lifetime, indicates that rotational diffusion is practically frozen during de-excitation of the excited molecules, which is similar to the limiting case of fixed dipole orientations. Contrary, the fitted rotational diffusion value in water is significantly shorter than the lifetime, indicating a situation where the emitters perceive an environment with a rapidly fluctuating mode density of the electromagnetic field. Both situations, rapid and slow rotational diffusion, lead not only to quantitatively different results for the dependence of average lifetime on cavity size as seen in Fig. 3, but also to qualitatively different behavior: While for slow rotators, the average lifetime can exceed, for specific cavity size values, the free space lifetime (dotted lines in Fig. 3), the average lifetime for rapidly rotating molecules will always be smaller than the free-space lifetime. The reason for that can be understood when inspecting Figs. 1 and 2: The focused laser beam will predominantly excite molecules with horizontal orientation (see Fig. 2), for which the emission rate can be lower than the free-space rate. If the molecules do not rotate, one can thus observe, for specific cavity size values, average lifetime values which are longer than the free-space lifetime. However, if molecular rotation is much faster than the average excited state lifetime, than the emission rate will be dominated by that for vertically oriented molecules (which is much faster than that for horizontally oriented ones, see Fig. 1) and will always result in average lifetime values smaller then the freespace lifetime. Finally, it should be emphasized that the excellent agreement between theoretical model and experimental results offer the fascinating possibility to use lifetime measurements on dye solutions in tunable nanocavities for simple and direct determination of the fluorescence quantum yield, a quantity which is notoriously difficult to determine by other methods [15]. Financial support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged (SFB 937, project A5).
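A sketch of the decay analysis described above: a bi-exponential model is fitted to a photon-arrival histogram and the mean lifetime is taken as the first moment of the fitted decay, ∫t F(t) dt / ∫F(t) dt, which for a sum of exponentials reduces to Σ a_i τ_i² / Σ a_i τ_i. The histogram below is synthetic, and this first-moment form is assumed to match the paper's Eq. (15) rather than quoted from it.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic decay histogram with 50 ps bins, standing in for the measured TCSPC data.
t = np.arange(0, 25e-9, 50e-12)
true = 800 * np.exp(-t / 1.2e-9) + 400 * np.exp(-t / 3.5e-9)
counts = rng.poisson(true).astype(float)

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

p0 = (counts[0], 1e-9, counts[0] / 2, 4e-9)
popt, _ = curve_fit(biexp, t, counts, p0=p0, maxfev=20000)
a1, tau1, a2, tau2 = popt

# Mean lifetime as the first moment of the fitted decay (assumed Eq. (15) form).
tau_mean = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"fitted lifetimes: {tau1*1e9:.2f} ns, {tau2*1e9:.2f} ns; mean: {tau_mean*1e9:.2f} ns")
```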
2012-03-13T17:54:38.000Z
2012-03-13T00:00:00.000
{ "year": 2012, "sha1": "22a5dc7769d55aed02b4ee3011f8a558a8b1778d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1203.2876", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "22a5dc7769d55aed02b4ee3011f8a558a8b1778d", "s2fieldsofstudy": [ "Chemistry", "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
246082492
pes2o/s2orc
v3-fos-license
Tryptophan 2,3-Dioxygenase-2 in Uterine Leiomyoma: Dysregulation by MED12 Mutation Status Uterine leiomyomas (fibroids) are common benign tumors in women. The tryptophan metabolism through the kynurenine pathway plays important roles in tumorigenesis in general. Leiomyomas expressing mutated mediator complex subunit 12 (mut-MED12) were reported to contain significantly decreased tryptophan levels; the underlying mechanism and the role of the tryptophan metabolism-kynurenine pathway in leiomyoma tumorigenesis, however, remain unknown. We here assessed the expression and regulation of the key enzymes that metabolize tryptophan. Among these, the tissue mRNA levels of tryptophan 2,3-dioxygenase (TDO2), the rate limiting enzyme of tryptophan metabolism through the kynurenine pathway, was 36-fold higher in mut-MED12 compared to adjacent myometrium (P < 0.0001), and 14-fold higher compared to wild type (wt)-MED12 leiomyoma (P < 0.05). The mRNA levels of other tryptophan metabolizing enzymes, IDO1 and IDO2, were low and not significantly different, suggesting that TDO2 is the key enzyme responsible for reduced tryptophan levels in mut-MED12 leiomyoma. R5020 and medroxyprogesterone acetate (MPA), two progesterone agonists, regulated TDO2 gene expression in primary myometrial and leiomyoma cells expressing wt-MED12; however, this effect was absent or blunted in leiomyoma cells expressing G44D mut-MED12. These data suggest that MED12 mutation may alter progesterone-mediated TDO2 expression in leiomyoma, leading to lower levels of tryptophan in mut-MED12 leiomyoma. This highlights that fibroids can vary widely in their response to progesterone as a result of mutation status and provides some insight for understanding the effect of tryptophan-kynurenine pathway on leiomyoma tumorigenesis and identifying targeted interventions for fibroids based on their distinct molecular signatures. Supplementary Information The online version contains supplementary material available at 10.1007/s43032-022-00852-y. Introduction Uterine leiomyomas (fibroids, LM) are benign neoplasms that arise from uterine smooth muscle and represent the most common benign tumor in reproductive-age women. While a majority of LM are asymptomatic, they commonly cause menorrhagia, dysmenorrhea, and infertility and approximately 30% of identified fibroids require intervention [1]. While some medical therapies have shown promise in symptomatic management, hysterectomy or myomectomy remains the most common approach to treatment. Advances in minimally invasive surgery have improved recovery and minimized hospital stays overall for these patients, but the socioeconomic burden remains significant, costing an estimated $34.4 billion for the 200,000 hysterectomies and 30,000 myomectomies performed each year to treat fibroids [2]. New approaches to medical management for fibroids are desperately needed. With few exceptions, LM originate from somatic mutations in myometrium (MM) cells, resulting in progressive loss of growth regulation leading to unchecked growth. A variety of mutations have been identified that seem to give rise to unique growth patterns [3]. One distinct subtype carries a mutation in mediator complex subunit 12 (mut-MED12) and comprises 70% of all LM. The most common mutation within this family is a single point mutation at codon 44 in exon 2 of the MED12 gene, the G44D mutation [4]. 
MED12 mutation leads to a configurational change that alters its interaction with transcriptional co-activator pathway proteins including cyclin C, leading to a loss in CDK activity [5]. Estrogen and progesterone driving LM cell proliferation, survival, and extracellular matrix formation [11]. Progesterone regulates TDO2 expression in endometrium and breast tissue, contributing to both normal tissue function and tumor growth [12,13]. To better understand the role of the tryptophan-kynurenine pathway in LM, with the ultimate goal of developing new therapeutics for LM, here we examined the effect of MED12 mutation on tryptophan metabolism. We tested the hypothesis that progestins regulate TDO2 in LM and MM tissues and that MED12 mutation impairs progestin-regulated TDO2 expression, leading to a decrease in tryptophan in mut-MED12 LM. Tissue Collection The study was approved by the Northwestern University Institutional Review Board, and informed consent was obtained from all participants for collection and use of LM and MM tissues (Reproductive Tissue Registry STU00018080). All tissues were obtained from premenopausal women undergoing either myomectomy or hysterectomy (age 47 ± 4 years, range 42-54 years). Patients receiving hormone treatment 6 months prior to surgery were excluded. Matched LM and MM tissues were collected from each patient and underwent complete pathologic assessment prior to use in experiments. Immunoblot Analysis Total protein was extracted from primary LM or MM cells using RIPA buffer and quantified using BCA assay (23,225; Thermo Fisher Scientific) per the manufacturer's protocol. Protein was then diluted in 4X LDS sample buffer (NP0007; Thermo Fisher Scientific), electrophoresed on a 4% to 12% Novex Bis-Tris polyacrylamide precast gel (NP0321BOX; Thermo Fisher Scientific), and transferred onto polyvinylidene difluoride membrane. The membranes were incubated with primary antibody against TDO2 (15,880-1-A, Proteintech) at 4 °C in 5% nonfat milk overnight, followed by incubation with HRP-linked anti-rabbit IgG (7074S, Cell Signaling Technology) for 1 h at room temperature. β-actin (HRP-60008, Proteintech) was used as loading control. Detection was performed using Luminata Crescendo horseradish peroxidase substrate (WBLUR0100; Millipore). Genotyping MED12 mutation screening was performed by Sanger sequencing using genomic DNA isolated from snap-frozen LM and MM tissues. The MED12 mutation status in primary cell cultures was confirmed using cDNA isolated from cultured cells. Genomic DNA or cDNA was amplified using a hot start DNA polymerase kit (#71,086-3, Sigma) and primers as previously described [4], followed by sequencing at Northwestern University Sanger Sequencing Core. Recent high throughput sequencing studies have revealed recurrent and mutually exclusive driver mutations in LM: including MED12 mutations, high mobility group AT-hook 2 (HMGA2) rearrangements, biallelic inactivation of fumarate hydratase (FH), and collagen, type IV, alpha 5 and collagen, type IV, alpha 6 (COL4A5-COL4A6) deletions [15]. LM are usually categorized into different subtypes based on their gene mutation signature. In this study, we termed LM that did not express mutant MED12 as wild type (wt)-MED12 LM. Statistical Analysis Each experiment was conducted utilizing cells from at least three patients run in triplicate, followed by statistical analysis. All data are expressed as the mean ± standard error of mean (SEM). 
P values were calculated using Student's t test (to compare two groups) or one-way ANOVA followed by Dunnett's multiple comparison test (to compare three or more groups) using the GraphPad Prism software (GraphPad Inc., San Diego, CA). Differences were considered statistically significant when P < 0.05. TDO2 Gene Expression Is Upregulated in Leiomyoma Expressing mut-MED12 Based on a previous study showing different levels of tryptophan in mut-MED12 and wt-MED12 LM [6], we examined the gene expression levels of three enzymes involved in tryptophan metabolism through kynurenine pathway, TDO2, IDO1, and IDO2, by RT-qPCR in wt-MED12 LM (n = 6), mut-MED12 LM (n = 18), and matched MM tissues (n = 24). As shown in Fig. 1A, TDO2 expression was significantly higher in mut-MED12 LM compared to MM (35.96-fold, P < 0.0001) and wt-MED12 LM (13.66-fold, P < 0.05). We observed a trend of increased TDO2 expression in wt-MED12 LM vs MM tissues (2.6-fold), but it did not reach statistical significance. Western blot analysis confirmed that TDO2 protein level was also higher in mut-MED12 LM (Fig. 1B). We also evaluated the difference of the TDO2 expression levels in MM tissues between follicular (n = 15) and luteal (n = 10) phases of menstrual cycle and did not find significant change (Fig. 1C). IDO1 mRNA expression in LM and matched MM was low and not significantly different between LM and matched MM (Fig. 1D). Therefore, we compared IDO1 expression in LM vs their matched MM without separating LM into different genotypes. IDO2 mRNA expression was low and undetectable by RT-qPCR in both tissues (data not shown). These data suggest that dysregulation of TDO2 is responsible for the previously observed lower levels of tryptophan in LM carrying the MED12 mutation [6]. Progestins Inhibit TDO2 Gene Expression in wt-MED12 Leiomyoma Cells Previous studies have reported the regulation of TDO2 by progesterone and its receptor (PR) in endometrial and breast cancer cells [12,13], leading us to evaluate the effect of progestins on TDO2 expression in wt-MED12 LM cells. We treated primary cultures of wt-MED12 LM cells with increasing doses (10 -8 , 10 -7 , 10 -6 , and 10 -5 M) of two progesterone agonists, R5020 and MPA, for 24 h. Each experiment was performed in triplicate in tissues from three unique patients. As shown in Fig. 2A upper panel, TDO2 gene expression decreased with increasing doses of R5020, with significant decreases at 10 -6 and 10 -5 M compared with vehicle-treated cells. The greatest reduction was observed at 10 -5 M of R5020 (0.56 ± 0.09 of control, P < 0.01). MPA treatment also dose-dependently inhibited TDO2 expression, with a significant decrease at 10 -8 M MPA to 0.58 ± 0.1 of the control level (P < 0.05; Fig. 2A lower panel). Maximal inhibition was observed at 10 -6 M (TDO2 expression 0.36 ± 0.11 of control level [P = 0.001]). Sanger sequencing of genomic DNA isolated from tissue (data not shown) and cDNA isolated from treated cells from 3 patient samples revealed a pure wt-MED12 genotype (Supplemental Fig. 1A). Progestins Inhibit TDO2 Gene Expression in Myometrial Cells We also evaluated the effect of progestin treatment on TDO2 expression in MM cells which always express wt-MED12 [4]. Cells obtained from eight unique patients were used and all treatments were completed in triplicate. Cells were each treated with increasing doses of R5020 and MPA. Figure 2B shows a dose-dependent decrease in TDO2 mRNA levels with increasing concentrations of R5020 (upper panel) and MPA (lower panel). 
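The fold-changes reported above are standard RT-qPCR quantities, normalized to a reference gene (TBP, per the figure legends) and compared to vehicle. A small sketch using the common 2^(−ΔΔCt) (Livak) approach follows; this method is an assumption here, since the quantification formula is not spelled out in the text, and the Ct values are invented for illustration.

```python
import numpy as np

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method (assumed, not stated in the text)."""
    d_ct_sample = np.asarray(ct_target) - np.asarray(ct_ref)        # normalize to TBP
    d_ct_control = np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl)
    dd_ct = d_ct_sample - d_ct_control.mean()                       # compare to vehicle
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values: TDO2 and TBP in progestin-treated vs vehicle-treated cells.
tdo2_mpa, tbp_mpa = [27.9, 28.1, 28.0], [24.0, 24.1, 23.9]
tdo2_veh, tbp_veh = [26.5, 26.4, 26.6], [24.0, 24.0, 24.1]

fc = fold_change(tdo2_mpa, tbp_mpa, tdo2_veh, tbp_veh)
print(fc.mean())   # ~0.35 of the vehicle level, i.e. a down-regulation of similar size to that reported above
```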
R5020 at 10 -7 M significantly decreased TDO2 expression with the peak reduction occurring at MED12 Mutations Affect Progestin-Mediated TDO2 Gene Expression in G44D mut-MED12 Leiomyoma Cells Next, we characterized whether MED12 mutation affects progestin-mediated TDO2 gene expression, which potentially contribute to the differential TDO2 expression observed in mut-MED12 LM vs. wt-MED12 LM. We assessed the dose response of G44D mut-MED12 LM cells subjected to the same treatment described above. Each experiment was done in triplicate and repeated in three unique patients. Figure 2C shows TDO2 mRNA levels after treatment with different doses of R5020 (upper panel) and MPA (lower panel) compared to vehicle. High doses of R5020 elicited a trend toward decreased TDO2 expression, but the effects did not reach statistical significance compared to vehicle controls (Fig. 2C, upper panel). MPA significantly decreased TDO2 mRNA levels starting at 10 -7 M (0.43 ± 0.08 of control level, P < 0.005), but increases in dose did not further downregulate TDO2 expression (Fig. 2C, lower panel). Supplemental Fig. 1C shows the Sanger sequences for the treated cells from 3 patients. Note that each patient sample represents a mixture of wt-MED12 and G44D mut-MED12 cells. It has been reported that in vitro cell culture causes loss of mut-MED12 LM cells [16]; therefore, we treated first passage cells with 10 -5 M R5020 or 10 -6 M MPA based on the dose that elicited maximal downregulation of TDO2 expression in wt-MED12 LM cells ( Fig. 2A). Each experiment was done in triplicate and repeated in three unique patients. Figure 3A shows that in G44D mut-MED12 LM cells, treatment with R5020 at 10 -5 M had a minimal effect on TDO2 expression compared to vehicle-treated cells. Likewise, MPA at 10 -6 M downregulated TDO2 mRNA level slightly but significantly in G44D mut-MED12 LM cells (0.82 ± 0.04 of control level, P < 0.05, Fig. 3B). For both progesterone agonists, the effect on TDO2 expression was blunted in mut-MED12 compared to wt-MED12 cells, which showed significantly reduced mRNA levels in response to both R5020 (Fig. 3A, P < 0.001) and MPA (Fig. 3B, P < 0.05). Sanger sequencing of cDNA from the treated cells from all 3 patients revealed a pure G44D mut-MED12 genotype. The sequences shown in Supplemental Fig. 1D are cropped to highlight the "hot-spot" of MED12 mutations at codon 44 in exon 2. These findings suggest that MED12 mutation decreases LM cells' response to progestins, leading to a blunted downregulation of TDO2 gene expression that may account in part for the higher TDO2 expression and lower tryptophan levels in mut-MED12 LM cells. Discussion In this study, we showed that TDO2 gene expression is upregulated in LM expressing mutated MED12, and that the regulatory effect of progestins (R5020 and MPA) on TDO2 expression in MM and wt-MED12 LM cells is lost or decreased in LM cells expressing G44D mut-MED12. These findings suggest that MED12 mutation may disturb progesterone signaling in LM that regulates TDO2 gene expression, leading to upregulated TDO2 gene expression and decreased tryptophan levels in mut-MED12 LM. LM were once thought to have a single phenotype, but it has become clear that each fibroid represents a monoclonal tumor with a mutation signature, and focus has shifted to identifying common mutations and targeted approaches to prevention and treatment. In the past decade, a number of All values were normalized to TBP and compared to vehicle (0.1% ethanol). 
*P < 0.05, **P < 0.005, ***P < 0.001, ****P < 0.0005. ns: not significant mutation groupings have been identified, the most common of which include mutations in MED12. Mut-MED12 has been identified in 70-75% of LM [4,15,17]. The MED12 encodes the mediator complex, which is highly conserved in eukaryotes and plays an important role in regulation of transcription through its interactions with specific transcription factors and RNA polymerase II; however, its role in fibroid growth and development has not been fully elucidated [4,18]. LM harboring MED12 mutations have distinct transcription profiles, and candidate genes involved in LM tumorigenesis, such as IGF2 and WNT4, are specifically upregulated in mut-MED12 LM. The underlying mechanisms linking MED12 mutations to LM pathogenesis remain unclear and no clinically relevant therapies have been proposed to target specific mutation signatures [19][20][21][22] Computational analyses have suggested that mut-MED12 in LM could have altered interactions with transcriptional co-activators; this idea is supported by findings that mut-MED12 disrupts the MED12-Cyclin C binding interface, leading to a loss of mediator-associated CDK activity [5,23,24]. Previously, we reported that mut-MED12 associates with PR at the chromatin level and the interactions between PR and chromatin are dysregulated in LM expressing the G44D mut-MED12 [25]. In this study, we found that progestins significantly inhibit TDO2 expression in MM and LM cells expressing wild-type MED12, and that MED12 mutation decreased the efficacy of this inhibitory effect, leading to upregulated TDO2 expression in mut-MED12 LM. It has been reported that TDO2 expression is higher in PR-negative versus PR-positive breast cancer tissues, suggesting that progesterone may inhibit TDO2 expression via PR in breast cancer cells [12]. Paradoxically, progesterone stimulates TDO2 expression in mouse uterine stroma cells [13]. The mechanisms underlying progesterone/PR-mediated target gene expression are complicated. PR exists as two isoforms, PR-A and PR-B, which may have distinct or similar functions depending on the promoter context and cell type [26][27][28]. In vitro cell culture model system which lack key in vivo conditions or cofactors may interfere with progesterone/PR signaling [29,30]. Adding another layer of complex, glucocorticoids regulate TDO2 expression in a tissue-specific manner, and glucocorticoid receptor and PR share the same DNA-binding motif [31][32][33]. Given the important roles of tryptophan-kynurenine pathway in tumorigenesis, further studies are needed to clarify the effect of progesterone/PR signaling on TDO2 expression in LM using ex vivo tissue explant or in vivo xenograft mouse model. Three enzymes (TDO2, IDO1, and IDO2) catalyze the first rate-limiting step of tryptophan metabolism through kynurenine pathway. We found that TDO2, but not IDO1/2, was upregulated in mut-MED12 LM, suggesting that TDO2 dysregulation may account for the reduced tryptophan levels in mut-MED12 LM subtype [6]. Soon after we submitted our manuscript, Chuang et al. also reported the aberrant overexpression of TDO2 expression in mut-MED12 LM [34]. TDO2 has been extensively studied in other organ systems and identified as an important factor in tumorigenesis and growth [7][8][9][10]12]. 
Progesterone is critical to fibroid growth and anti-progestin therapies have shown promise in therapeutic management, but efficacy varies widely by patient, potentially due to variation in mutation status of treated fibroids [35][36][37]. Differential regulation of TDO2 in response to progesterone agonists in mut-MED12 LM versus wt-MED12 LM and MM may explain the variability in response to progesterone therapy and provide insight into therapeutic management of this specific LM subtype. This study is limited by a small sample size and focus on a single candidate gene and a single MED12 mutation category, G44D. However, our novel finding that MED12 mutation in LM affects tryptophan metabolism is an important step toward defining the molecular and metabolic signatures of leiomyoma subgroups. By more completely understanding how varying genotypes affect hormonal response and regulation, we can more precisely target development of medical therapies. Continued research on this topic may identify targeted interventions for fibroids based on molecular signatures. Author Contribution A.P.H., P.Y., and S.E.B.: substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; and final approval of the version to be published. I.N., J.S.C., S.A.K., and S.L.: drafting the article or revising it critically for important intellectual content; and final approval of the version to be published. Funding This study was supported by NIH grants P01 HD057877 and P50 HD098580 to S.E.B. and P.Y. Data Availability Raw data were generated at Northwestern University. Derived data supporting the findings of this study are available from the corresponding author S.E.B. on request. Conflict of Interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
2022-01-22T14:44:25.327Z
2022-01-21T00:00:00.000
{ "year": 2022, "sha1": "ccc1617e0dc91d39691aee3dc143e2595f9dd160", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s43032-022-00852-y.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "ccc1617e0dc91d39691aee3dc143e2595f9dd160", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1297251
pes2o/s2orc
v3-fos-license
Cone photoreceptor definition on adaptive optics retinal imaging Aims To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. Methods High resolution retinal images were acquired from 10 healthy subjects, aged 20–35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. Results At 5° eccentricity, the cone density (cones/mm2 mean±SD) was 15.3±1.4×103 (automated) and 13.9±1.0×103 (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Conclusions Imaging data acquired from the AO camera match cone density, intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects. INTRODUCTION In vivo cellular imaging of the human retina has been made possible through the emergence of high resolution adaptive optics (AO) retinal imaging systems. 1 Prior to the development of AO retinal imaging devices, assessment of ultra-structural features and arrangement of cones was via histology of enucleated globes or biopsy specimens. However, the ex vivo techniques of laboratory histology are limited by artefacts of tissue processing and restrict observations to a single time point. The advent of AO has led to a steady development of prototype devices over the past 17 years. 2 These have been based either on confocal scanning laser ophthalmoscope (SLO) 3 or fundus floodillumination cameras. 4 In 1996, Miller and colleagues produced a research prototype fundus camera using monochromatic light with a small field of illumination and a non-coherent laser source. This device enabled the imaging of the cone mosaic in healthy eyes in vivo. 5 The technique involved fine correction of the subject's astigmatism and defocus with trial lenses. 5 Further improvement in image resolution was achieved by incorporating an AO system based on a deformable mirror. 6 This system continuously and automatically compensated for ocular aberration based on feedback from a wavefront sensor 7 that enabled diffraction-limited retinal imaging. Images from in vivo AO devices have the advantage of no tissue processing artefacts and the ability to carry out serial cone imaging in the same subject. Histological examination has shown cone photoreceptors to have the following characteristics: A density of approximately 50 000 cones/mm 2 at 1°t emporal to fovea, significant reduction in density from the centre of the retina up to 6.2°(2 mm) and a hexagonal pattern of organisation. 
[8][9][10] The rtx1 adaptive optics camera (AOC) (Imagine Eyes, Orsay, France) uses a flood-illumination camera for image capture. The size and design of the device as well as the recent European regulatory approval (CE mark) allow it to be used in a normal clinical setting. Given this, it is critical to document the ability of the rtx1 AOC to successfully identify cone photoreceptors and understand its limitations. Cone imaging has been described qualitatively in macular disease by Paques and colleagues and others [11][12][13] and quantitatively by way of cone density in healthy subjects using the rtx1 AOC. 14 Crucially though, detailed qualitative and quantitative analysis of the signals in relation to cone matrices such as photoreceptor organisation, cone density and intercone distance in age-matched controls has not been reported. There have been studies on AO SLO prototype systems, which have correlated their images with those from the histology literature. 15 16 However, these cannot be presumed to imply that the cone signals from the rtx1 AOC images are comparable. The aim of this study was to assess the feasibility of cone photoreceptor image capture and to analyse the images for retinal photoreceptor parameters in comparison with previous histological and AO imaging data available in the literature. MATERIAL AND METHODS Subjects Ten healthy volunteers were recruited from the staff of Moorfields Eye Hospital and UCL Institute of Ophthalmology; age range 20-35 years (mean=26 years, SD=3); one male and nine female volunteers. The study protocol was approved by the Moorfields and Whittington NHS Research Ethics Committee and complied with the tenets of the Declaration of Helsinki (2008 Revision). Open Access Scan to access more free content Clinical investigations, inclusion criteria All subjects had a complete eye examination to exclude any ocular pathology or media opacities and to confirm bestcorrected visual acuity was 6/6 or better for inclusion in the study. Subjects' refraction was recorded and ranged from spherical equivalent plano to −6.50 D (mean=−2.50 D). For imaging, low-order aberrations were corrected internally by a telecentric system where necessary. Axial length was measured using an IOLMaster (Carl Zeiss Meditec, Germany) and ranged from 22.08 to 26.02 mm (mean=24.22 mm, SD=1.58 mm). Retinal imaging Scanning laser ophthalmoscopy Confocal SLO was performed using a Spectralis SLO (Heidelberg Engineering, Heidelberg, Germany) device. The infrared SLO fundus image obtained was used as a topographical reference for the location of the various eccentricities at which the AO images were acquired (figure 1). AO imaging Imaging was performed using the rtx1 AOC device through undilated pupils, following 5 min of dark adaptation. This automated en-face reflectance imaging system uses an infrared (wavelength, λ=850 nm) flash for illumination and an AO system consisting of a Shack-Hartmann sensor and a deformable mirror for correcting aberrations. The field of imaging is 4×4°w hich is equivalent to 1.2×1.2 mm on the retina based on the Gullstrand model eye of axial length 23.0 mm. A set of 40 frames is captured live. During image processing, each of the 40 frames is coregistered and averaged by the internal software provided by the manufacturer. During this process, an image with a resolution of 750×750 pixel ( px) is converted to 1500×1500 px. The final image produced based on the axial length of the model eye has a resolution of 0.8 μm/px. 
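A rough Python analogue of the MATLAB pipeline described above (low-pass filtering, regional-maxima detection in an 8-connected neighbourhood, Delaunay/Voronoi construction), using only NumPy/SciPy. The 0.8 μm/px scale and the 300×300 px window follow the text; the smoothing width and peak threshold are assumed, and the input image is a placeholder rather than an AO frame.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import Delaunay

UM_PER_PX = 0.8          # image scale quoted above for the model eye

def detect_cones(img, sigma_px=1.5, rel_threshold=0.1):
    """Low-pass filter, then keep 8-connected regional maxima above a relative threshold."""
    img = img.astype(float)
    smooth = ndimage.gaussian_filter(img, sigma_px)          # stands in for the FFT low-pass
    local_max = smooth == ndimage.maximum_filter(smooth, size=3)
    thresh = smooth.min() + rel_threshold * (smooth.max() - smooth.min())
    ys, xs = np.nonzero(local_max & (smooth > thresh))
    return np.column_stack([xs, ys]).astype(float)

def cone_metrics(points_px, window_px=300):
    """Cone density (cones/mm^2) and mean centre-to-centre spacing from Delaunay edges."""
    area_mm2 = (window_px * UM_PER_PX * 1e-3) ** 2
    density = len(points_px) / area_mm2

    tri = Delaunay(points_px)
    edges = set()
    for simplex in tri.simplices:                 # collect unique triangle edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    d = [np.linalg.norm(points_px[a] - points_px[b]) * UM_PER_PX for a, b in edges]
    return density, float(np.mean(d))

# Placeholder image: in practice this is the 300x300 px crop of the averaged AO frame.
img = np.random.rand(300, 300)
pts = detect_cones(img)
density, spacing_um = cone_metrics(pts)
print(f"{density:.0f} cones/mm^2, mean spacing {spacing_um:.1f} um")
```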
The left eyes of subjects were imaged, although both eyes fulfilled the inclusion criteria. Images were obtained at 5°(∼1.5 mm), temporal to the foveal centre in all study eyes. Three subjects were chosen at random and imaging also performed at 0, 2, 3, 5 and 7°to examine photoreceptor density at multiple retinal eccentricities (figure 2A). The magnified AO retinal images are shown in figure 2B. Image analysis Cone density and packing regularity Cone mosaic is a two-dimensional variable. The two most common types of matrices used to describe cone mosaics are cellular density and packing regularity. To calculate cellular density, we manually counted cone photoreceptors using an image-processing program (ImageJ, National Institutes of Health, Bethesda, MD, USA). The count was then divided by the area of the retina sampled. Packing regularity was analysed using the following methods: 1. Nearest-neighbour method as described by Wassle and Riemann. 17 2. Voronoi domain method as described by Shapiro and colleagues. 18 3. Autocorrelation methods as described by Rodieck 19 and Cook. 20 4. Power spectrum method as described by Yellott. 21 These methods have been previously described for analysis of spatial distribution of rods and cones in vitro and in vivo. 22 23 Automated algorithm cone identification The retinal images were processed with a customised program coded using MATLAB R2010a (MathWorks Inc, Natick, Massachusetts, USA), similar to a previously described method by Li and Roorda. 15 The manufacturer's counting software was not used, as it could not perform the count on the sampled windows used in this study. The acquired images were converted to 8-bits and cropped to 300×300 px (∼240×240 μm) sampling window. A low-pass filter was applied prior to the automated counting algorithm for all subjects at all eccentricities. The number of spurious peaks were reduced by transformation to frequency domain using fast Fourier transform and preprocessed with a low-pass filter before converting them back to the spatial domain (figure 3A). The regional maxima of the photoreceptors' centres were computed using an 8-connected neighbourhood. A Delaunay triangulation with its corresponding Voronoi tessellation was calculated resulting in a set of edges linking all the maxima points. Average number of photoreceptors surrounding each cone was calculated by determining the average number of edges originating from each maximum point. The average distance of all the edges was taken as the average interphotoreceptor distance. The photoreceptor size was approximated by measuring the area of the joint pixels surrounding each maximum, with a greyscale value greater than the average between a peak and its local baseline. The grey scale value of the local baseline is calculated as the average value of the pixels that form the edges of the Voronoi cell of a given peak (figure 3B). The equivalent diameter of a circle with the same area as the one calculated is taken as the diameter of the photoreceptor. Inter-photoreceptor distance was therefore measured by automated technique from the centre of one photoreceptor to its neighbours. 15 Automated and manual cone counting Automated and manual counting was performed using 10 high quality images of controls at 5°retinal eccentricity, with a central sampling area 300×300 px (equal to the central 240×240 μm). The observer was masked to the identity of the subjects during this process. 
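The Voronoi quantification described above (fraction of 6-sided cells, with cells on the image border excluded) can be reproduced with scipy.spatial.Voronoi. The neighbour-count rule and the open-region exclusion below are one reasonable reading of that procedure, not the authors' own code, and the coordinates are placeholders for the cone centres found by the detection step.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_sidedness(points):
    """Number of sides of each bounded Voronoi cell; open (edge) cells are excluded."""
    vor = Voronoi(points)
    sides = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:      # open cell touching the image border
            continue
        sides.append(len(region))
    return np.array(sides)

def packing_summary(points):
    sides = voronoi_sidedness(points)
    hexagonal = np.mean(sides == 6) * 100.0
    near_hexagonal = np.mean((sides >= 5) & (sides <= 7)) * 100.0
    return hexagonal, near_hexagonal

pts = np.random.rand(500, 2) * 300.0              # placeholder coordinates (px)
hex_pct, near_hex_pct = packing_summary(pts)
print(f"6-sided: {hex_pct:.0f}%, 5-7 sided: {near_hex_pct:.0f}%")
```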
The density of photoreceptors with varying retinal eccentricity was also calculated using both manual and automated counting techniques in three subjects at 2, 3, 5 and 7°retinal eccentricities. The AO images captured at 0°were not included as part of either counts as the device was unable to resolve any retinal structure less than 4 μm, as noted in figure 2A,B. Voronoi domain analysis Voronoi tessellation was performed on the AO retinal images of the 10 subjects following cone identification by the automated algorithm. The percentage of cone photoreceptors showing optimal hexagonal (n=6) tiling as well as 5-and 7-sided (n±1) organisation was calculated for each of the 10 images. We manually excluded the polygons on the edges of the image to avoid any bias to result (figure 2C. Voronoi tessellation of subject A's retinal image at 5°retinal eccentricity). Voronoi quantification was also completed for the three subjects at the 2, 3, 5 and 7°retinal eccentricities. Power spectrum analysis Spatial regularity (hexagonal packing) of photoreceptors is known to result in a ring structure in the power spectrum of a retinal image. 15 This analysis was performed in our study ( figure 3A). eccentricities. The cones are clearly visible in the 3, 5 and 7°images, but not at 0°, and this is due to the resolution limit of the rtx1 AO camera being 4 μ and therefore is not able to resolve the highest density of cone packing at the foveola. (B) Magnified areas of the red box from figure 2A of AO images of subjects A, B and C at 0, 3, 5 and 7°retinal eccentricities. The magnified AO images of A2 through A3 to A4, and similarly for B2 to B4 and C2 to C4 clearly show the cone photoreceptors with decreasing density at increasing retinal eccentricity as well as the loss of their packing regularity in A4, B4 and C4. The cone photoreceptors in images A1, B1 and C1 are not discernible due to the highest cone packing density at 0°which is beyond the device's resolution. (C) Voronoi tessellation of subject A's retinal image at 5°retinal eccentricity. Cone density with varying eccentricity The density of photoreceptors decreased with increasing retinal eccentricity temporally as noted in figure 2A,B. The paired mean cone density from manual and automated counts at 2, 3, 5 and 7°were 26.5×10 3 and 24.2×10 3 cones/mm 2 , 19.5×10 3 and 20.8×10 3 cones/mm 2 , 13.8×10 3 and 15.6×10 3 cones/mm 2 and 11.2×10 3 and 12.9×10 3 cones/mm 2 , respectively. Details of the densities of each of the three subjects (A, B and C) at the eccentricities from the manual and automated count results are shown in tables 1 and 2. Inter-photoreceptor distance Inter-photoreceptor distance measurement calculated by automated technique, from the centre of one photoreceptor to the neighbouring ones, as described by Li and Roorda, 15 had a range of mean inter-photoreceptor distance of 7.9-9.3 mm at 5°F igure 2 Continued for the healthy subjects. The overall mean of the 10 controls at 5°eccentricity was 8.6±0.4 μm (mean±SD). The inter-photoreceptor distances for the three subjects imaged at 2, 3, 5 and 7°are also recorded in table 2. Voronoi quantification The hexagonality of cone photoreceptor (n=6) tiling for the 10 subjects at 5°retinal eccentricity ranged from 45% to 55%, mean of 49%. With the inclusion of both the 5-and 7-sided organisation (n±1), the percentage range was 92%-98% with a mean of 95%. Packing regularity of cones The regularity of hexagonal cone arrangement has been demonstrated by spatial frequency analysis. 
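The spatial-frequency analysis referred to above amounts to a radially averaged two-dimensional power spectrum: a regular mosaic produces a ring (Yellott's ring) at a frequency close to the reciprocal of the cone spacing. The sketch below is a generic implementation on a placeholder image, not the analysis code used in the study.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a 2-D image."""
    img = img - img.mean()                                   # remove the DC pedestal
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = ps.shape
    y, x = np.indices(ps.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)       # integer radius in pixels
    radial_sum = np.bincount(r.ravel(), weights=ps.ravel())
    counts = np.bincount(r.ravel())
    return radial_sum / np.maximum(counts, 1)                # mean power at each radius

# Placeholder image; with a real AO crop the profile shows a peak (ring) whose radius
# in cycles per image corresponds roughly to the modal cone spacing.
img = np.random.rand(300, 300)
profile = radial_power_spectrum(img)
peak_radius_px = int(np.argmax(profile[5:]) + 5)             # ignore the lowest frequencies
print(peak_radius_px)
```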
Figure 4 shows the classic ring structure in the power spectrum at 2, 3 and 5°, which is indicative of spatial regularity at these eccentricities; at 7° the ring is just visible in two subjects and not in the third. This is due to the decreasing degree of regularity beyond 5°. DISCUSSION We have shown that retinal imaging in healthy eyes using the rtx1 AOC is feasible and has enabled assessment of cone characteristics. The density of speckled signals following manual and automated counting correlated well with data from the published retinal histology literature. The inter-photoreceptor distance and packing regularity of the speckled signals also suggest that these arise from cones. The photoreceptor densities (cones/mm² (mean±SD)) at 5° (1.46 mm) temporal to the fovea were 15 316±1405 (automated) and 13 901±962 (manual). This correlates closely with the retinal histology studies on donated healthy human eyes by Curcio and colleagues, which give 16 188 cones/mm² extrapolated from the graph at 1.46 mm (figure 5). Our AOC data also correlate closely with an AO SLO study by Song and colleagues, which found 16 300±2850 cones/mm² (mean±SD) at a retinal eccentricity of 1.35 mm. 8 16 Our automated count is also similar to that found in a previous study by Lombardo et al 14 on the same device, which found the mean cone density at 1300 μm (4.45°) eccentricity to be 14 198±2114 cones/mm². The concordance of cone densities between all of these studies is clearly visible in figure 5, where cone density is plotted against retinal eccentricity. The only disparate figure in the literature is from Jonas et al, 9 where the cone density at 1.5 mm (∼5°) retinal eccentricity was recorded as 6000 cones/mm², less than half that of all other studies. It is possible that this is due to the inclusion of eyes up to 90 years old, thereby not being age-matched and indicating cone loss later in life. Decreasing cone density with increasing retinal eccentricity at 2, 3, 5 and 7° temporally was confirmed by both manual and automated counting. The rate of change followed the Curcio graph well, including the decreasing gradient at greater eccentricities (figure 5). The change was confirmed by the inverse relationship between inter-photoreceptor distance and increasing eccentricity indicated in table 2. This provides an internal validation of the automated algorithm calculation method. The automated algorithm does have some limitations: the range of inter-photoreceptor distances we obtained at 5° for the healthy subjects was 7.9-9.3 μm. This was close, but not equivalent, to the histology measurement data of 6-8 μm. However, the histology data from Curcio and Sloan 24 looked at the minimum inter-photoreceptor spacing between cones at eccentricities greater than 1 mm. We were unable to measure over the same range, as the noise in the system would create an artificially low minimum inter-photoreceptor distance. Furthermore, with the automated counting algorithm there were false positives at higher eccentricities in some participants. This was due to noise in the image, and we therefore manually selected and consistently applied a low-pass filter at all retinal eccentricities when using the automated system. Possibly the most compelling cone-related feature of the AO images we have observed is the packing pattern. The regularity of the cone matrix was confirmed using the spatial frequency technique. The power spectrum ring was shown at 2, 3 and 5°, though to a lesser degree at 7° (figure 4).
The rings decrease in intensity with eccentricity, which is consistent with the histological finding of a significant decrease in cone density from around 1.4 mm (approximately 5°) as noted by Curcio and Sloan. 24 The consequent increase in rod photoreceptors at this eccentricity begins to disrupt the orderly packing. (Figure 5: cone density versus retinal eccentricity for the manual and automated count datasets of the three subjects at 2, 3, 5 and 7° and the mean of 10 subjects at 5°, plotted together with histology data from Curcio et al, 8 Jonas et al 9 and adaptive optics scanning laser ophthalmoscope data from Song et al; 16 an exponential pattern was noted.) We also demonstrated regular hexagonal ordering of cones using the Voronoi method. Li and Roorda 15 had previously demonstrated this hexagonal photoreceptor packing in 2007 using Voronoi domain analysis with their AO SLO prototype. We studied the age group of 20-35 year olds but did not have a sufficient sample size or age range to analyse the effect of age. There is conflicting evidence in the literature concerning the effect of age: Gao and Hollyfield 25 did not find any differences in foveal cone densities in donor eyes ranging from 20 to 90 years, while Song et al 16 noted a reduction in cone density at the fovea with increasing age in their AO study. The sampling area we chose was considerably larger than most of those quoted in the literature. We decided to use a larger sampling window of 240×240 μm to reduce measurement error. Work from the Carroll lab on an AO SLO device found that with decreasing window size, the error rate for cone density measurement increased. 26 Most studies used a window of around 50×50 μm. These included histology studies such as that of Hirsch and Miller, 27 who used a window of 56×56 μm, and a recent in vivo imaging study using AO SLO by Burns and colleagues, which demonstrated good reproducibility of the cone density count with an area of 50×50 μm in a subject imaged twice in 6 months at the same retinal locus. 16 The two studies by Lombardo et al 28 were carried out to assess cone density as a function of eccentricity 14 and symmetry between the two eyes in healthy subjects, but did not assess all the features crucial to confirm that the signals being studied by the device are from cone photoreceptors. Curcio et al 8 noted that at 1.3-1.4 mm (approximately 5° temporal to the fovea), cones were larger and circular in shape and that rods encircle these cones. The areas of darkness and indistinct reflections in between the cone reflections in our images are most likely to be rods. The reason we are unable to delineate the rods is that the rtx1 AO device has a resolution of only about 4 μm. This study addresses all aspects that are crucial in defining and confirming the cone photoreceptor matrices on this AOC. Curcio and colleagues 8 found that in two human donor eyes, the photoreceptor diameter at the fovea was 1.6 and 2.2 μm, respectively. This accounts for why foveal cone imaging was not possible with this device. Future devices will need to improve significantly in order to resolve the fine, closely packed cone photoreceptors at the fovea. CONCLUSIONS By studying photoreceptor matrices, we have been able to demonstrate that the signals captured by the rtx1 AOC are most likely due to cone photoreceptors. Furthermore, these cone reflectance images correlate quantitatively with accepted retinal histology findings from the literature.
It is likely that AO-based devices and photoreceptor imaging will play a part in the future diagnosis and monitoring of retinal diseases and treatments. The reproducibility of the images and the consistency of quantification in disease states will need to be confirmed before the full potential of this device as a clinical investigation tool can be realised.
2018-04-03T00:41:37.747Z
2014-04-11T00:00:00.000
{ "year": 2014, "sha1": "4e6713d687887b828e51d2cdbce966bc645bc120", "oa_license": "CCBY", "oa_url": "https://bjo.bmj.com/content/98/8/1073.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4e6713d687887b828e51d2cdbce966bc645bc120", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
231587049
pes2o/s2orc
v3-fos-license
Serum Levels of Vitamin D and Dental Caries in 7-Year-Old Children in Porto Metropolitan Area
Vitamin D deficiency has been associated with significant changes in dental structures. In children, it can induce enamel and dentin defects, which have been identified as risk factors for caries. This study aimed to assess the association between low serum 25-hydroxyvitamin D (25(OH) D) levels (<30 ng/mL) and the prevalence of caries in the permanent teeth and mixed dentition of 7-year-old children. A sample of 335 children from the population-based birth cohort Generation XXI (Porto, Portugal) was included. Data on children's demographic and social conditions, health status, dental health behaviours, dental examination including erupted permanent first molars, and blood samples available for vitamin D analysis were collected. Dental outcomes included the presence of caries, including non-cavitated lesions (d1–6mft/D1–6MFT > 0), and advanced caries (d3–6mft/D3–6MFT > 0). Serum 25(OH) D was measured using a competitive electrochemiluminescence immunoassay protein-binding assay. Bivariate analysis and multivariate logistic regression were used. Advanced caries in permanent teeth was significantly associated with children's vitamin D levels <30 ng/mL, gastrointestinal disorders, higher daily intake of cariogenic food, and having had a dental appointment at ≤7 years old. Optimal childhood levels of vitamin D may be considered an additional preventive measure for dental caries in the permanent dentition.

Introduction
Dental caries remains the most prevalent oral disease among children and adolescents, increasing progressively with age [1]. Physical, biological, environmental, and behavioural factors play an important role in dental caries aetiology [2]. The microbiological and environmental factors that cause this dental disease have been extensively studied [1][2][3][4]. In the last few decades, preventive efforts were mainly directed towards behavioural factors; however, attention should also be paid to the host's susceptibility, namely, the teeth. The relationship between vitamin D levels and dental caries is worth investigating, following strict methodology regarding intervening factors, to find a possible cause-effect relationship. The purpose of this study was to comprehensively assess the association between low serum 25(OH) D levels (<30 ng/mL) and the prevalence of dental caries in permanent teeth and mixed dentition in 7-year-old children from the population-based cohort Generation XXI.

Study Design and Participants
The sample was obtained from the population-based cohort Generation XXI, which was assembled in the five level-III public maternity units in the Porto Metropolitan Area (Northern Portugal) during 2005-2006 [31]. Generation XXI participants were recruited according to the following eligibility criteria: mothers living in one of the six municipalities of the Porto metropolitan area, delivering at the public maternities that covered those municipalities and giving birth to live babies with a gestational age >24 weeks. At enrolment, the maternity units were responsible for 91.6% of the deliveries in the whole eligible population. A total of 8647 children and 8495 mothers were enrolled in Generation XXI at baseline [32]. The follow-ups of the entire cohort occurred between April 2009 and July 2011, April 2012 and April 2014, and July 2015 and July 2017, when children were 4, 7, and 10 years of age, respectively.
Trained interviewers conducted face-to-face interviews and applied structured questionnaires at baseline and in the follow-up evaluations to collect data on demographic and social conditions, lifestyle, children's health status, and objective anthropometric measures [32]. At the second follow-up, all Generation XXI 7-year-old children were invited for a dental examination appointment. In this follow-up, 908 children underwent a dental evaluation, and 4595 had blood samples collected. The present study considered a subsample of 335 7-year-old children whose permanent first molars had erupted at the time and who had blood samples available for vitamin D analysis. However, not all mothers' and children's characteristics were registered for every subject, so the number of subjects for each variable varies slightly. The characteristics of the children who attended the dental visit and of their mothers were compared with the remaining cohort evaluated at baseline (Table S1). The comparison showed that the children in the present sample had a higher gestational age than the children in the remaining cohort (≥37 weeks: 95.6% vs. 90.6%, p = 0.006). No significant differences were found regarding the mothers' age and education, the monthly income and the children's birth weight.

Data Collection
Information on the children's and mothers' socioeconomic and demographic characteristics, health history, and lifestyle was collected at birth and at the 7-year follow-up, using structured questionnaires applied to the child's caregiver. Some of the variables were grouped or recoded. The following variables from the baseline evaluation were used in the present study: the mother's age (continuous variable) and education level (≤9 years, 10-12 years or >12 years); and the child's gestational age (<37 weeks or ≥37 weeks) and birth weight (<2500 g, 2500-3800 g or >3800 g). The following variables from the 7-year follow-up were used in this study: the child's sex (male or female); the household income (≤1000 €, 1001-1500 € or >1500 €); the child's conditions or diseases (gastrointestinal, bone, muscle, and joint disorders, kidney diseases, growth and liver disorders, epilepsy, cerebral palsy, congenital malformations, and number of bone fractures), when present; the child's vitamin and drug intake; and the child's activities (minutes a week spent reading, watching TV, and doing outdoor activities). This study also used the anthropometric measures collected. The children's body mass index (BMI) was classified according to standard age- and sex-specific BMI z-scores developed by the World Health Organisation (WHO) [33]. Trained interviewers applied a food frequency questionnaire to evaluate the children's diet. The parents/caregivers were asked how many times, on average, their child had consumed each of several food items in the previous 6 months: "≥4 times per day;" "2-3 times per day;" "once a day;" "5-6 times per week;" "2-4 times per week;" "once a week;" "1-3 times per month;" "<1 a month;" or "never." All consumption frequencies were converted into daily frequencies (e.g., once a week was converted into 1/7 days = 0.14 times per day), as previously described [34]. There were two food groups defined under two new variables called "cariogenic foods" (ice cream, breakfast cereals, crackers, cookies, sweet pastry, chocolate, sugar and candies) and "cariogenic drinks" (chocolate milk, sweetened carbonated drinks and other sweetened drinks). Both these variables were analysed as continuous variables.
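For illustration, the category-to-frequency conversion described above can be expressed as a small lookup table; the sketch below is a minimal Python example assuming one reported category per food item, with the numeric choices for the open-ended categories being assumptions rather than values stated in the paper.

```python
# Convert food-frequency questionnaire categories into average daily frequencies.
# "once a week" -> 1/7 per day (0.14), as in the text; other midpoints are assumed.
FREQ_PER_DAY = {
    ">=4 times per day": 4.0,
    "2-3 times per day": 2.5,
    "once a day": 1.0,
    "5-6 times per week": 5.5 / 7,
    "2-4 times per week": 3.0 / 7,
    "once a week": 1.0 / 7,
    "1-3 times per month": 2.0 / 30,
    "<1 a month": 0.5 / 30,
    "never": 0.0,
}

def cariogenic_score(item_frequencies):
    """Sum the daily frequencies over the reported cariogenic items."""
    return sum(FREQ_PER_DAY[freq] for freq in item_frequencies)

# Example: cookies once a day plus candies 2-4 times per week -> ~1.43 per day
print(cariogenic_score(["once a day", "2-4 times per week"]))
```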
Dental Examination The entire cohort was invited to participate in the 7-year follow-up evaluation, and 81% of the children were re-evaluated. As part of the physical evaluation, the children were invited for an oral examination. The 7-year-olds are an important age group because the dentist can assess dental caries not only in the primary dentition but also in the first permanent teeth, which have a different exposure time to other caries risk factors. At this visit, trained dentists applied a questionnaire to the children's parents/caregivers on toothbrushing frequency and other dental health-related behaviours, including if the child had already attended a dental appointment (yes/no). The parents/caregivers were asked how many times per day, on average, did their child brush their teeth: "<1 time per week;" "1-2 times per week;" "3-6 times per week;" "once a day;" "2 times per day: 1 at bedtime;" "2 times per day: none at bedtime;" "≥3 times per day: one at bedtime and the others after meals;" "≥3 times per day: none at bedtime or after meals." The children's toothbrushing frequencies were converted into three groups: <1 time per day, once a day, and ≥2 times per day. Furthermore, four trained, calibrated dentists examined the children in a standard chair with a halogen lamp, using a dental mirror and a probe. Plaque removal was also performed, using a sterile gauze to remove it and dry the tooth surfaces, for examining caries lesions according to the International Caries Detection and Assessment System II (ICDAS II) criteria [35,36]. No additional detection methods, including radiographs, were used. The intra-and inter-examiner calibration consisted of a session including an e-learning programme, a theoretical course with images, and a training observation of patients of the same age against a gold standard. To assess consistency between observations, each examiner repeated one of every ten clinical observations during the recording stage. The dentists' intra-and inter-examiner calibration showed a linear weighted kappa of 0.80 and 0.75, respectively, for ICDAS II with a good agreement [37]. The caries status was determined using the decayed, missing and filled teeth index for the primary (dmft) and permanent dentitions (DMFT), based on the WHO standard methodology [38], and additionally including the incipient lesions in the decayed component. Advanced caries lesions (ICDAS II codes 3-6) were distinguished from initial non-cavitated caries lesions (ICDAS II codes 1-2). Based on the clinical data obtained, two main outcome variables were created for caries. The first was "dental caries status," defined as the presence or absence of dental caries, including initial non-cavitated caries, in permanent teeth (ICDAS II 1-6 decayed, missing and filled permanent teeth: D 1-6 MFT), and mixed dentition (d 1-6 mft and D 1-6 MFT). A d 1-6 /D 1-6 lesion was recorded when the tooth showed a first (1) or distinct (2) visual change in enamel, a localised enamel breakdown due to caries with no visible dentine underlying shadow (3), an underlying dark shadow from dentin with or without localised enamel breakdown (4), a distinct cavity with visible dentine (5), or an extensive distinct cavity with visible dentin (6) [35,36]. The second outcome was "advanced dental caries," defined as the presence or absence of advanced caries lesions in permanent teeth (ICDAS II 3-6 decayed, missing and filled permanent teeth: D 3-6 MFT) and in mixed dentition (d 3-6 mft and D 3-6 MFT). 
A d3–6/D3–6 lesion was recorded using codes (3), (4), (5) and (6), as previously described. Children with at least one tooth affected by caries in mixed dentition or permanent teeth (d1–6mft/D1–6MFT > 0 or D1–6MFT > 0) were considered to have dental caries. Children with at least one tooth affected by an advanced caries lesion in mixed dentition or permanent teeth (d3–6mft/D3–6MFT > 0 or D3–6MFT > 0) were considered to have advanced dental caries. Vitamin D adequacy was classified according to the following 25(OH) D cut-off levels [39]: deficiency, ≤20 ng/mL; insufficiency, 21-29 ng/mL; and sufficiency, ≥30 ng/mL [17]. These were later dichotomised into adequate (≥30 ng/mL) and not adequate (<30 ng/mL) for statistical analysis purposes. The season of blood sample collection was also considered, as it may affect the children's vitamin D concentrations [40]. For that purpose, the year was divided into two seasons: summer (April to September) and winter (October to March).

Statistical Analysis
Descriptive statistics included frequencies (counts and percentages) for qualitative variables and median values and interquartile range (1st and 3rd quartiles) for quantitative data. There were four binary (presence or absence) outcome variables for caries derived from the ICDAS II index: dental caries including non-cavitated lesions in (1) permanent teeth (D1–6MFT) and (2) mixed dentition (d1–6mft and D1–6MFT), and advanced dental caries in (3) permanent teeth (D3–6MFT) and (4) mixed dentition (d3–6mft and D3–6MFT). We evaluated the dose-response relationship between 25(OH) D levels and children's activities (weekly minutes spent reading, watching TV, and doing outdoor activities) using Spearman's correlation coefficient. The bivariate analysis included the chi-square or Fisher's exact test to determine the association of each qualitative independent variable with the dental caries status and the advanced dental caries. Median birth weight and median cariogenic food and drinks intake of children with dental caries status and advanced dental caries were compared using the Mann-Whitney test for independent samples. The association between vitamin D and dental caries status and vitamin D and advanced dental caries was examined based on crude odds ratios (OR) and 95% confidence intervals (CIs) from logistic regression. For theoretical reasons [40], the potential effect of the season of blood collection on the children's levels of vitamin D was also assessed by including an interaction term (vitamin D < 30 ng/mL × winter season) in the models. Further multivariate logistic regression analyses were performed to evaluate the adjusted OR and 95% CIs for dental caries status and advanced dental caries. Variables with a p < 0.20 in the bivariate analysis regarding dental caries status and advanced dental caries were included in the first step of the model and backward stepwise Wald elimination was performed (p < 0.05 for covariate inclusion and p > 0.20 for exclusion). The models were adjusted for maternal education in years (≤9, 10-12 or >12 years), and the season of blood sample collection (summer and winter) was tested as an interaction variable. The variable family household income was excluded in these models due to evidence of multi-collinearity with maternal education. A statistical significance of p < 0.05 was considered in all analyses. Statistical analyses were conducted using the statistical software package IBM SPSS Statistics v25 (IBM Corp., Armonk, NY, USA; released 2017).
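For readers who want to reproduce this kind of model outside SPSS, the sketch below fits a logistic regression for a binary caries outcome with a low-vitamin-D indicator, its season interaction term, and maternal education, and reports odds ratios with 95% confidence intervals. It is an illustrative Python/statsmodels example with hypothetical column and file names, not the authors' SPSS workflow, and it omits the backward stepwise selection step.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed data layout, one row per child (all names hypothetical):
#   caries_perm : 1 if D1-6MFT > 0, else 0
#   low_vitd    : 1 if 25(OH)D < 30 ng/mL, else 0
#   winter      : 1 if blood drawn October-March, else 0
#   mother_edu  : categorical ("<=9", "10-12", ">12")
df = pd.read_csv("generation_xxi_subsample.csv")  # hypothetical file

# Logit model with the vitamin D x season interaction and maternal education
model = smf.logit("caries_perm ~ low_vitd * winter + C(mother_edu)", data=df).fit()

# Odds ratios and 95% confidence intervals from the fitted coefficients
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(or_table)
```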
Ethical Consideration
The project Generation XXI was conducted according to the Declaration of Helsinki. The Ethical Committee of the São João Hospital/Faculty of Medicine of the University of Porto approved all procedures involving human subjects/patients. The Portuguese Authority of Data Protection also approved this study (no. 5833/2011). The parents or legal guardians of each participant received an explanation of the purpose and design of the study and gave written informed consent at the baseline and follow-up evaluations.

Results
Regarding caries outcomes, no significant differences were found between the mixed dentition and the permanent teeth groups for the children's sex, birth weight, gestational age and z-BMI. Children with dental caries status or advanced dental caries in permanent teeth had younger mothers, a lower monthly family income, and a higher prevalence of vitamin D levels below 30 ng/mL than children with no caries in these teeth (Table 1). Children with advanced dental caries in the mixed dentition had mothers with a higher BMI than children with no advanced dental caries, but no statistical differences were found in maternal BMI for dental caries status. Significantly more mothers of children with dental caries status or advanced dental caries had completed less than ten schooling years compared to mothers of children without caries. Similarly, on average, the daily frequency of cariogenic food and drinks intake was higher in children with dental caries status or advanced dental caries in mixed dentition and permanent teeth than in children with no caries (Table 1). Regarding the potential effect of children's weekly minutes spent reading and watching TV on their vitamin D levels, no association was found. On the other hand, we found a significant positive correlation (p = 0.031) between children's outdoor activities and vitamin D levels, but its correlation coefficient was 0.118, which indicates that the correlation is negligible and, thus, has no clinical relevance (Table S2). Therefore, the time spent in these activities was not considered when assessing variables related to dental caries or interacting with vitamin D levels and dental caries.
Table 1 notes: Bold entries denote statistical significance (p < 0.05). * p-value: two-sample Student t-test, Mann-Whitney U-test, chi-square test or Fisher's exact test, as appropriate; p < 0.001 (Bonferroni correction); Yes ¥, d1–6mft/D1–6MFT > 0: dental caries in mixed dentition; No ¥¥, d1–6mft/D1–6MFT = 0: no dental caries in mixed dentition; Yes ‡, D1–6MFT > 0: dental caries in permanent teeth; No ‡‡, D1–6MFT = 0: no dental caries in permanent teeth; Yes Ø, d3–6mft/D3–6MFT > 0: advanced dental caries in mixed dentition; No ØØ, d3–6mft/D3–6MFT = 0: no advanced dental caries in mixed dentition; Yes §, D3–6MFT > 0: advanced dental caries in permanent teeth; No §§, D3–6MFT = 0: no advanced dental caries in permanent teeth; ES ref., Endocrine Society reference; IQR, interquartile range.
The prevalence of dental caries status was 64.5% in the mixed dentition and 23.9% in permanent teeth. The prevalence of advanced dental caries was 62.4% in the mixed dentition and 20.0% in permanent teeth (Table 2). Nearly one-quarter of the children showed dental caries status in permanent teeth when incipient and cavitated lesions were considered, and one-fifth when only cavitated lesions were included.
The prevalence of dental caries status and advanced dental caries was higher in both the permanent teeth and mixed dentition of children with 25(OH) D levels <30 ng/mL (Table 2).
Table 2. Prevalence of dental caries status and advanced dental caries in the mixed dentition and the permanent teeth for the whole sample and according to vitamin D reference values.
The association of dental caries status and advanced dental caries with the children's vitamin D levels, in our sample, was not affected by the season of blood collection (Table 3). In crude models, children with 25(OH) D levels <30 ng/mL at 7 years of age, when compared with those with 25(OH) D levels ≥30 ng/mL, had a significantly higher frequency of dental caries (OR = 2.00; 95%CI: 1.13-3.56; p = 0.018) and advanced dental caries (OR = 1.93; 95%CI: 1.04-3.56; p = 0.036) in permanent teeth (Table 3).
Table 3. Crude association of dental caries status and advanced dental caries with the children's vitamin D levels and the interaction between vitamin D levels and season.
Regarding exposure variables in mixed dentition and permanent teeth, dental caries and advanced dental caries were associated with mothers who completed <10 and 10-12 schooling years, children with gastrointestinal disorders, having had a dental appointment at ≤7 years old, toothbrushing <1 time per day, higher daily intake of cariogenic food, higher daily intake of cariogenic drinks, and children's vitamin D levels <30 ng/mL (Table S3).

Dental Caries Status
The adjustment for maternal education did not attenuate the association between dental caries status in permanent teeth (D1–6MFT > 0) and the following independent variables: higher daily intake of cariogenic drinks and having had a dental appointment at ≤7 years old. The association between vitamin D levels <30 ng/mL and dental caries status in permanent teeth was not significant in the final multivariate logistic regression model (OR = 1.64 (95%CI: 0.87-3.03); p = 0.127). Nonetheless, remarkably, this analysis indicated a significant adjusted association between vitamin D levels <30 ng/mL and advanced dental caries in permanent teeth (OR = 2.27 (95%CI: 1.05-5.00); p = 0.037). A higher daily intake of cariogenic food, having had a dental appointment at ≤7 years old and children with gastrointestinal disorders were also factors associated with advanced dental caries in permanent teeth after adjusting for maternal education (Table 4). The multivariate logistic regression analysis indicated a significant unadjusted and adjusted association between vitamin D levels <30 ng/mL and the presence of advanced dental caries in permanent teeth.
Table 4. Association (multivariate logistic regression model) between dental caries status and advanced dental caries in permanent teeth and mixed dentition and the mothers' and children's characteristics.
Regarding the mixed dentition, the same independent variables were associated with dental caries status (d1–6mft and D1–6MFT > 0) and advanced dental caries (d3–6mft and D3–6MFT > 0) after adjustment for maternal education. The daily intake of more than one cariogenic food and having had a dental appointment at ≤7 years old were associated with dental caries status and advanced dental caries in the mixed dentition. No statistically significant association was observed between vitamin D levels or the interaction term (vitamin D < 30 ng/mL × winter season) and dental caries status and advanced dental caries in the mixed dentition (Table 4).
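The crude odds ratios quoted above come directly from the 2×2 cross-tabulation of exposure and outcome. The short helper below shows the standard OR and Wald 95% CI calculation; the counts in the example call are hypothetical and are not the study data.

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Crude OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts only: caries vs. no caries in permanent teeth, split by
# 25(OH)D < 30 ng/mL versus >= 30 ng/mL.
print(crude_odds_ratio(a=70, b=190, c=10, d=65))
```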
Discussion In this study, we analysed the relationship between serum 25(OH) D levels and dental caries status and advanced dental caries in the mixed dentition and permanent teeth of a convenience sample of Portuguese children. Our primary finding was that 25(OH) D levels <30 ng/mL were associated with dental caries in the permanent teeth of 7-year-old children. This association between vitamin D threshold and advanced dental caries was not attenuated after adjusting for maternal education. Moreover, children with vitamin D levels ≥30 ng/mL showed a significantly lower proportion of dental caries and advanced caries in the mixed dentition. Nevertheless, serum 25(OH) D concentrations were not significantly correlated with caries in the mixed dentition both in the adjusted and the unadjusted models. This study followed a rigorous methodology, aiming to bridge limitations detected in similar studies published in this area [30]. It also included variables not previously studied that could affect the relationship between vitamin D and dental caries, such as children's activities and the season of blood sample collection. Limited data on vitamin D concentrations among the European paediatric population are available from several countries [9]. A recent systematic review verified that, despite the abundance of solar UVB radiation in the Southern Europe and Eastern Mediterranean regions, more than one-third of the studies reported mean 25(OH) D levels <20 ng/mL. That systematic review highlighted an evident vitamin D deficiency across all population subgroups, which was highest among neonates/infants and adolescents, who go through critical periods of bone and overall growth and development [41]. The mean level of 25(OH) D (standard deviation) in our whole sample was 27.9 (8.2) ng/mL. Considering the seasons, the children's mean 25(OH) D concentration was 30.2 (8.8) ng/mL in the summer months and 24.3 (5.5) ng/mL in the winter months. When comparing the 25(OH) D levels of our Portuguese sample with other Southern Europe countries, children from Spain (Pamplona and Asturias), Italy (Tuscany, Florence and Verona) and Turkey (Istanbul) had 25(OH) D levels significantly lower than our children sample. Even if stratified by season, Istanbul's mean levels of vitamin D are significantly lower than those observed in our sample in both the summer and winter months [41]. Interestingly, in our sample, the season of blood collection (summer vs. winter) did not affect the association between dental caries and vitamin D levels. This finding may result from children having insufficient sun exposure, even in summer months, due to a more sedentary life, and using excessive amounts of high-factor sunscreen when in the sun, following skin cancer prevention campaigns [39]. Despite advances in prevention and management, 60% to 90% of schoolchildren experience dental caries, potentially resulting in pain, infection, and hospitalisation [1]. Dental caries is one of the most common diseases observed in paediatric patients worldwide [42], and thus, understanding the underlying mechanisms that relate early life events to a later occurrence of carious lesions will be key to develop current and more holistic long-term dental-caries preventive strategies in the future. A growing body of evidence has reported vitamin D may help in preventing dental caries through its role in enamel and dentin formation [43] and induction of defensins and cathelicidins, which have antimicrobial properties [12]. 
Furthermore, interventions that provide adequate levels of vitamin D are theorised to reduce the prevalence of dental caries in children, affecting other health outcomes [24]. Herzog et al. reported no significant association between different vitamin D levels and dental caries in the mixed dentition of noninstitutionalised children aged 5 to 12 years in the United States [24]. These results are in line with ours regarding the experience of advanced dental caries in the mixed dentition since these authors did not consider incipient caries lesions. Dudding et al. found no evidence of an inverse causal effect of vitamin D on dental caries but found an association between low vitamin D and early caries onset [44]. On the other hand, Schroth et al. [20], who also examined the association between vitamin D levels and dental caries experience in the mixed dentition stage in a representative sample of Canadian children aged 6 to 11 years, suggested that optimal vitamin D concentrations (≥30 ng/mL) were associated with 39% lower odds of dental caries and dmft/DMFT in young school-aged children [20]. Our results for mixed dentition did not corroborate this finding. Furthermore, a recent randomised controlled trial reported no relationship between lower vitamin D levels and a higher risk of dental caries in permanent and primary teeth. Nevertheless, a high dose of vitamin D supplementation during pregnancy was associated with approximately 50% reduced odds of enamel defects in the offspring at 6 years of age [22]. In our sample, optimal children's 25(OH) D levels (≥30 ng/mL) were associated with 56% lower odds of advanced dental caries in permanent teeth. These results are in agreement with the conclusions of Grant's review, which suggested that optimal 25(OH) D levels were protective against caries [12]. Kühnisch et al. found that higher serum 25(OH) D values were associated with a reduced incidence of caries in permanent teeth [29]. This finding is in line with our results concerning advanced dental caries in permanent teeth since their results are related to cavitated dental lesions. Kim et al.'s results also agree with ours, as they found that children with low vitamin D levels had a higher proportion of caries in permanent teeth, mainly in permanent first molars [45], which represent the majority of permanent teeth assessed in our study. Other covariates that were associated with caries in this sample included gastrointestinal disorders, cariogenic foods and drinks, and children having had a dental visit at ≤7 years old. Frequent consumption of excess amounts of sugar-sweetened beverages is a risk factor for obesity, type-2 diabetes, cardiovascular disease and dental caries [46]. Llena and Calabuig verified that a cariogenic diet, especially soft drinks, was associated with a high overall DMFT score and a high DMFT score only in first molars [47]. Our results agree with these findings. Infrequent dental visits have been associated with an increased risk of untreated dental caries [48]. In the current study, children who had had a dental appointment at ≤7 years old had higher odds of having dental caries in both permanent teeth and mixed dentition than children who had never been to the dentist. This surprising finding might result from parents having sought dental care before because their child already had dental caries with treatment needs. The importance of preventive dental care in young children had not yet been instilled in caregivers. 
Our findings are in agreement with previous studies [20,49]. Our results should be interpreted considering the eruption period of all teeth that children present in their oral cavity at 7 years old. At this age, children have all primary teeth, which erupted between 6 months and 3 years of age, and permanent teeth that erupted between 6 and 7 years old. At 7 years old, the primary teeth have been more exposed than the recently erupted permanent teeth to other risk factors for caries, such as intake of sugar-sweetened beverages and foods and oral hygiene habits, and, thus, it is not surprising that the association between vitamin D and overall caries disappears over time. Accordingly, it is hypothesised that other risk factors for caries may overlap the possible preventive effect of vitamin D in the dental caries process. Therefore, low vitamin D levels may be related to caries only in permanent teeth that have recently erupted. One of the major strengths of this study was the use of circulating serum 25(OH) D levels, measured by a reliable assay, as this is the best indicator of total vitamin D from both endogenous and exogenous sources [17]. Furthermore, the inclusion of intervening variables not previously considered clearly contributed to a broader and integrated view on this issue. Another strength of this study was the adjustment of this association for maternal education. Maternal education has a direct effect on children's dental caries experience [50,51] and naturally influences socioeconomic status, which is a well-recognised social determinant of children's oral health [52]. Not only do socioeconomic factors influence oral health, but they also may place the children at risk of poor nutrition, thereby possibly impacting their 25(OH) D levels [7]. Another distinctive aspect that contributes to the quality of our study is the method used to record dental caries, based on the ICDAS II. Using this system, we categorised lesions as cavitated and non-cavitated and conducted a complex analysis based on the severity of the lesions that is not generally conducted in other investigations. The separate analysis of the mixed dentition and the permanent teeth also allowed obtaining more reliable results, considering that primary and permanent teeth erupt at different ages and, thus, the time of exposure to other caries risk factors could confound the results' analysis. Moreover, trained, calibrated dentists performed the dental examinations, which improved consistency in procedures and registration of the caries diagnosis, and they were blinded to the children's 25(OH) D levels. In our sample, the proportion of dental caries in the mixed dentition was within the expected range based on the third national survey performed in Portugal. However, dental caries prevalence in permanent teeth was much higher in our sample, from Porto, than in children from Northern Portugal in the national epidemiological survey [53,54]. A limitation of this study is the cross-sectional nature of data, which does not allow us to determine causality. This type of study design does not provide any prior knowledge of children's vitamin D status at the time their teeth were developing [29]. 
However, knowing that vitamin D levels may not change dramatically during childhood, those children with adequate and optimal levels of 25(OH) D at 7 years old likely had beneficial concentrations in the past, during the previous period of permanent tooth development, thereby ensuring proper dental development of enamel and dentin that would be more resistant to caries [20]. Another limitation of this study is that it did not include a randomly selected sample but rather a convenience one, so the generalizability of our findings may be limited. Dental examinations did not include radiographs and, therefore, the results may underestimate the true prevalence of untreated caries and restorations. We also recognise that including additional information related to children's prematurity, type of medication used, and diseases could have allowed a more accurate analysis of these variables' influence on the levels of vitamin D and, thus, contributed to obtaining more accurate results. The findings of this study suggest that 25(OH) D levels <30 ng/mL are associated with dental caries in permanent teeth. On the other hand, children's optimal 25(OH) D concentrations are associated with 56% lower odds of advanced dental caries in permanent teeth. Our findings were supported by a rigorous methodology, providing consistent and reliable results. Therefore, considering that vitamin D may influence oral health, its importance in preventing children's dental caries should be reinforced. Early childhood oral health policies must always focus on preventive measures regarding behavioural caries risk factors, but improving children's nutrition and interventions that provide adequate levels of vitamin D should also be considered a priority.

Conclusions
Based on the results of this study, children's 25(OH) D levels <30 ng/mL are associated with advanced dental caries in permanent teeth in 7-year-old children. In the mixed dentition, other social and behavioural factors appear to be associated with both dental outcomes in our Portuguese sample. Optimal levels of vitamin D in childhood may be considered an additional preventive measure for dental caries in the permanent dentition.
Supplementary Materials: The following are available online at https://www.mdpi.com/2072-6643/13/1/166/s1, Table S1: Comparison between the characteristics of the eligible participants and the remaining cohort evaluated at baseline * (number of participants and percentages; median and interquartile range). Table S2: Dose-response association between 25(OH) D levels and children's activities at 7 years of age (children's weekly minutes spent reading, watching TV, and doing outdoor activities). Table S3: Bivariate analysis between dental caries and advanced dental caries in mixed dentition and permanent teeth with independent (exposure) variables.
Data Availability Statement: Data used in this study are from the Generation XXI birth cohort, which is under the responsibility of Professor Henrique Barros, head of the Department of Public Health and Forensic Sciences, and Medical Education of the University of Porto Medical School, and president of the Institute of Public Health of the University of Porto. For the present study, individual-level information was used that cannot be disseminated due to confidentiality issues. A formal request to the person responsible (Professor Henrique Barros: hbarros@med.up.pt) can be made by anyone interested in developing scientific research based on data collected within the Generation XXI study.
Further information can be found at the Institute of Public Health website: http://ispup.up.pt/research/research-structures/.
2021-01-13T06:17:19.867Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a9f16f4b4e19be1cc63dc6a5914253da134294b3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/13/1/166/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e98e62ce7322a25002bbb1d4c7bd40fa49956bf4", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
59137365
pes2o/s2orc
v3-fos-license
Infrasound in the Atmosphere of the Earth
Infrasound is presented as a factor influencing the climate of the Earth. Results of research on the relationship between infrasound and solar activity are shown. It is necessary to identify the ecological aspects of the interconnections between infrasound, atmospheric effects, and parts of the biosphere.

Introduction
This work analyses the cases in which infrasound occurs in the atmosphere. Several factors produce infrasound waves. The specific influence of low-frequency acoustic waves on living beings is reviewed. The theme is based on the fact that the resonance frequencies of the most important human organs lie in the frequency range 0.5-20 Hz (L. Pimonov, V. Gavro, E.N. Malyshev, M.A. Isakovich, A.V. Rimsky-Korsakov, V. Tempest). Infrasound signal attenuation in the atmosphere is small, which reflects the proportionality of the decay factor to the square of the frequency. Therefore infrasound is sometimes termed an "ultrasonic neutrino". The absorption of infrasonic wave energy at a frequency of 0.1 Hz in the lower atmospheric layers amounts to 2×10⁻⁹ dB/km. Therefore, the study of infrasound is very important.

Main Part
Many world organizations are now involved in the study of the formation of space and atmospheric weather and of changes in the Earth's climate. In Ukraine these include the State Space Agency and other organizations from Dniepropetrovsk, Kiev, and Lvov. Worldwide they include the Intergovernmental Oceanographic Commission and the World Meteorological Organization. In the USA they include the National Center for Atmospheric Research (Boulder, Colorado) and the University Corporation for Atmospheric Research. The Intergovernmental Panel on Climate Change (IPCC) works in agreement with the national academies of sciences of the Group of Eight (G8), an international club that unites the governments of the United Kingdom, Germany, Italy, Japan, Russia, the USA, France, and Canada, among others. Research programmes on the Earth's climate change have been developed, for example the World Weather Research Programme, the World Climate Research Programme, and the International Geosphere-Biosphere Programme. Recently, more and more attention has been given to all possible oscillating motions of the atmosphere, including infrasonic waves. The purpose of the present work is to examine the generation of infrasonic waves in the Earth's atmosphere and their interaction with atmospheric phenomena. At present, the atmospheric links at levels below 200 km are the least clarified. Without definitively establishing them, it will be difficult to fully resolve the causes of solar-terrestrial cause-and-effect relationships. Infrasonic oscillations in the Earth's atmosphere arise from the activity of numerous sources: earthquakes (oscillations of the Earth's crust) and tsunamis. It is known that infrasound sources include: volcanic eruptions, waterfalls, thunderstorms, oscillations of the sea surface, forest fires, strong wind, atmospheric turbulence, human activity (explosions, gun shots, rotation of the blades of wind power generators, transport engines), electromagnetic radiation, the motion of meteors, galactic rays, gravitational actions of the Moon and the Sun, and corpuscular flows from the Sun. These also include earthquakes, mountains, sea storms, sources of vortices in the atmosphere, solar activity, geomagnetic variations, and others [1].
Scientists of the department of the Ukrainian Institute of Space of the State Space Agency of Ukraine and the Academy of Sciences (SSAU) in Lvov registered the infrasound of earthquakes in Turkey (A.A. Negoda, S.А. Soroka and other scientists) [2]. They determined that the infrasound is connected with solar activity. When solar activity is high, the infrasound level in the atmosphere decreases, and vice versa: when solar activity decreases, the infrasound level in the atmosphere rises. This raises further questions, such as: how is infrasound connected with the Sun's activity? An analysis of the seismicity phenomenon together with infrasound, carried out in Turkey over the period 1997-2000, has been performed. The spectral characteristics of infrasound and of seismic activity agree well. Fig. 1 displays the spectral densities of the diurnal energy of infrasound and of seismic activity for the period 1997-2000; infrasonic oscillations "are sensitive" to changes of seismic activity within a radius of up to 2000 km. The optimal radius of this region lies within the limits of 1000-1500 km. The greatest interest in IS waves arises from the analysis of the phenomena in the IOA observed before the catastrophic earthquakes in the region. Examination of infrasonic spectra in the Earth's atmosphere has revealed characteristic changes before large earthquakes. Infrasonic and gravity waves above 100 km, where their intensity is great, continuously raise and lower various atmospheric layers and in addition promote the mixing of the various components. Fig. 2 displays the energy of infrasound and solar activity in 1997-2000. Only during catastrophic earthquakes does the phase trajectory leave the attraction region. Moreover, the exit of the phase trajectory from the region of attraction and its entry into the dangerous region (marked in the figure by a vertical dotted line) does not occur instantaneously; it occurs over several days. The scientist V.I. Krasovsky [3] proposed dividing the atmosphere into the lower and the upper atmosphere. In the lower atmosphere there are clouds, which give rain and other precipitation. Earthquakes generate infrasound in the atmosphere. The infrasound reaches the surface of storm clouds and affects it, and the rain intensifies. Thus infrasound manifests itself as an integral factor that shapes the climate and the impact on the biosphere. In the upper atmosphere, astronauts have detected silvery (noctilucent) clouds (fig. 4). Various space particles, such as small meteorites, travel from space, stop, and crystallize at a distance of about 80 km from the surface of the Earth, turning into silvery clouds. Astronauts first noticed the wave structure of these clouds, and the length of the waves corresponds to the length of infrasonic waves. These clouds never produce precipitation; they only cast shade on the surface of the Earth. This depends on the temperature and pressure of the air in the atmosphere, and it is a factor influencing the climate of the Earth. The wave structure with infrasonic wavelengths arises because earthquakes produce infrasound. The infrasonic waves from the quakes travel into the upper atmosphere, and their amplitude does not change much. At a height of 80 km there is a sound channel, and the infrasonic waves (IS waves) turn through 90 degrees and continue to travel in a direction parallel to the surface of the Earth. The infrasonic waves move along meridians toward the poles of the Earth.
When infrasonic waves interact with magnetic particles, we can see the phenomenon of the aurora at the North Pole [3], because geomagnetic particles are connected with infrasonic waves. The work carried out has shown that this interaction has a substantial impact on the ionosphere. It has been shown that perturbation of the ionosphere by infrasound from earthquakes and volcanic eruptions is accompanied by the birth of magnetic storms. Earlier, scientists assumed that disturbances in the ionosphere (IOA) were connected only with solar flares. At the present stage, scientists also connect disturbances in the ionosphere with IS waves. A factor having a significant impact on infrasonic oscillations of the atmosphere is seismic activity. It can reveal preparatory processes and the intensity of seismic processes, and it can be connected with solar activity. Scientists have found such an effect in the analysis of global seismicity and 11-year solar cycles. The influence of seismic activity on the IOA is a very complex process and is not reduced only to piston-like radiation from oscillating lithospheric plates. Here it is necessary to consider manifold physicochemical processes, both in the lithosphere and in the atmosphere. The IOA can be generated by gas release from faults in the lithosphere during increases of seismic activity, by oscillations of lithospheric plates, and by aerosol inhomogeneities in the atmosphere. The IOA can create alternating stresses on the surface of the Earth and down to significant depths in the lithosphere. Infrasonic oscillations influence the velocity of fluid travel, the electric fields, and local seismic oscillations by stimulation in the lithosphere. Thus, infrasound in the atmosphere can be generated as an effect of seismic oscillations and can in turn influence the atmosphere. The character of the exchange of vibration energy between the lithosphere and the atmosphere can reveal the processes of preparation of large earthquakes. To examine the infrasonic channel of lithosphere-atmosphere links, two coefficients of seismic activity were introduced. The first is proportional to the square of the maximum magnitude on the given day in the given region, the second to the square of the sum of the magnitudes of all seismic events with magnitude ≥3 for a day in the given region. Two regions were considered: one spanning longitude 10°-45°E and latitude 35°-60°N, and the second longitude 10°-55°E and latitude 20°-60°N. The first and second regions include the main zones of heightened seismicity of central and eastern Europe, as well as Turkey. Infrasound measurements were made at a point with coordinates 48°41′N, 26°30′E [2]. It is noted that the infrasonic background reaches its greatest level during the maximum heating of the atmosphere. One of the sources of this phenomenon is forest fires. Over the last two years the reality of such processes has been confirmed. For a long time already, all kinds of infrasonic oscillations, including internal gravity waves, have been regarded as a source of warming of the upper atmosphere. These phenomena were most thoroughly investigated by the Canadian geophysicist Haynes. Scientists carried out the necessary measurements with the help of an infrasound measuring complex that includes two pressure modules. The modules are positioned 85 meters apart from each other. Measurements were conducted for 5 minutes, then 5 minutes of rest, and then measurements again. Having analysed the results obtained, they came to the conclusion that the level of the infrasonic background is not constant.
It varies both over the year and within a day. During the day it strengthens, reaching a peak at about 11 o'clock in winter and at approximately 16 o'clock in summer. That is, the infrasonic background reaches its greatest level during the maximum warm-up of the atmosphere. Therefore, it is realistic for infrasonic waves generated in the Earth's atmosphere by the operation of vertical-axis wind turbines to reach the layers of the ionosphere. From the above it follows that variations of the structure of the upper atmosphere, geomagnetic disturbances, and auroras cannot be explained only by agents of solar origin. The lower atmosphere essentially modulates effects of solar origin. One of the parameters characterizing auroras is the ion density. In the auroral region, nonuniform ionization arises because of the non-uniformity of the incoming flows of energetic charged particles. It appears that the ion density fluctuates because of gravity and infrasonic waves, in which there are changes in the density of the atmosphere and in the altitudes of the levels of its identical values. The fluctuation region is usually termed the diffuse F layer. The USA in 1969, the USSR in 1973, and the USSR together with France in 1975 carried out experiments on creating artificial auroras, during which a beam of high-energy electrons was injected into the atmosphere from a rocket at an altitude of several hundred kilometers. Carrying out controllable experiments together with four-dimensional observations opens new paths for the examination of auroras and of processes in the upper atmosphere. In the auroral zone there are jet currents. Jet currents in the auroral zone are rather impulsive and can consequently also give rise to oscillations over a broad spectrum of infrasonic waves with durations from seconds to several hours. The greater the wind speed, the more effective the conversion of jet-current energy into infrasonic waves. Interaction of electromagnetic radiation with optical inhomogeneities of the atmosphere can lead to the generation of ultrasonic oscillations in a broad band of frequencies. Therefore, it should be expected that the rhythms of solar activity will show up in the spectrum of infrasonic oscillations of the atmosphere. As a result of experiments on the observation of electromagnetic responses to infrasound perturbations in the atmosphere, created by means of a portable ultrasonic emitter, the link between infrasound and geomagnetic variations has been demonstrated. Thus, the Sun, the interplanetary medium, the atmosphere, and the lithosphere represent a unified system, and an essential role in the processes of their interaction is played by infrasonic waves. Recently, a lot of attention has been given to the infrasonic and magnetohydrodynamic waves originating both in the uppermost atmosphere and beyond its limits, below it and in the magnetosphere. The pressure of sound waves causes the atmosphere to swell. Waves with a frequency exceeding 0.1 g/c are ordinary waves and propagate with the velocity of sound (g is the acceleration of gravity, c the sound speed). Waves with a frequency smaller than 0.1 g/c travel with a somewhat smaller velocity. The latter, called gravity waves, always have a wavelength exceeding the height of a homogeneous atmosphere. Thus, the wavelength of gravity waves will be not less than a hundred kilometers.
As a result, temperature gradients and the thermal instabilities that generate the IOA are formed in various regions of the atmosphere. The infrasound thus organized can influence fluctuations of the intensity of interaction of ultra rays with atmospheric aerosols. In the Earth's crust, shocks and vibrations of very low sound frequencies from the most diverse sources, including explosions, are observed. Infrasound is characterized by small absorption in various media, owing to which infrasonic waves in air, water, and the Earth's crust can propagate over very large distances. Wind power is one of the promising directions for the solution of the given problem. Wind energy installations are typical emitters of infrasound. The most widespread in the world are two- and three-blade horizontal-axis (HA) wind energy installations (WEI) of the propeller type, as well as vertical-axis (VA) rotors of the Darrieus and Savonius type [4]. The periodic action of the rotating blades on the medium generates a sound field. The rotation frequency of the three-blade rotors of the Ukrainian ВЭУ-250С and ВЭУ-500С machines is 47.6 rpm. Their three-blade rotors generate infrasound with a frequency of 2.4 Hz in the surrounding medium. The rotation frequency of the two-blade rotors of the WEG-0020 and WEG-0030, developed by the International Scientific-Industrial Corporation "VESTA", is 28-90 rpm. Their two-blade rotors generate infrasound with frequencies of 2-7 Hz in the surrounding medium. It is shown that, for a blade 12 m in length at airflow velocities below 10 m/s, the frequency of the fundamental component calculated by the Stokes formula is infrasonic and amounts to 0.4 Hz. The characteristics of the sound field of the rotor of horizontal-axis wind power stations are calculated by a procedure developed in [4]. There it is shown that the acoustic field of the propeller has a directional characteristic. As a result of the studies performed, it is established that the noise level of the propeller acoustic field at the very low sound frequency of 2.4 Hz, at a distance of 300 m from the wind turbine, is 63 dB.

Summary
Infrasound sources in the Earth's atmosphere are systematized. The link of infrasound with phenomena in the upper and in the lower atmosphere is considered. Changes in the infrasound spectrum caused by lithospheric processes are described. The energy of infrasound increases as solar activity declines. 5-10 days prior to large earthquakes, the spectrum and phase portrait of infrasonic oscillations in the atmosphere change significantly, which can become a basis for a new method of earthquake forecasting. The infrasound from earthquakes and fires can serve as a harbinger, a signal, and a cause of cataclysms (explosions of methane ice). Analysis of the frequency characteristic of the acoustic field has shown that the operation of vertical-axis wind turbines generates infrasonic waves with frequencies of 4-7 Hz in the Earth's atmosphere. The presence of infrasound in the Earth's atmosphere, given the growing capacity of vertical-axis wind turbines, suggests the need for further analysis of the interaction of infrasonic acoustic fields with sunlight.
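As a simple check of the rotor figures quoted in the preceding section, the blade-passing frequency can be estimated as the number of blades times the rotation rate in revolutions per second. The sketch below reproduces the ≈2.4 Hz value for the three-blade 47.6 rpm rotor, under the assumption (not stated in the text) that the quoted infrasound frequency is the blade-passing frequency; higher harmonics would extend the emitted spectrum upward.

```python
def blade_passing_frequency_hz(n_blades, rpm):
    """Blade-passing frequency in Hz: blades per revolution times revolutions per second."""
    return n_blades * rpm / 60.0

# Three-blade rotor at 47.6 rpm (as quoted for the ВЭУ-250С/ВЭУ-500С machines)
print(blade_passing_frequency_hz(3, 47.6))   # ~2.38 Hz, consistent with the 2.4 Hz figure
```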
2019-04-23T13:25:17.076Z
2015-02-27T00:00:00.000
{ "year": 2015, "sha1": "53cd39f7cafd3b6004c56100449a2ee5ffa6615d", "oa_license": null, "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ejb.20150301.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7437d2dafa6f9b077952f93e0ef5d84389d6f28b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
225598971
pes2o/s2orc
v3-fos-license
Distributed Switched Optimal Control of an Electric Vehicle : Distributed control is investigated to solve an electric vehicle switched optimal control problem faster than centralized control without significant performance change. The powertrain includes a cooling system, supercapacitor, and two switched mode components: a battery with discharging and charging modes and an electric drive with motoring and generating modes. Control-oriented component power flow models are developed with mode and temperature dependence. Component specific power and thermal management optimization problems, subject to these models, require solution for overall powertrain management. The power management problem is switched, having discrete-valued mode selection variables. Both problems are solved in a distributed manner using the alternating direction method of multipliers (ADMM). An ADMM-based algorithm to solve the switched powertrain management problem is proposed; it (i) solves the embedded version of the switched problem that relaxes discrete mode switch values to continuous values and (ii) projects embedded mode selection values onto discrete values and then solves the problem with now known mode selections. The distributed solution approach is demonstrated using a trapezoidal drive profile and three regulatory profiles. The regulatory results are compared to centralized control and the proposed algorithm achieved at least a 3.3 times improvement in solution time with comparable drive performance. Introduction Electric vehicles with both battery and supercapacitor energy sources have gained interest recently since driver-demanded power can include large, sharp peaks and rapid changes in discharging/charging power, which reduces the usable life and efficiency of a battery if used alone. Given both energy sources, the supercapacitor supplies/absorbs high-frequency and large, sharp peak power demands while the battery supplies/absorbs power at lower frequency and wider peaks. A supercapacitor is not used alone because its energy density is significantly less than a battery. Modeling and control of the battery-supercapacitor powertrain has been investigated previously. Golchoubian et al. [1] use nonlinear model predictive control to manage the power of a battery-supercapacitor electric vehicle. Only the operation of the battery and supercapacitor are considered; their combined power outputs must meet the desired electric drive power. Model predictive control is implemented to minimize the squared battery current, promoting low battery duty and use of the supercapacitor to extend the battery lifetime. The electric drive demand is predicted using a Markov chain process trained on power data sampled over drive profiles. Deterministic and stochastic model predictive control are simulated over different drive profiles. Results show that stochastic optimal control based upon the Markov chain predictions can achieve similar performance compared to a deterministic control with full future knowledge of the drive profile. Optimal control of a battery-supercapacitor electric vehicle is also investigated by Zhang et al. [2]. The system model includes a battery, supercapacitor, and DC-DC converter. The power management consists of a drive profile categorizer, driving pattern predictor, optimal frequency splitter, and real-time predictive controller. 
The categorizer and driving pattern predictor together recognize whether the profile is highway, urban, or a combination and predict a real-time driving pattern via a neural network. The optimal frequency splitter is a low-pass filter on the electric drive power demand with a cutoff frequency that is optimized to minimize the battery degradation and energy. Lastly, the real-time predictive controller sets the battery power to the value that minimizes the battery degradation with respect to power values scaled by the reliability of the estimated driving pattern. Control simulations show that the proposed method results in better prediction accuracy and lower system operation cost compared to control based upon a Markov prediction based fuzzy logic strategy. The models and controls in [1,2] do not consider the powertrain as a switched system with components that have dynamics and algebraic relationships that change depending on the direction of power flow. Meyer et al. [3] investigates the battery-supercapacitor electric vehicle as a switched system that includes battery, supercapacitor, electric drive system, and vehicle dynamics. The battery and electric drive system models change depending on whether the former is discharging or charging and the latter is motoring or generating. Each valid system power flow configuration that has some unique dynamics and/or algebraic relationships is termed a mode of operation. Four modes are defined: battery discharging/supercapacitor discharging/electric drive motoring, battery discharging/supercapacitor charging/electric drive motoring, battery discharging/supercapacitor charging/electric drive regeneratively braking, and battery charging/supercapacitor charging/electric drive regeneratively braking. Each mode has associated with it an integer mode switch variable that indicates whether the mode is on or off. Power management is formulated as a model predictive control problem. The control cost function to minimize includes velocity reference tracking error, mode switched mode-specific frictional braking values (so as to maximize regenerative braking), and the difference in supercapacitor state of charge from fully charged at the end of the prediction horizon. The control problem is a switched optimal control problem (SOCP) because it includes a switched system model and a cost function with mode switched terms; the SOCP has both continuous-valued control inputs and discrete-valued (integer) mode switch variables, which precludes application of traditional model predictive control and numerical solvers. To solve the SOCP, the embedding method is applied. The embedding method embeds the discrete-valued mode switch variables into continuous ones, creating a continuous-valued embedded optimal control problem (EOCP) that is solvable using traditional nonlinear programming techniques [4]. If the solution of the EOCP results in a non-integer mode switch value, then projection is applied to obtain control values applicable to the original switched system [5]. In comparison to common SOCP solution methods such as mixed-integer programming, the embedding method more often results in the existence of a solution (guaranteed to exist under mild conditions) and lower numerical solution times [6]. The control approach is shown to produce effective power management over several drive cycles. However, the switched system control is centralized and the EOCP solution times, even though lower than other SOCP solution methods, are not suitable for practical application. 
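To make the embedding step concrete, the following minimal sketch relaxes a single mode switch to a continuous value, solves the relaxed problem, and then projects the mode and re-solves. The cost function, the mode-specific efficiencies, the demand value, and the use of scipy's general-purpose solver are illustrative assumptions only; this is not the powertrain formulation of [3].

# Toy illustration of the embedding method for one switched decision.
# All numbers and the cost are made up to show the embed -> project -> re-solve pattern.
import numpy as np
from scipy.optimize import minimize

P_DEMAND = 30.0            # hypothetical power demand [kW]
ETA = {0: 0.95, 1: 0.90}   # hypothetical mode-specific efficiencies


def cost(x):
    # x = [alpha, u0, u1]: embedded mode weight and mode-specific inputs.
    alpha, u0, u1 = x
    supplied = (1.0 - alpha) * ETA[0] * u0 + alpha * ETA[1] * u1
    tracking = (supplied - P_DEMAND) ** 2    # meet the demand
    effort = 0.01 * (u0 ** 2 + u1 ** 2)      # penalize control effort
    return tracking + effort


def solve(alpha_fixed=None):
    # Solve the relaxed (embedded) problem, or the projected problem
    # when alpha_fixed is 0 or 1.
    x0 = np.array([0.5, 10.0, 10.0])
    bounds = [(0.0, 1.0), (0.0, 100.0), (0.0, 100.0)]
    cons = []
    if alpha_fixed is not None:
        cons.append({"type": "eq", "fun": lambda x: x[0] - alpha_fixed})
    return minimize(cost, x0, bounds=bounds, constraints=cons).x


x_embedded = solve()                           # (i) embedded problem
alpha_proj = 0.0 if x_embedded[0] < 0.5 else 1.0
x_projected = solve(alpha_fixed=alpha_proj)    # (ii) projected problem
print("embedded mode weight:", round(float(x_embedded[0]), 3),
      "-> projected mode:", int(alpha_proj))

If the relaxed solution already lands on 0 or 1 (a bang-bang solution), the second solve is unnecessary; that is exactly the distinction exploited later in the paper.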
Distributed control is a potential remedy for slow power system management EOCP solution times that achieves similar solution accuracy to centralized control. Distributed control of power systems has been investigated to reduce computation time through parallelization at the expense of having to pass data between components with computational capability. Typically in distributed control, each component (i) solves a component level specific control problem that is smaller than the centralized system wide control problem, (ii) broadcasts a select subset of its results, (iii) receives pertinent results from other components, (iv) updates its own control problem in a way to work toward a system control solution given its own results and results received, and (v) returns to (i) with the updated component control problem unless some convergence criteria are met. A popular approach to distributed control is the alternating direction method of multipliers (ADMM) [7,8]. Component connecting variables, termed complicating variables, are identified and then (i) component level optimal control problems are solved with respect to the current values of the complicating variables and dual variables, i.e, Lagrange multipliers associated with satisfying component connection constraints, to find minimizing values of the component level problem variables, (ii) the complicating variable values are updated to satisfy optimality conditions given the current component level problem solution variable values, and (iii) the dual variable values are updated to fulfill dual feasibility given updates in (i) and (ii). ADMM is well-known to converge given convex component level problems and linear connection constraints between components. East and Cannon [9] propose ADMM for a parallel hybrid electric vehicle powertrain. To avoid discrete (integer) control variable in the control optimization, heuristics control the engine clutch engagement on/off state and fixed-gear transmission selection. Control problem convexity is ensured by modeling the engine and electric drive loss maps as quadratic functions, assuming that the battery dynamics voltage and resistance are invariant with battery power, and showing that the system dynamics are linear under the assumption that increasing output power requires increasing engine or electric drive output. The control objective is to minimize the engine power losses, which is equivalent to minimizing fuel use, while satisfying the power demands of the driver. Simulations of the vehicle control using ADMM and dynamic programming to solve the centralized problem are performed over drive cycles where dynamic programming results are used to tune the ADMM parameters. Variations in power demand and prediction horizon show little variation in the ADMM solution time with the former and computation time decreases as the latter decreases. The ADMM fastest solution times are approximately forty times less than those from dynamic programming at comparable cost values. East and Cannon [10] consider the parallel hybrid electric powertrain again to evaluate computational characteristics of different solution methods for the convex control problem. Multiple control test cases are solved using ADMM and a projected interior point method applied to the centralized problem. The ADMM demonstrates sublinear convergence while the interior point shows superlinear. The total ADMM and interior point method solution times scale quadratically and cubically, respectively, with the problem horizon length. 
In this work, the ADMM solution time is approximately fifty times less than that of the interior point method in the longest prediction horizon case for similar cost values. Romijn et al. [11][12][13] investigate complete vehicle energy management of a parallel hybrid heavy-duty vehicle with ADMM. Components include a high voltage battery system, electric drive, engine, gearbox, and refrigerated semi-trailer. Component operation is described with power state variables with power conversion efficiencies modeled as quadratic functions; component connections are enforced by the conservation of power. Individual component control objectives are to minimize their respective energy losses to minimize the total fuel consumption; additionally, the battery state of charge at the end of the prediction horizon is to equal the starting value. ADMM and quadratic programming are used to solve the centralized vehicle energy management problem composed of the individual control objectives. ADMM solution times are approximately seven times less and have similar accuracy compared to quadratric programming results at their longest shared prediction horizon. Nilsson et al. [14] consider the ADMM-based distributed energy management of heavy vehicle ancillary systems that include the cooling system, electrical system, and engine accessory loads. The control goal is to minimize the fuel use of the ancillaries over a drive cycle while respecting electrical bus bounds and available energy. The conservation of power is applied to each component to obtain component models where components with energy storage have a storage cost. The models are reformulated to be convex using second order cone constraints and then ADMM is used to solve the resulting convex control problem. ADMM simulation solutions are compared to convex and nonconvex centralized problem solutions and show little difference. ADMM solution times and cost comparisons have also been reported for power system management. Erseghe [15] explores microgrid distributed power flow management using ADMM. Reasons given for using the approach are alleviation of privacy/security concerns since only a limited amount of data is exchanged between nodes and the ability to localize fault handling. The power system consists of transmission line connected nodes with both generation and load consumption abilities. The control problem cost function is convex but the problem is nonconvex due to voltage cross coupling terms in the constraints. Convergence is shown to occur under the weak assumption that the component problem is solvable. Several different wide area networks are simulated. The ADMM solution times are slower than that of an interior point method applied to the centralized problem; both methods result in similar costs. However, the ADMM solution time increases slower than that of the interior point method as the network size increases. Wang et al. [16] uses ADMM to manage a microgrid that consists of electric vehicle charging stations, battery energy storage, and solar arrays. The control problem is to minimize load power tracking error, curtailment of solar output power, and storage and vehicle battery cycling. The microgrid component level control problems are convex with quadratic cost functions subject to linear models and bounds. Control simulation shows that using ADMM results in similar cost to a centralized problem solution and ADMM solution times suitable for real-time implementation. Liu et al. 
[17] investigates a convex, connected microgrid management control problem using ADMM. Models of diesel generators, battery storage systems, and renewables are associated with each of the microgrids. The objective is minimize total operating cost of the microgrids while protecting the privacy of internal microgrid data and the totality of power exchanges between the microgrids. A case study involving three microgrids operating together is shown to converge and provide the needed power. The ADMM control costs are comparable to a centralized control solution approach. The application of ADMM to the solution of a powertrain power management EOCP has been problematic since they are usually nonconvex [3,5,18,19]. However, recent ADMM advances have resulted in its broader applicability to certain classes of nonconvex problems with linear connection constraints. Wang et al. [20] shows that convergence is guaranteed for nonconvex cost functions that have decoupled complicating variable costs and component level variable costs and are continuous and differentiable with a Lipschitz continuous gradient. Additional conditions are that the variables are bounded and the linear connection constraints have full column rank in both complicating variables and component level connection variables to ensure uniqueness. Indicator functions can be incorporated into the cost function to signal whether or not component level variables are feasible within a possibly nonconvex compact set. Applications presented include statistical learning, minimization on compact manifolds, smooth optimization over complementary constraints, and matrix decomposition. Ferranti et al. [21] applies the results to perform distributed nonlinear model predictive control of multiple autonomous robot vessels. The overall control problem is for each robotic agent to navigate their path while avoiding collisions with other agents. The centralized problem is reformulated to meet the conditions in Wang et al. [20]; an indicator function signals whether the robot motion is feasible with respect to its nonlinear dynamics model. This work proposes battery-supercapacitor electric vehicle SOCP-based powertrain management with the solution of EOCPs using ADMM to achieve faster solution times and comparable control costs, i.e., performance, with respect to a centralized control problem solution. Unlike past efforts, the powertrain management includes both power management and thermal management. Appropriate powertrain power-flow-oriented component models are developed for both power and thermal management. The power and thermal management are formulated as separate control problems because of their difference in dynamics response time scales. Component level power management problems that include SOCPs are formulated in preparation for distributed control. The SOCP problems are transformed into EOCPs and a distributed control solution algorithm based upon recent ADMM advances to ensure convergence is set forth. A second algorithm is proposed to project the component EOCP control input solutions back to control inputs that are applicable to the original switched components. The thermal management is not a switched control problem, thus the first ADMM-based algorithm is applicable and the projection performed by the second algorithm is not needed. 
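The proposed two-stage flow, solving the embedded problem with ADMM and then, only when needed, a projected problem with fixed modes, can be outlined as below. This is purely a schematic: the function, argument, and attribute names are placeholders, and the actual component problems, ADMM updates, and tolerances are those defined in the following sections.

# Schematic of the proposed two-stage solution at one control update.
# "admm_solve" stands in for the ADMM iteration (Algorithm 1 later in the
# paper); its interface and the attribute names here are placeholders.

def is_binary(modes, tol=1e-6):
    # True when every relaxed mode value is numerically 0 or 1.
    return all(min(abs(a), abs(a - 1.0)) <= tol for a in modes)


def project(modes):
    # Project relaxed mode values onto {0, 1}, thresholding at 0.5.
    return [0 if a < 0.5 else 1 for a in modes]


def solve_switched_step(component_problems, admm_solve):
    # (i) solve the embedded (relaxed) problem with distributed ADMM.
    embedded = admm_solve(component_problems)
    if is_binary(embedded.modes):
        return embedded  # bang-bang solution; no projection needed
    # (ii) re-solve with modes fixed at their projected values
    #      (the projected optimal control problem, POCP).
    return admm_solve(component_problems, fixed_modes=project(embedded.modes))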
A Tesla Model S with the addition of a supercapacitor is simulated over test and regulatory drive profiles to evaluate the solution times and control costs obtained from ADMM and a centralized control solution approach. Specifically, Section 2 develops the powertrain component models and control objectives for both power and thermal management. Section 3 outlines the distributed approach to solving an SOCP via the solution of an EOCP and, if needed, solution of the projected control inputs. Next, Section 4 gives simulations results over a short, severe-duty trapezoidal drive cycle and three common regulatory profiles. Comparisons are made between the distributed and centralized control problem solutions using regulatory profile results. Conclusions and future work directions are set forth in Section 5. Component Models and Controls The vehicle herein is based upon a Tesla Model S with the addition of a supercapacitor to protect the battery from the expected rapid fluctuations and sharp peaks in vehicle power demand that can reduce its efficiency and usable life. Figure 1 shows the powertrain architecture that consists of a 225 kW induction motor-based electric drive system (EDS), 59.6 kWh Lithium-Ion battery pack, and 168 Wh supercapacitor; the EDS, battery, and supercapacitor are connected via an electrical bus and the EDS and drivetrain are joined with a mechanical bus. Not shown is the cooling system that is connected by a thermal bus to the powertrain electrical components. Operation of the powertrain is divided into two tasks: power management and thermal management. The power management task must operate at a much faster update rate than the thermal task since the motion dynamics are much faster than those of the latter task. Each task is performed using distributed control, with the power management task requiring distributed switched system control. The power management task regulates the power flow between the vehicle motion, electric drive system, battery, and supercapacitor. The task is a switched control problem since both the battery and EDS models switch depending on the direction of power flow, where the battery has discharging and charging mode models and the EDS has motoring and generating mode models. Distributed control is to be used to perform the power management and the vector of complicating variables, i.e., the variables that connect the different components, is ,c is the battery electrical bus power, P p c is the supercapacitor electrical bus power, P p m is the mechanical bus power at the drive wheel axle, and ω p m is the angular velocity of the EDS motor shaft and the input into the final gear set connected to the drive wheel axle. The thermal management task regulates the cooling system to keep the battery, supercapacitor, and EDS at temperatures that maximize efficiency within their operational ranges. The task is an unswitched control problem that is to be solved with the distributed approach. The thermal management task distributed control complicating variable vector is ψ t = [P t b,clt , P t c,clt , P t d,clt , T t clt ] : P t b,clt is the battery coolant transfer power, P t c,clt is the supercapacitor coolant power transfer, P t d,clt is the EDS coolant power transfer, and T t clt is the cooling system coolant temperature. In preparation for the control developments and simulations, the battery, supercapacitor, EDS, vehicle dynamics, and cooling system are detailed. Models and task-specific controls are presented next for each component. 
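Before the component details, a small bookkeeping sketch shows how the complicating variable vectors and a generic component connection residual might be represented. The element ordering follows the text; the zero right-hand side of the connection constraint and the numerical values are assumptions, since the paper only states that the coupling matrices are "appropriate."

import numpy as np

# Power-management complicating variables psi_p (shared across components):
#   [battery bus power, supercapacitor bus power,
#    mechanical bus power at the drive wheel axle, EDS shaft speed]
psi_p = np.array([12e3, 3e3, 55e3, 420.0])     # example values only

# Thermal-management complicating variables psi_t:
#   [battery coolant power, supercapacitor coolant power,
#    EDS coolant power, coolant temperature]
psi_t = np.array([1.5e3, 0.2e3, 4.0e3, 27.0])  # example values only


def connection_residual(A_i, z_i, B_i, psi):
    # Residual of one component's linear connection constraint, assumed
    # here to take the form A_i z_i + B_i psi = 0.
    return A_i @ z_i + B_i @ psi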
Battery The 59.6 kWh Lithium-ion battery supplies and absorbs energy from the electrical and thermal buses. The battery is composed of 420 Saft 10.8 V, 12 Ah Lithium-Ion modules [22,23] composed of 12 parallel strings of 35 modules in series that provide 375 V to the EDS. The mode switched battery electrical dynamics without temperature effects are [3] is the battery's maximum rated storage energy; α b is the battery mode switch variable; P α b b is the mode specific battery power; modulate the maximum discharging and charging battery powers; η α b b is the discharge/charge efficiency; and k 0 /k 1 and . . , 4, are both discharge/charge fit coefficients. Unlike past work, the fit coefficients herein are developed as quadratic functions of battery temperature T b from data in [22,23]: Further, battery power changes are constrained to reduce the possibility of damage from rapid power fluctuations: dP where ∆ P b = 15 kW/s is the absolute value of the power rate limit. The battery interfaces with the electrical bus via P b,c , the mode-weighted convex combination of the discharge and charge powers: The connection to the power management complicating variables is the linear equality are appropriate matrices. Appendix A contains additional electrical model data. While supplying/absorbing electrical power, the temperature T b of the battery may change. The battery temperature dynamics are where α b is known, m b is the battery mass, C b is the battery specific heat, P b,clt is the power transfer to the cooling system coolant, and h A,b is the heat transfer coefficient between the battery and coolant with temperature T clt . The connection of component level variables to the thermal management complicating variables is for thermal management and A t b and B t b are appropriate matrices. Appendix A contains the thermal model data. Battery Power Management Operation of the battery component is a switched control problem since α b ∈ {0, 1} must be determined. The electrical power switched optimal control problem is (12) subject to Equations (1)-(8) and convex and compact variable bounds where [t p p,0 , t p p, f ] is the power management prediction horizon, t p p,0 is the current time, J p b is the battery power cost function, and q b,P weights use of battery power to promote use of the supercapacitor described shortly. Battery Thermal Management Unlike the electrical dynamics, the battery temperature optimal control problem is not switched. The goal of the control is to regulate the temperature to increase efficiency at the battery power requirement. Specifically, (14) subject to Equations (2) and (9)-(11) and convex and compact variable bounds where [t t p,0 , t t p, f ] is the temperature management prediction horizon, t t p,0 is the current time, J t b is the battery thermal cost function value, q b,η penalizes variation on efficiency from unity so as to promote ideal operation, and P α b b , W b , and α b are considered known from power management operation. Supercapacitor The supercapacitor consists of 139 Maxwell BCAP1200 supercapacitors [24] in series that store 168 Wh at full charge. The non-ideal supercapacitor electrical dynamics power flow model without temperature effects is presented in [3]. 
The model describes a circuit with a resistor, R s , in series with the parallel combination of an ideal supercapacitor, C, and resistor, R p : where Equation (16) represents the power evolution in the parallel legs of the circuit; Equation (17) is the power balance between the input power, the power lost in the series resistor, and the power in the parallel legs; W c is the state of charge; W max c is the maximum energy of the supercapacitor; P cp is the sum of the power in C and R p ; and P c ∈ [−208, 208] kW is the supercapacitor electrical bus interface power that is greater than or equal to zero during discharge and less than zero during charge. Similar to [25], R s and C have a temperature dependence of where T c is the temperature, C 25 • C and R s,25 • C are the capacitance and resistance at 25 • C and c C,i , i = {0, 1}, and c R s ,j , j = {0, 1, 2}, are component temperature variation data fitting coefficients. The connection constraint that connects the supercapacitor to the powertrain electrical bus is where P c is the component level connection variable, z p c = [W c , P c , P cp ] is the vector of power management states and algebraic variables, and A p c and B p c are appropriate matrices. Appendix B provides the supercapacitor parameters. To obtain the supercapacitor temperature, the thermal dynamics are modeled as where m c is the mass of the supercapacitor, C c is the specific heat of the capacitor, P c,th is the power dissipated by the resistors, P c,clt is the power transfer between the supercapacitor and coolant, h A,c is the heat transfer coefficient between the supercapacitor and coolant, and η c is the efficiency of the transfer of the input power to the capacitor, needed for control. The thermal management task complicating variables are included using with z t c = [P cp , P c,clt , T clt , η c ] and A t c and B t c are appropriate matrices. Appendix B contains thermal model data. Supercapacitor Power Management Supercapacitor control herein seeks to maintain the SOC equal to one, i.e., full charge, at the end of an optimal control prediction horizon. Thus the supercapacitor is available without penalty over the majority of the prediction horizon to fulfill electrical power imbalances that occur because of battery power rate of change limits. The desire to have the SOC at one at the end of the prediction horizon promotes its charging and future availability to supply energy. Specifically, the optimal control problem is to minimize with respect to P c and subject to Equations (16)- (20) and convex and compact variable bounds where q W c weights the variations in SOC from full charge at the end of the prediction horizon. Supercapacitor Thermal Management The goal of the control is to regulate the temperature to increase efficiency. Specifically, Equations (21)- (25), and convex and compact variable bounds where J t c is the supercapacitor thermal cost function value and q c,η is the penalty weighting on inefficiency; W c and P c are known from power management operation. Electric Drive System The electric drive system (EDS) is a 225 kW maximum power, 0-16,000 rpm induction motor and bidirectional AC-DC inverter that provides motoring power and regenerative braking to recharge the battery and supercapacitor. The EDS power conversion efficiency and operational envelope are shown in Figure 2. 
The motor's maximum mechanical power rises linearly from zero at zero speed to 225 kW at 5000 rpm-denoted as region 1, remains constant to 8000 rpm-denoted as region 2, and then decreases nearly linearly to 92 kW at 16, 000 rpm-denoted as region 3. These regions and efficiency curves are based upon the control approach described in [3]. The EDS has two modes with the mode switch α d = 0 for motoring and α d = 1 for generating. The EDS is modeled algebraically as in [3] since the electrical dynamics are much faster than the typical power flow changes observed for vehicle motion. For motoring: the electrical power P 0 d,e ≤ 0, the mechanical power P 0 d,m ≥ 0, and and during generating: where η d,m is the motoring/generating motor power transfer efficiency ω d is the motor shaft angular speed, η d,inv is the inverter efficiency, β models field weakening above the rated speed ω d,r , c d,1 and c d,2 are constants that are functions of the motor parameters, d 1 is a regularization term to prevent division by zero at zero speed and/or zero mechanical power, and u 0 d , u 1 d ∈ [0, 1] modulate the maximum mechanical power in motoring and generating, respectively. The motor operation variation with temperature is modeled with temperature dependent stator and rotor copper resistances as in [26], which results in c d,1 and c d,2 of where T d is the EDS temperature, c d,i,25 • C is the value at 25 • C, and α copper = 0.004 • C −1 scales the copper resistance change from 25 • C. Electrical model parameter values, including the expression for P max d,m (ω d ), are given in Appendix C. The EDS is connected to the electrical and mechanical buses with mode weighted electrical and mechanical power values, P d,e,c and P d,m,c , respectively: where α d is known, m d is the EDS thermal mass, C d is the EDS specific heat, P d,clt is the power transfer between the EDS and coolant, and h A,d is the heat transfer coefficient. The coupling of component level connection variables to the thermal complicating variables is Electric Drive System Power Management Operation of the EDS is a switched optimal control problem since it can operate in either motoring or generating modes. Specifically, the problem here is subject to Equations (29)-(37) and convex and compact variable bounds where J p d is the EDS power task cost function and q d,u weights the control inputs in each mode. Weighting the EDS control inputs has been shown to encourage bang-bang solutions, i.e., those with mode values in {0, 1}, from the numerical optimization of the (to be presented) embedded optimal control problem formulation as demonstrated in [3,5]. Electric Drive System Thermal Management The EDS thermal optimal control problem is not switched. The goal of the control is to maximize efficiency. The control problem is to minimize subject to Equations (30), (32), (33) and (38)-(40) and convex and compact variable bounds where J t d is the thermal cost function value, q d,η penalizes variation on efficiency from unity so as to promote ideal operation, and P α d d,m , ω d , and α d are known from power management operation. 
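The temperature dependence of the EDS coefficients is described in words above, but the corresponding equation is not reproduced here; a plausible reconstruction, assuming the coefficients scale linearly with the copper-resistance change away from 25 °C, is sketched below using the Appendix C values.

# Hypothesized reconstruction of the EDS coefficient temperature dependence:
# coefficients take their 25 degC values scaled by the copper-resistance
# change away from 25 degC. The linear form is an assumption consistent
# with the description in the text, not the paper's exact equation.

ALPHA_COPPER = 0.004   # 1/degC, from the text
C_D1_25C = 5.08e-2     # coefficient value at 25 degC (Appendix C)
C_D2_25C = 26.9        # coefficient value at 25 degC (Appendix C)


def eds_coefficients(T_d):
    # Return (c_d1, c_d2) at EDS temperature T_d [degC], assuming
    # R(T) ~ R_25C * (1 + alpha * (T - 25)) for the copper resistances.
    scale = 1.0 + ALPHA_COPPER * (T_d - 25.0)
    return C_D1_25C * scale, C_D2_25C * scale


# Example: coefficients at 40 degC, the upper end of the stated operating range
print(eds_coefficients(40.0))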
Vehicle Vehicle motion is expressed as a Lyapunov energy function, Υ = v 2 (where v is velocity), to remove a singularity at zero velocity that occurs with a standard point-mass, linear motion dynamical model with power inputs [3,5]: where Υ ∈ [0, 2874]m 2 /s 2 (considering only forward motion), P d is the drag force power, P rr is the rolling resistance, P g is the power due to the body gravity force, P w is the wheel power, P f is the frictional braking power, ρ air is the ambient air density, m v is the total vehicle mass, A f r is the vehicle frontal area, C d is the drag coefficient, C rr is the tire rolling resistance, and θ r is the road grade angle. The braking power is where P max f is the maximum braking power and u f ∈ [0, 1] modulates the braking power available at the current velocity. Component level connection variables are joined to the mechanical bus complicating variables with where R f d is the gear ratio from the EDS to the drive wheel axle, ω v = v/r whl is the angular velocity of the drive wheel, z Vehicle Power Management The vehicle component level optimal control problem is to determine the propelling and braking power to track a desired reference velocity. The problem is to subject to Equations (45)-(47) and convex and compact variable bounds where J p v is the cost value, q Υ is a penalty weight on tracking error, and q brk is a penalty weight on frictional braking so as to promote regenerative braking. Cooling System The cooling system regulates waste power due to power conversion inefficiencies to maintain the temperatures of the battery, supercapacitor, and EDS. The system coolant is assumed to only exchange heat with the battery, supercapacitor, EDS, and a controllable heat exchanger. The coolant temperature dynamics are where T clt ∈ [0, 40] • C is chosen as the intersection of the components' operating ranges, m clt is the mass of the coolant and C clt is the coolant specific heat, and P b,clt , P c,clt , and P d,clt are local values of coolant power values; P hex is the thermal power removed by the remainder of the cooling system that consists of heat exchanger(s) between the coolant and ambient and coolant pump and fan(s) to regulate heat transferred, similar to the cooling approach in [27]. Due to a lack of publicly available data on the majority of the Tesla Model S cooling system, P hex is approximated with where P max hex is the maximum thermal power that can be dissipated by the cooling system, T max clt is the maximum allowable coolant temperature, T amb is the ambient temperature, and u hex ∈ [0, 1] regulates the cooling power. The bases for the choice of P hex are (i) the ability to exchange coolant heat with the ambient is dependent on the temperature differential and (ii) the true coolant flow and cooling air flow rates are assumed to be adjustable by a controller such that P hex is achievable. The component level connection variables are coupled to the complicating variable via connection constraints: where z t t = [T clt , P b,clt , P c,clt , P d,clt , u hex ] is the vector of cooling system variables and A t t and B t t are appropriate matrices. Appendix E lists cooling system parameters. Cooling System Thermal Management The cooling system regulates the temperatures of the battery, supercapacitor, and EDS, to keep them at temperatures such that their combined efficiency is maximized. 
Efficiency is local to the battery, supercapacitor, and EDS components, thus the cooling system optimal control problem is posed subject to Equations (50)-(52) and convex and compact variable bounds, where q u hex penalizes use of the cooling system. A penalty on cooling system use is analogous to a penalty on the energy used to operate the cooling system, even though that energy is not explicitly modeled herein.

Distributed Control Development
In preparation for control development, the component level control problems are approximated in discrete time for task j using forward-Euler and trapezoidal numerical integration with a time step of h j; this discretization yields a generic representation of each component level discrete control problem.

Switched, Embedded, and Projected Optimal Control
When components include switched (discrete-valued) mode control inputs α j i, the use of the embedding method to solve the switched optimal control problem (SOCP) avoids the computational complexity of mixed-integer programming and results in faster solutions that have equal or lower cost values [6]. To apply the embedding method, the discrete-valued SOCP mode variables α j i are relaxed to continuous values α̂ j i ∈ [0, 1]. The change from SOCP variables to embedded ones is denoted with a hat, where Z j i → Ẑ j i (which includes α̂ j i) and Ψ j → Ψ̂ j. The new control problem with the change in SOCP variables to embedded ones is the embedded optimal control problem (EOCP). The sufficient conditions for the existence of an EOCP solution are that the dynamics must be linear in the continuous control inputs u, mode-specific continuous control inputs are defined, and the optimization cost function must be convex in the continuous control inputs. This means that there exists at least one (possibly non-unique) minimum; there could be an infinite number of optimal solutions. Further, the switched system trajectories are dense in the embedded system trajectories such that all possible SOCP solutions are EOCP solutions. Thus, if an EOCP solution results in all α̂ j i taking values in {0, 1}, it is also a solution of the SOCP; otherwise, the mode values must be projected onto {0, 1} [5]. Here, the projection approach is to set the projected mode value to 0 if α̂ j i < 0.5 and 1 otherwise. To obtain continuous control inputs that maintain component coordination given projected mode values, a projected optimal control problem (POCP) is solved with α j i equal to the projected values. This does introduce the need to potentially solve two optimal control problems at each time step. Practically, the embedded and projected optimal control problems are classical problems that are solvable using traditional nonlinear programming.

Distributed Control
A popular approach to distributed control is ADMM [7], which is well known to converge when the component level problems are convex. Recently in [20], restrictions on convexity to achieve convergence have been relaxed for classes of nonconvex problems, opening the method to certain nonlinear control problems. Specifically, ADMM applied to continuous-valued component level optimization problems with nonconvex control cost functions will converge if the conditions in [20] are satisfied; in the formulation used here, λ j i is the dual variable associated with the component connection constraints, Λ j i,k = [λ j i,k+1, . . ., λ j i,k+N j], and ρ is a penalty parameter. The ADMM algorithm for the optimization herein is shown in Algorithm 1; a sketch of the iteration pattern is given below. It uses the primal residual 2-norm r and dual residual 2-norm s to both evaluate convergence and adjust ρ from a default value during iterations in an effort to keep r and s within a factor of 10 of each other [7,28] (the factor of 10 was chosen through numerical experimentation on the problem herein and the suggestion in [7]).
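A runnable illustration of this iteration pattern on a toy consensus problem follows. The local updates, shared-variable update, dual update, residual computation, and residual-balancing adjustment of ρ mirror the structure described above; the toy quadratic objectives and the factor-of-two increase of ρ are assumptions (only the halving branch and the factor-of-10 balancing target appear in the text), and this is not the paper's Algorithm 1 applied to the powertrain problems.

import numpy as np

a = np.array([1.0, 4.0, -2.0])   # hypothetical local targets, one per "component"
n = len(a)

x = np.zeros(n)                  # local (component-level) variables
z = 0.0                          # shared / complicating variable
lam = np.zeros(n)                # dual variables for the constraints x_i - z = 0
rho = 1.0
eps = 1e-6

for _ in range(200):
    # (i) local updates: argmin_x (x - a_i)^2 + lam_i (x - z) + (rho/2)(x - z)^2
    x = (2.0 * a - lam + rho * z) / (2.0 + rho)
    # (ii) shared-variable update: average of x_i + lam_i / rho
    z_old = z
    z = np.mean(x + lam / rho)
    # (iii) dual update for the connection constraints x_i - z = 0
    lam = lam + rho * (x - z)

    r = np.linalg.norm(x - z)                            # primal residual
    s = np.linalg.norm(rho * (z - z_old) * np.ones(n))   # dual residual
    if r < eps and s < eps:
        break
    # residual balancing: keep r and s within a factor of 10 of each other
    if r > 10.0 * s:
        rho *= 2.0               # assumed increase factor
    elif s > 10.0 * r:
        rho /= 2.0               # halving branch as in the algorithm listing

print("consensus value:", z, "(mean of targets:", np.mean(a), ")")

The consensus value converges to the mean of the local targets, the optimum of the coupled problem, which is the behavior the component-level power and thermal problems rely on at a much larger scale.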
Convergence of Algorithm 1 is achieved when the residual norms ‖r‖2 and ‖s‖2 are both less than their respective tolerance values. Between iterations, the penalty parameter is adapted from the residuals: if ‖r‖2 > 10‖s‖2 the penalty is increased, else if ‖s‖2 > 10‖r‖2 it is halved (ρ l+1 ← ρ l /2); the iteration counter l is then incremented and the loop repeats until convergence.

Distributed Switched Optimal Control
The component level power management SOCPs in Section 2 satisfy the sufficiency requirements for the existence of an EOCP solution given in Section 3.1. Further, the component level EOCPs and POCPs satisfy the conditions for convergence of non-convex ADMM in [20] (Theorem 1) as described in Section 3.2. The approach to solving the distributed switched optimal control problem is described in Algorithm 2 (distributed switched optimal control at time step k).

Power Management Task Control
The power management task, denoted with j = 1, includes the coordination of the battery, supercapacitor, EDS, and vehicle to meet driving velocity demands. The battery and EDS are both switched components, thus Algorithm 2 is used to find the control inputs needed to perform the task. The temperatures of the battery, supercapacitor, and EDS are carried forward from k to k + N 1 since the thermal dynamics of the components are much slower than those of power management and it follows that h 1 N 1 < h 2 N 2.

Thermal Management Task Control
The thermal management task, denoted with j = 2, regulates the cooling system, battery, supercapacitor, and EDS thermal dynamics to maximize efficiency while penalizing cooling system use. Algorithm 1 is used to solve the optimal control problem since this task has no switched components. It is assumed that the future operating velocity demand of the vehicle can only be extrapolated into the future from the current and past values, thus the velocity predictions over the longer horizon h 2 N 2 are unreliable. Due to this unreliability and the slow thermal dynamics, the values of P b, W b, P c, W c, P d,m, and ω d are made equal to their mean power management task POCP solution values over a trailing horizon of duration h 1 N th from k, where N th is the number of trailing horizon partitions. Additionally, α b and α d are set to values consistent with the signs of the mean values of P b and P d,m, respectively. This task does not start until a short time after driving is initiated, h 1 N th,del, to have nonzero value data available at the first control problem solution. The optimal T clt value at the end of the first partition is used as the reference for a lower-level controller that updates at h 1 intervals. The reference T clt is taken to vary linearly from the current T clt at k to the optimal value h 2 seconds later. The lower-level controller uses the current power management and temperature data to find u hex for reference tracking.

Control Simulation
The vehicle control is simulated over four different drive profiles: trapezoidal, EPA highway fuel economy test (HWYFET), EPA urban dynamometer driving schedule (UDDS), and new European driving cycle (NEDC). The simulations are performed in MATLAB with optimization carried out using sequential quadratic programming. The power management task is implemented with h 1 = 0.5 s, N 1 = 2, and primal and dual residual tolerances of 0.2 (the tolerance values for power management and thermal management are with respect to Watt-, Celsius-, and rad/s-valued complicating variables). The component cost function penalty weights are chosen (after empirical testing) as q b,P = 0.5, q W c = 1 · 10 5, q d,u = 20, q Υ = 1.25 · 10 4, and q brk = 6.4 · 10 −5.
The reference kinetic energy Υ to track in Equation (49) is obtained by linearly extrapolating from the known current velocity and the desired velocity since perfect knowledge of the drive cycle is not assumed. Thus the energy reference values are where the v k is the currently measured velocity and v re f ,k+1 is the current desired velocity that is delayed to one step ahead. The delay is due to the inability of the vehicle to instantaneously change velocity. This linear extrapolation assumption is meant to approximate a driver but does add a small error to the tracking of reference signals that are non-piecewise linear or have "corners". The thermal management task is performed with h 2 = 10 s, N 2 = 3, N th = 20, N th,del = 20, and r , s = 0.2. The values of the component cost function penalty weights are q b,η = 1000, q c,η = 1000, q d,η = 1000, and q u hex = 1. Similar to the power management task, the weights were empirically determined. The initial coolant temperature is taken as 25 • C and the ambient is 20 • C. Trapezoidal Drive Profile The trapezoidal drive profile consists of a 10 s acceleration to 26.8 m/s, 5 s constant velocity portion, and then a constant deceleration to zero over the final 10 s. The profile is meant to demonstrate the functionality of the control during a 0-to-60 mph acceleration over 10 s, the ability to hold a constant velocity, and severe deceleration to evaluate both frictional and regenerative braking. Figure 3 shows the excellent reference velocity tracking achieved with a mean absolute percentage error (MAPE) of 0.52%. The wheel power and frictional braking power are given in Figure 4. The wheel power rises with velocity during the first 10 s and then remains nearly constant during the constant velocity portion. During the deceleration over the last 10 s, the combined wheel and braking power slow the vehicle where the wheel power is the mechanical power consumed by the EDS to provide the preferred regenerative braking. All of the power to decelerate is not consumed by the EDS over (15, 15.5] s and (16.5, 20.5] s due to limits on the change in battery power and maximum supercapacitor SOC as seen in Figures 7 and 8. After 20.5 s, the entirety of the braking is provided by the EDS since the power that needs to be consumed is not greater than which can be taken by the energy storage systems. The limited use of frictional braking is consistent with the penalty on it in the cost function. The difference between the embedded and projected power values over (16.5, 20.5] s is explained in the context of the EDS operation given next. The EDS electrical and mechanical power are shown in Figure 5 where the EOCP values are P d,e,c and P d,m,c . The embedded and projected values track together fairly well until 15 s, the start of the deceleration, when the mechanical power shows a pronounced difference from (16.5, 20.5] s. This difference is due to the POCP, which is constrained to one mode, having solutions with EDS operating points that produce approximately the same electrical power with less mechanical power than solutions obtained from the EOCP, which is not constrained to specific mode values. Figure 6 shows the embedded and projected motoring (1 − α d ) and generating α d mode selections. Motoring is the optimal choice when accelerating and maintaining constant velocity while generating is chosen when there is excess vehicle kinetic energy available to charge the energy storage systems. 
The number of bang-bang solutions is four out of fifty, thus the projected solution is required 92% of the time. Figures 7 and 8 display the supercapacitor and battery powers and SOCs, respectively, where the battery power is P b,c . The figures show that the supercapacitor is used to supplement the rate limited battery to provide acceleration power. The supercapacitor is also given preference to charge during the constant velocity starting at 11.5 s and until after the start of deceleration at 18.5 s since there is a penalty on the SOC deviation from full and no battery SOC penalty. The battery provides motoring and supercapacitor recharging over (10.5, 15] s of the constant velocity portion. At the start of deceleration, the battery is also being charged. However, at (16, 16.5] s the battery discharges slightly to add additional charge to the supercapacitor. Further, the low battery power values between (15, 16.5] indicate the controller doesn't have a strong preference for discharging or charging. The remainder of the profile after 16.5 s, the battery is being charged. Note that the supercapacitor SOC begins to decrease again at 20.5 s. This is due to the battery rate limits because the battery charge power can't be reduced enough to match the regenerative power available, thus supercapacitor power is needed to make up the difference. The battery mode selection is shown in Figure 9 where (1 − α b ) is the discharging mode and α b is the charging mode; projection is required five times. The charging mode is on during deceleration and start of acceleration at (0.5, 1] s. The charging mode selection at (0.5, 1] s is attributable to the limited need for battery propelling power since the supercapacitor is near full charge and able to both propel the vehicle and provide a small battery charge power without an actionable change in the penalty on supercapacitor terminal SOC. The thermal management control problem is solved at 10 s and 20 s. Figure 10 shows the responses of the battery, EDS, supercapacitor, coolant, and changes in the desired coolant temperature. After the first solution, the temperatures of the battery, EDS, and supercapacitor are stabilized with the EDS trend being slightly downward. Figure 11 displays the efficiencies achieved. The supercapacitor efficiency is zero between 18.5 and 20.5 s since it is at approximately full charge and effectively off. The EDS efficiency decreases after 20 s, the time of the second thermal management control solution, since (i) the efficiency decreases near the maximum power line at low angular velocity as seen in Figure 2 and (ii) the optimal coolant temperature from 20 s onward is based upon power data that includes not only values obtained during deceleration but also the [10,15] s constant velocity portion of the drive profile. Finally, Figure 12 shows the cooling power control input from the local cooling system controller that follows the setpoint established by the cooling system control problem solution. The control is active during the commanded decrease in coolant temperature on (10,20] s and then active once from 20 s to prevent the coolant temperature from rising too fast. The penalty on coolant control input means that the cooling system will not maximize efficiency at all costs. The average efficiency of the battery is 96.0%, supercapacitor is 89.6%, and EDS is 94.1%. 
Regulatory Drive Profiles The power management distributed switched optimal control is compared to the results of a centralized approach using three regulatory drive profiles: EPA highway fuel economy test (HWYFET), EPA urban dynamometer driving schedule (UDDS), and new European driving cycle (NEDC). The centralized control is similar to that in [3], however the modes are redefined to reflect the powertrain operation herein. The centralized problem requires the definition of four modes of operation since system level valid power flows must be considered: EDS motoring/battery discharging, EDS motoring/battery charging, EDS generating/battery discharging, and EDS generating/battery charging. To account for component temperature in the centralized problem, the values from the distributed control problem solution are applied. Table 1 compares the distributed and centralized control simulation results. The distributed control solution time is the average of the longest embedded plus projected component solution times obtained at each time step. Over the drive profiles, the distributed solution is found between 3.3 and 6.2 times faster than the centralized problem. However, the distributed control MPGe energy economy is −3.4%, −3.2%, and −0.0061% over the HWYFET, UDDS, and NEDC, respectively, compared to the centralized control. The velocity tracking error MAPE is well below 1% for each control. These results show that the distributed control achieves significantly faster solution than a centralized approach with the trade-off of a small energy economy penalty. It is possible that further tuning of Algorithm 1 can decrease this energy economy penalty, however current results are acceptable. Conclusions This work developed distributed power and thermal management for a battery-supercapacitor electric vehicle similar to a Tesla Model S to assess solution time and performance compared to centralized control. Previous control-oriented power flow models for powertrain components were expanded to include temperature effects and a cooling system model was developed. Further, these models were incorporated into component level power and thermal management optimal control problems with the former including electric drive system and battery problems with discrete-valued mode selection switches. To simultaneously solve the component level problems, some of which include mode switches, to achieve system power management in a distributed manner, an algorithm based upon the alternating direction method of multipliers was set forth. The algorithm solves (i) the embedded problem, a continuous-valued relaxation of the original switched mode problem, and then (ii) a projected problem with modes set equal to the projection of the embedded problem solution mode values onto a discrete set. System thermal management, which has no mode switches, was solved in a similar distributed manner that did not include the projection step. Control simulations over several drive profiles showed that the distributed solution approach led to successful powertrain power management with low velocity tracking error, appropriate mode switching, and satisfactory energy storage state of charge control. Further, the thermal management resulted in reasonable temperature maintenance and high component operational efficiency. Also, distributed power management control solutions for the regulatory profiles were compared to those obtained using a centralized solution approach. 
The distributed control resulted in at least a 3.3 times reduction in solution time with at most a 3.4% reduction in energy economy compared to the centralized control. This confirms that a distributed solution approach can lead to lower switched optimal control problem solution times with little penalty. The small energy economy penalty may be overcome with further control tuning. Future work includes additional tuning of the distributed control to better match the centralized control outputs, incorporating fault detection and mitigation into a component to evaluate fault effects on overall system performance, investigating the effects of deletion and addition of components on system performance and choice of component control penalty weight tuning, comparison of the control herein to additional alternative control techniques, and implementation on down-scaled hardware to study the effects of communication delays and message drops. Funding: This research received no external funding. Conflicts of Interest: The author declares no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: Appendix A. Battery Parameters The battery data is taken from [22,23]. Table A1 lists the coefficients of the efficiency parameters obtained from fitting values calculated from temperature dependent battery data. Further, k 0 = −1, k 1 = 1, ∆ P b = 15 kW/s, and W max b = 214.57 MJ. With regard to thermal parameters, m b = 268.8 kg, C b = 795 J/(kg· • C), and h A,b = 536.43 W/ • C is the product of the cylindrical side area of the cells and heat transfer coefficient estimated as that of a cylinder in circulating coolant. The battery temperature range is [0, 41] • C and cooling power range is P b,clt ∈ [−64.37, 65.98] kW, which is determined from the minimum and maximum battery and coolant temperatures and heat transfer expressions. Appendix B. Supercapacitor Parameters The supercapacitor parallel resistor value is R p = 89, 732 Ω and the capacitance and series resistance at 25 • C are C 25 • C = 8.63 F and R s,25 • C = 80.62 mΩ, respectively [3]. The capacitance and series resistor are taken to vary with temperature similar to [25]. The parameters for the capacitance as a function of temperature are c C,1 = 7.21 · 10 −5• C −1 and c C,0 = 0.998. The series resistor temperature dependence parameters are c R s ,0 = 0.966, c R s ,1 = 0.111, and c R s ,2 = 21.3 • C. The maximum energy is W max c = 606.8kJ. The supercapacitor thermal parameters are based off data in [24]. The supercapacitor has a thermal rating of [−40, 65] • C, thermal capacitance of m c C c =41,700 J/ • C, and h A,c = 97.6 W/ • C, which is the product of the cylindrical side area of the cells and the heat transfer coefficient estimated as that of a cylinder in circulating coolant. The cooling power range is P c,clt ∈ [−7.81, 6.34] kW, which is determined from the minimum and maximum supercapacitor and coolant temperatures and heat transfer expressions. Appendix C. Electric Drive System Parameters The EDS maximum mechanical power shown in Figure 2 is mildly extended at zero speed to a value of 1 kW to make vehicle movement possible from rest and is modeled to have continuous first derivatives as in [3]: The motor efficiency coefficient values at 25 • C are c d,1,25 • C = 5.08 · 10 −2 and c d,2,25 • C = 26.9 with rated speed of ω d,r = 5000π/30 rad/s. The range of electrical power values is found using Equations (30), (32), (33) and (A1). Also, η d,inv = 0.95, η dc = 1, and η f d = 0.98. 
The EDS operating temperature range used here is [0, 40] • C. Additional EDS thermal parameters include the thermal mass m d = 158.8 kg; the specific heat capacity 430 J/(kg· • C), a generic EDS value [23]; and h A,d = 2.43 · 10 3 W/ • C, which is scaled from data in [29]. The coolant power P d,clt ∈ [−97.3, 97.3] kW is determined from the minimum and maximum EDS and coolant temperatures and heat transfer expressions. Appendix D. Vehicle Parameters The Tesla Model S-like vehicle parameters in [3] are duplicated here: the frontal area A f r = 2.35 m 2 is obtained from a dimensioned frontal view, the drag coefficient C d = 0.24, rolling resistance C rr = 0.0092, wheel radius r whl = 0.345 m, gear ratio R f d = 9.73, and the total mass of the vehicle m v = 2184 kg includes two average passengers each of 79 kg. The maximum braking power is P max f = 250 kW. Appendix E. Cooling System The mass of the coolant, m clt = 14.0 kg, is estimated from the coolant volume of the Chevrolet Bolt and the density of 50% ethylene glycol and 50% water mix at 30 • C. Coolant specific heat is C clt = 3.47 · 10 3 J/(kg· • C). The value of P max hex is 85 kW, which is taken as 75% of the maximum possible cooling at an ambient temperature of 20 • C.
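For completeness, a hedged numerical sketch of the coolant energy balance using the Appendix E values is given below. Neither the exact P hex expression nor the sign conventions of the coolant balance are reproduced in the text, so the proportional-to-temperature-differential form and the forward-Euler update are assumptions consistent with the stated bases (heat rejection depends on the coolant-ambient differential and is modulated by u hex up to P max hex).

M_CLT = 14.0       # kg, coolant mass (Appendix E)
C_CLT = 3.47e3     # J/(kg*degC), coolant specific heat (Appendix E)
P_HEX_MAX = 85e3   # W, maximum heat rejection (Appendix E)
T_CLT_MAX = 40.0   # degC, maximum allowable coolant temperature
T_AMB = 20.0       # degC, ambient temperature used in the simulations


def p_hex(u_hex, T_clt, T_amb=T_AMB):
    # Assumed heat-exchanger power: proportional to the coolant-ambient
    # temperature differential, scaled by the control input u_hex in [0, 1].
    frac = max(T_clt - T_amb, 0.0) / (T_CLT_MAX - T_amb)
    return u_hex * P_HEX_MAX * min(frac, 1.0)


def coolant_temp_step(T_clt, P_b_clt, P_c_clt, P_d_clt, u_hex, dt=0.5):
    # Forward-Euler update of the coolant temperature: heat received from
    # the battery, supercapacitor, and EDS minus heat rejected to ambient.
    q_net = P_b_clt + P_c_clt + P_d_clt - p_hex(u_hex, T_clt)
    return T_clt + dt * q_net / (M_CLT * C_CLT)


# Example: one 0.5 s step at 25 degC with modest component heat loads
print(coolant_temp_step(25.0, 1.5e3, 0.2e3, 4.0e3, u_hex=0.3))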
2020-07-09T09:09:48.778Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "837426c5c7dbeec4c07c940f017ca11a74a147b4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/13/13/3364/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "90ce220fbb1271cc7574c2e7c9cfa8451101e9e2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
235335881
pes2o/s2orc
v3-fos-license
The cell biology of synapse formation Thomas Südhof discusses the cell-biological principles underlying the assembly of specific synaptic connections with defined properties that control neural circuits. . Synapses are communication nodes that connect neurons into circuits. (A) Electron micrograph of a human synapse with two synaptic junctions to illustrate the canonical features of all synapses: An intercellular junction in which a presynaptic varicosity that is filled with synaptic vesicles contacts a postsynaptic dendrite that contains multiple trafficking organelles as well as ribosomes (image courtesy of Dr. Christopher Patzke). Red arrows indicate synaptic junctions. Most neurons form thousands of input and output synapses. (B) Schematic view of a cortical microcircuit in which two pyramidal neurons both directly excite a postsynaptic pyramidal neuron and indirectly inhibit it via an interneuron. If the presynaptic neurons fire in bursts and trains, as is commonly observed in brain, the postsynaptic pyramidal neuron will exhibit differential increasing or decreasing responses depending on whether the various excitatory and inhibitory synapses are facilitating or depressing. (C) Flowchart of the lifecycle of a synapse. After neurons are born, migrate to their appropriate positions, and extend dendrites and axons, neurons form synapses. Synapses initiate as nascent contacts that mature into functional but plastic synaptic connections and are eliminated under control of unknown signals. Synapse turnover rates vary, but many synapses are continuously renewed. (D) Schematic of nascent synapses (left), mature synapses (center), and synapses being eliminated (right). In nascent synapses, transneuronal interactions mediated by SAMs such as latrophilins are proposed to initiate the intracellular signaling cascades that organize synaptic specializations. Subsequent synapse maturation and shaping of synapse properties (center) is controlled by a different set of SAMs such as neurexins. During synapse elimination, SAM interactions weaken, which may induce separation of synaptic junctions and withdrawal of synaptic processes. (E) Schematic of how SAMs organize synapse formation and synapse elimination. CASK, calcium/calmodulin dependent serine protein kinase; Cblns, cerebellins; GluD, δ-type glutamate receptor; Lphns, latrophilins; Nlgns, neuroligins. The astounding diversity of synapses correlates with differences in protein composition, creating a universe of synapses sometimes referred to as the "synaptome" (Nusser, 2018;Grant and Fransén, 2020). In brain, nearly all synapses are formed by axons en passant, as these axons cruise through the brain's gray matter (which incidentally makes the term "nerve terminal" as misleading as the term "circuit": presynaptic terminals are not at the end of axons, nor are circuits necessarily circular). Generally, axons form thousands of synapses that are often arranged like pearls on a string, with approximately one presynaptic specialization per 2-µm axon length (Takács et al., 2018). An axon can establish single synapses with many postsynaptic neurons or multiple synapses with a few postsynaptic neurons. Generation of multiple connections by a single presynaptic neuron onto a postsynaptic cell decreases transmission failures but limits the potential for synaptic plasticity. 
All presynaptic specializations secrete neurotransmitters via principally the same release machinery, whereas postsynaptic specializations sense neurotransmitters via diverse receptor machineries (Fig. 2). The canonical presynaptic release machinery is constructed by SNARE and Sec1/Munc18-like proteins that mediate membrane fusion, synaptotagmins and their complexin cofactors that enable Ca 2+ -triggering of fusion, and Rab3-interacting molecules (RIMs), RIM binding proteins, and Munc13s that build the active zone scaffold, tether synaptic vesicles, and recruit Ca 2+ channels to neurotransmitter release sites (Südhof, 2012(Südhof, , 2013Brunger et al., 2018;Emperador-Melero and Kaeser, 2020). This canonical presynaptic release machinery is diversified by expression of different isoforms of its various protein components, but the underlying principles are always the same, independent of neurotransmitter type. Even synapses with unusual presynaptic specializations, such as ribbon synapses or neuromuscular junctions, use the same canonical release machinery. Only one feature differentiates presynaptic terminals: the transporter proteins that fill synaptic vesicles with a neurotransmitter and associated enzymes that synthesize neurotransmitters in the first place (not needed for glutamate and glycine as general cytoplasmic components; see references above). Postsynaptic specializations, in contrast, are specific for particular neurotransmitters and their receptors. Almost no components are shared between different types of postsynaptic specializations (Fig. 2). Four neurotransmitter receptor gene families exist: tetrameric glutamate receptors (N-methyl-Daspartate [NMDA] receptors [NMDARs], α-amino-3-hydroxyl-5methyl-4-isoxazole-propionate [AMPA] receptors [AMPARs], and kainate receptors), pentameric cys-loop receptors (GABA A receptors, glycine receptors, nicotinic acetylcholine receptors, and ionotropic serotonin receptors), trimeric P2X receptors (ATP receptors), and metabotropic G protein-coupled receptors (GPCRs). Individual synapses never appear to contain more than one type of receptor (see discussion below). Most synapses (>98%) use tetrameric glutamate or pentameric cys-loop receptors (Fig. 2). Synapses using ATP neurotransmitters and P2X receptors are extremely rare. Although GPCRs often surround synaptic junctions, few GPCRs are present within postsynaptic specializations. For example, mGluR6 glutamate receptors represent the primary glutamate sensors of some retinal photoreceptor synapses but no other synapses (Snellman et al., 2008;Martemyanov and Sampath, 2017). Few postsynaptic proteins are currently (as of 2021) known to be shared by synapses containing tetrameric glutamate and pentameric cys-loop receptors (Fig. 2). An exception is Neuroligin-3, a synaptic adhesion molecule (SAM) that binds to presynaptic neurexins and functions in both excitatory and inhibitory synapses (Budreck and Scheiffele, 2007;Zhang et al., 2015a). As a rule, therefore, synapses are composed of canonical presynaptic and diverse postsynaptic molecular machineries. As will be discussed, this gestalt has major implications for synapse formation because it suggests that postsynaptic specializations develop in response to a particular neurotransmitter signal. Dynamics of synapse formation and elimination During development, newborn neurons migrate to specific positions in brain and extend axons and dendrites before engaging in synapse formation ( Fig. 1 C). 
In humans, an ∼2-yr postnatal period of exuberant synapse formation is followed by an ∼20-yr period of net synapse elimination, leading to a loss of >40% of all synapses (Huttenlocher et. al., 1982;Bourgeois and Rakic, 1993;Petanjek et al., 2011). An activity-dependent process of synapse elimination has been described for several synapses, such as the neuromuscular junction (Sanes and Lichtman, 1999), retinal inputs into the lateral geniculate nucleus (Chen and Regehr, 2000), and cerebellar climbing-fiber synapses (Kano and Hashimoto, 2009). To what extent physiological synapse elimination is generally activity dependent, however, remains unclear. Synapses are not only made in excess and eliminated developmentally, but also turn over continuously in mature brain. Live imaging showed that ∼40% of dendritic spines on pyramidal neurons in the sensory and motor cortex are replaced every 5 d, while ∼60% of dendritic spines are stable (Attardo et al., 2015;Fig. 3). Although earlier studies observed lower rates of spine turnover in cortex (Zuo et al., 2005;Holtmaat et al., 2005), other more recent studies also detected high turnover rates (Zhou et al., 2020). Stunningly, in the hippocampus, nearly 100% of spines turn over every 2 wk (Attardo et al., 2015;Pfeiffer et al., 2018;Fig. 3). These studies monitored spines instead of synapses, but in brain, all spines are associated with synapses (usually excitatory synapses), suggesting that these astounding rates of spine replacement correspond to synapse turnover in a mature brain. Therefore, while most neurons and their long-range axonal and dendritic structures are stable in mature brain, their synaptic connections are often not. In regions such as the hippocampus, the half-life of synaptic connections may be shorter than, for example, that of late long-term potentiation (LTP; Frey and Morris, 1997). Such a high rate of synapse turnover agrees well with the continued expression of proteins implicated in synapse formation throughout life, as documented in single-cell RNA-sequencing studies (Saunders et al., 2018;Zeisel et al., 2018, Tabula Muris Consortium, 2018Chen et al., 2020). What molecular mechanisms sustain the rapid life cycle of synapses? Clearly, synapse formation and elimination go hand in hand. Synapse elimination may be initiated by disengagement of SAMs and discontinuation of SAM signaling ( Fig. 1 D). For example, deletion of cerebellins in the forebrain has no effect on excitatory synapse formation but causes a delayed loss of synapses in some brain areas (Seigneur and Südhof, 2018). This observation is consistent with a signaling role for cerebellins in shaping synapses, a role whose absence induces synapse elimination. It seems likely that microglia play a major role in synapse elimination (Paolicelli et al., 2011) and that the interaction of neural CD47 with microglial signal-regulatory protein-α drives synapse elimination (Ding et al., 2021). Moreover, synapses may be "opsonized" via the classic complement pathway during synapse elimination (Stevens et al., 2007). Polymorphisms in the complement factor C4 gene were identified as a genetic risk factor for schizophrenia (Sekar et al., 2016), suggesting that schizophrenia could involve impairments in complementmediated synapse elimination (Druart and Le Magueresse, 2019;Presumey et al., 2017). However, the observed effect sizes are small, and no other complement factor has been linked to schizophrenia. 
More importantly, mice without a classic complement pathway exhibit fairly normal synapse numbers (Chu et al., 2010), and humans that lack the complement factor C3 (which is central to all complement activities) appear to suffer from severe immune disorders, but not from neurological impairments (Botto et al., 1992; Singer et al., 1994; Matsuyama et al., 2001). At present, the precise roles of complement, microglia and SAMs in synapse elimination are thus unclear.

The neurotransmitter type of a synapse is likely determined by the neurotransmitter that is transported into synaptic vesicles. Mammals encode five classes of vesicular neurotransmitter transporters (Omote et al., 2016). Strikingly, if a neuron uses multiple neurotransmitters that are transported by different vesicular transporters, these transporters are invariably sorted into distinct synaptic vesicles (Takács et al., 2018; Granger et al., 2020; Silm et al., 2019). As a result, the corresponding neurotransmitters are stored in separate synaptic vesicles and released independently by vesicle exocytosis from the same neuron. Thus, the vesicular transporter type confers an identity to synaptic vesicles. By the same rule, cotransmitters that use the same vesicular transporters (e.g., GABA and glycine, or adrenaline and noradrenaline) are stored in the same vesicles. The only exception to this rule appears to be ATP, which is co-stored with acetylcholine and biogenic amines in many vesicles (Whittaker, 1984). Moreover, vesicles that contain different vesicular transporters, and are thus filled with different neurotransmitters, are sorted to different synaptic junctions with separate active zones in the same neurons (Root et al., 2014, 2018; Moore et al., 2015). As a result, even though a neuron may coexpress two neurotransmitters, these are released at different output synapses.

Figure 2. Synapses are composed of presynaptic specializations containing a canonical neurotransmitter release machinery and postsynaptic specializations constructed of diverse receptors and postsynaptic densities. The molecular composition of the presynaptic specialization is largely independent of the neurotransmitter type, with similar proteins mediating the localized and fast Ca2+-dependent fusion of synaptic vesicles (Südhof, 2013). In contrast, postsynaptic specializations are diverse, with little overlap in their molecular components. Four types of receptors are associated with distinct postsynaptic molecular complexes: glutamate receptors (center) account for ∼80% of synapses, pentameric cys-loop receptors (GABA A, glycine, acetylcholine, and serotonin, left) for ∼20% of synapses, and the remaining two receptor classes (metabotropic GPCRs and P2X receptors, right) for <1% of synapses (note that metabotropic GPCRs and P2X receptors are abundantly present outside of synapses). Whereas the only difference among various presynaptic specializations lies in the enzymes and vesicular transporters specific for particular neurotransmitters (summarized on the top right), few components of different postsynaptic specializations are currently known to be shared, including neuroligin-3, a SAM that binds to presynaptic neurexins. AcCh, acetylcholine; GluA, AMPA-type glutamate receptor; GluD, δ-type glutamate receptor; GluK, kainate-type glutamate receptor; GluN, NMDA-type glutamate receptor; Nlgn, neuroligin; Rec., receptor; STED, stimulated emission depletion; Syts, synaptotagmins.

Figure 3 legend (fragment): ... Attardo et al. (2015). Rec., receptor.
For example, a neuron using both glutamate and GABA forms separate glutamatergic or GABAergic output synapses that contain only either postsynaptic tetrameric glutamate or pentameric GABA A receptors. Thus, stunningly, neurons coexpressing GABA and glutamate, or acetylcholine and glutamate, form separate synapses with distinct neurotransmitters. The selective sorting of different vesicular transporters into separate vesicles that are then targeted to distinct synaptic junctions was clearly shown for acetylcholine and GABA. These neurotransmitters are co-released in the hippocampus at synapses formed by basal forebrain cholinergic neurons (Takács et al., 2018) or in the cortex at synapses formed by vasoactive intestinal peptide-positive interneurons (Granger et al., 2020). Similarly, midbrain neurons use dopamine and glutamate as cotransmitters that are packaged into distinct vesicles whose exocytosis is differentially regulated (Zhang et al., 2015b;Silm et al., 2019). Moreover, some central neurons use GABA and glutamate as cotransmitters that are packaged into separate vesicles and targeted to distinct symmetric and asymmetric synapses (Root et al., 2014(Root et al., , 2018). It appears that at least in some instances, a presynaptic neuron can even selectively form synapses with distinct neurotransmitters onto different postsynaptic targets. This has been beautifully described for spinal cord motoneurons: acetylcholine is their only transmitter at the neuromuscular junction, acetylcholine and glutamate are cotransmitters at motoneuron synapses formed on Renshaw-type interneurons, and glutamate is the only transmitter for recurrent excitation between motoneurons (Moore et al., 2015;Lamotte d'Incamps et al., 2017;Bhumbra and Beato, 2018). In general, the observation that an individual presynaptic neuron releasing two neurotransmitters forms distinct synapses with the correct postsynaptic receptors suggests that the presynaptic neurotransmitter instructs the postsynaptic specialization. However, the example of the motoneuron indicates that the postsynaptic neuron can also determine what neurotransmitters will be used by the presynaptic neuron. As discussed in the next section, the underlying mechanisms are, however, unclear. In addition to the use of cotransmitters that are segregated into different vesicles and secreted at distinct synaptic junctions, some neurons switch transmitters in an activity-dependent manner (Spitzer, 2017). For example, mice acquire improved motor skills after 1 wk of voluntary wheel running, which causes a switch from acetylcholine to GABA in a subset of neurons in the caudal pedunculopontine nucleus (Li and Spitzer, 2020). This reversible switch appears to change the regulation of the substantia nigra, ventral tegmental area, and thalamus by the pedunculopontine nucleus. Since different neurotransmitters use synaptic junctions with distinct types of postsynaptic specializations, the neurotransmitter switch involves formation of new synapses. Here again, the biology suggests that the presynaptic neuron instructs postsynaptic synapse formation. Thus, we face a cell-biological challenge: How does a neurotransmitter tell a postsynaptic neuron what type of specialization to assemble? As discussed below, trans-synaptic signaling mediated by SAMs likely plays a central role, although at present our understanding of the underlying processes is limited. 
Molecular logic of synapse formation: SAMs I posit that SAMs (also called "synaptic organizing molecules") are principal agents in organizing synaptic junctions (Jang et al., 2017;Südhof, 2018;Yuzaki, 2018;Kim et al., 2021). By engaging trans-cellular interactions, SAMs are thought to nucleate nascent synapses, drive synapse maturation, control the properties of synapses, and regulate synapse elimination ( Fig. 1, C and D). SAMs perform these actions by signaling in both directions (preto postsynaptic and post-to presynaptic). No single "master" SAM likely controls everything; instead, an orchestra of SAMs mediates assembly of diverse synaptic junctions. Many candidate SAMs have been described (Fig. 4). Consistent with the asymmetric organization of synaptic junctions, SAMs generally form heterophilic complexes. As described above, the same basic release machinery governs presynaptic functions independently of neurotransmitter type, whereas diverse postsynaptic receptor machineries mediate postsynaptic functions in excitatory and inhibitory synapses (Fig. 2). As a result, presynaptic SAMs are mostly "hub" molecules that are present in excitatory and inhibitory synapses, like neurexins (reviewed in Südhof, 2017) and leukocyte antigen-related (LAR)type phosphotyrosine phosphatase receptors (PTPRs; reviewed in Takahashi and Craig, 2013;Han et al., 2016;Fukai and Yoshida, 2020;Fig. 4). In contrast, postsynaptic SAMs are more diverse as ligands for these hub molecules and are often specific for excitatory or inhibitory synapses. Broadly, SAMs perform two overlapping functions: organizing the assembly of synapses ("making synapses") and specifying synapse properties ("shaping synapses"). More SAMs shaping synapses are known than SAMs making synapses, possibly because diverse synapse properties need to be controlled by multifarious signals. The example of SPARCL1 and neuroligins illustrates the functional differentiation between SAMs that make or shape synapses (Fig. 5). SPARCL1 (a.k.a. Hevin) boosts the excitatory synapse density and the amplitude of AMPAR-mediated synaptic responses without affecting inhibitory synapses. SPARCL1 thus stimulates the making of new functional excitatory synapses (Gan and Südhof, 2020). In addition, SPARCL1 dramatically enhances NMDAR-mediated synaptic responses, suggesting that the new synapses are functionally different (i.e., contain more NMDARs). Thus, SPARCL1 acts both in the making and the shaping of synapses (Fig. 5). Neuroligins, conversely, do not influence synapse numbers but change the properties of synapses, i.e., shape synapses. Among others, neuroligin-2 deletions greatly decrease the synaptic strength at inhibitory synapses (which are untouched by SPARCL1), whereas neuroligin-1 deletions suppress NMDAR-mediated synaptic responses at excitatory synapses more than AMPAR-mediated responses. Although neuroligins and their presynaptic neurexin receptors were suggested to bind to SPARCL1 (Singh et al., 2016), SPARCL1 and neuroligins perform distinct and independent functions, suggesting that they do not physiologically interact. The phase diagrams of Fig. 4, F and G, illustrate these functional differences and interdependencies in a 2D representation, visualizing the making and shaping of synapses. Similar observations apply to many other SAMs. Elucidating the candidacy and functions of a SAM in making and shaping synapses is not a trivial task. Three basic challenges stand out. First, simply localizing a SAM to the synapse is not straightforward. 
Determining whether a protein is truly synaptic is arguably the most important need, but it requires specific antibodies and superresolution microscopy and/or immuno-EM. Second, identifying valid protein interactions is difficult. Common approaches, such as coimmunoprecipitations and affinity measurements by surface plasmon resonance, are inconclusive. As a general rule, without the demonstration of a stable complex (for example by size exclusion chromatography coupled with multiangle light scattering or via a crystal structure) or without matching phenotypes during functional manipulations, it is difficult to distinguish sticky proteins from real ligands. Third, identifying the synaptic functions of a SAM is challenging. Many "functional" manipulations, such as RNAi or overexpression, cause indirect nonspecific changes. Synaptic functions have to be analyzed at defined synaptic connections, requiring sophisticated electrophysiology and imaging approaches. Many SAMs, such as neurexins, perform distinct functions in different synapses. Most SAMs (except for neurexins and their multifarious ligands) have additional essential developmental roles besides shaping synapses. It is as though a concert musician was responsible first for ushering in the audience and then for playing in the subsequent performance not just one, but multiple instruments. Given these challenges, little is known overall at present about how SAMs orchestrate synapse formation. On top of these challenges, even the most rigorous experiments can provide ambiguous results. For example, neurexin deletions generally alter synaptic transmission without changing synapse numbers, but a discrete loss of some synapses is detected in neurexindeficient parvalbumin-positive cortical interneurons (Chen et al., 2017) and in CA3 region neurons in mice expressing mutant neurexin-1 that lacks heparan sulfate modifications . Deletion of the cerebellin neurexin ligands, conversely, causes an ∼50% decrease in synapse numbers in cerebellum (Hirai et al., 2005) but only a scattered loss of synapses in other brain regions (Seigneur and Südhof, 2018). Does this mean that neurexin-neuroligin and neurexin-cerebellin interactions are "making" a small subset of synapses, or is this synapse loss secondary to the cessation of a SAM signal in the affected synapses? In support of the second hypothesis, synapses are initially formed normally by cerebellin-deficient neurons but are lost secondarily (Seigneur and Südhof, 2018). To consider these questions more deeply, next I further discuss the role of SAMs in making and shaping synapses in molecular terms. SAMs and synaptic specificity Synapse formation is tightly regulated. Not only are the neurons forming synapses specific, but also the subcellular locations and properties of the resulting synapses. For example, cerebellar parallel-fiber synapses always form on the distal dendrites of Purkinje cells, whereas climbing-fiber synapses always form on the proximal dendrites of Purkinje cells, with the former invariably exhibiting short-term synaptic facilitation and the latter short-term synaptic depression (Galliano and De Zeeuw, 2014). How does synapse formation produce the exquisite specificity of synaptic connections in a neural circuit? 
Two sequential processes are traditionally thought to establish synapse specificity: Axon guidance positions an axon adjacent to a target neuron, and partner choice then determines which neurons form synapses at what location (e.g., distal or proximal dendrite, soma, or axon initial segment; Fig. 1, C and D). However, a third process also needs to be considered for synapse specificity: shaping of the properties of synapses, which are as important for the overall performance of a neural circuit as the number and location of the synapses. These three processes collaborate to achieve the exquisite specificity of synapse formation (Südhof, 2018; Sanes and Zipursky, 2020; Chowdhury et al., 2021). The mechanisms of axon guidance are well studied, but how axon guidance is coupled to synapse formation and which SAMs guide the construction of synapses (i.e., make a synapse) is largely unclear. Pioneering studies revealed that nonsynaptic adhesion molecules guide axons to a target cell once the axons are within the vicinity of the target region. For example, in C. elegans, Syg1 and Syg2, a pair of Ig-domain proteins, guide axons to their synaptic targets (Shen et al., 2004). Similarly, in the mouse retina, cadherins specify target areas for synapse formation (Duan et al., 2014).

Figure 4 legend (fragment): ...phosphatases (PTPRD, PTPRF, and PTPRS) are hub molecules that interact with a series of postsynaptic SAM families and also bind to each other in cis (Han et al., 2020). Most candidate SAMs perform additional functions outside of synapses. Lines and arrows indicate interactions, with cis-interactions shown as dotted lines and less validated trans-interactions shown as dashed lines. DCC, deleted in colorectal cancer; EphB, Ephrin B; FLRT, fibronectin leucine-rich transmembrane; LRRTM, leucine-rich repeat transmembrane; Rec., receptor; RTN, reticulon; SALMs, synaptic adhesion-like molecules; SliTrks, Slit- and Trk-like proteins; SynCAM, synaptic cell adhesion molecule; TrkC, tropomyosin receptor kinase C.

Figure 5 legend (fragment): ...show that SPARCL1 increases excitatory but not inhibitory synapse numbers, whereas deletion of all neuroligins has no effect on synapse numbers and does not impair the SPARCL1-induced increase in synapse numbers. The electrophysiology results (C-E) show that SPARCL1 increases, whereas the pan-neuroligin deletion decreases, NMDAR-mediated synaptic strength significantly more than AMPAR-mediated synaptic strength. Although these two manipulations act similarly but in opposite directions, they do not depend on each other (D). Only the neuroligin but not the SPARCL1 manipulation affects inhibitory synapses (E). Data are adapted from Gan and Südhof (2020). (F and G) Phase diagram of the effect of SPARCL1, neuroligins, and latrophilin-3 manipulations on excitatory (F) and inhibitory (G) synapses, as analyzed in cultured hippocampal neurons. Values were computed from Gan and Südhof (2020) and Sando et al. (2019). Numerical data in B, D, and E are means ± SEM. Statistical significance was assessed by two-way ANOVA followed by post hoc corrections. Ctrl, control; EPSC, excitatory postsynaptic current; IPSC, inhibitory postsynaptic current; KO, knockout. In B, D, and E, asterisks indicate statistical significance as calculated by two-way ANOVA (*, P < 0.05; **, P < 0.01; ***, P < 0.001).
After axon guidance, synapse formation is likely initiated when SAMs instruct assembly of nascent synapses (Fig. 1 D). A major question is whether the establishment of a synapse between particular neurons at a specific location can be mechanistically divided into a "partner choice" decision and synapse formation as such, or whether partner choice and synapse formation are mechanistically the same (Sanes and Zipursky, 2020;Südhof, 2018). As a third alternative, it is possible that synapse formation operates nonspecifically, and that nascent synapses between noncognate neurons are quickly eliminated, thereby creating specificity via a "divorce" mechanism ( Fig. 1 C). Thus, three hypotheses could account for synapse specificity: A sequential partner choice → synapse establishment process, a "package deal" in which a combination of SAMs mediates both partner choice and synapse establishment (partner choice = synapse establishment), and a sequential synapse establishment → selective elimination process. For each of these hypotheses, the shaping of synapse properties could be partly inherent and partly add-ons via additional SAMs. In considering these three hypotheses, a key observation is that synapse formation is highly promiscuous, at least under nonphysiological conditions. In heterologous synapse formation assays, expression of a SAM in a nonneuronal cell induces formation of pre-or postsynaptic specializations in cocultured neurons. Here, nearly any SAM induces heterologous synapse formation (reviewed in Südhof, 2018). Even the neuronal pentraxin receptor (a membrane-tethered pentraxin) stimulates formation of postsynaptic specializations in cocultured neurons, presumably by engaging AMPA-type glutamate receptors . The only specificity of heterologous synapse formation is that a given molecule induces either only pre-or postsynaptic specializations. Nearly all molecules that induce synapses in heterologous synapse formation assays are not essential for synapse formation as such when tested genetically, suggesting that in neurons, synapse formation can be induced by multitudinous signals (Jiang et al., 2021). As another demonstration of the promiscuity of synapse formation under nonphysiological conditions, neurons readily form abundant synapses with themselves ("autapses") when cultured in isolation on an island of glia (Bekkers and Stevens, 1991). The nonphysiological promiscuity of synapse formation seems to support the notion that partner choice precedes the making of a synapse, or that synapses are formed promiscuously and noncognate synapses are then rapidly degraded. However, the package deal hypothesis positing that partner choice and synapse establishment are mediated by the activities of the same SAMs is also consistent with the nonphysiological promiscuity of synapse formation. Specifically, according to that hypothesis, neurons choose synaptic interaction partners in a hierarchical manner based on a graded affinity among SAMs. Synapse formation only becomes promiscuous when high-affinity partners are lacking. Thus, the nonphysiological promiscuity of synapse formation does not tell us which hypothesis is correct. What, then, do known SAM functions tell us about the partner choice and initial establishment of synapses? Many candidate SAMs were suggested to initiate synapse formation and/or encode synapse specificity, but few have endured the test of time. 
At present, the only SAMs that have consistently been shown to be required for establishing synapses are postsynaptic adhesion-GPCRs called latrophilins and brain angiogenesis inhibitors (BAIs; note that the name does not correspond to a known function). Like other adhesion-GPCRs, these proteins contain large extracellular domains mediating interactions with multiple trans-synaptic ligands. Deletions of latrophilin or BAI isoforms produce a severe decrease in synapse formation in specific subsets of synapses. Bai3 deletions in Purkinje cells selectively block climbing-fiber but not parallel-fiber synapse formation (Kakegawa et al., 2015; Sigoillot et al., 2015), whereas Bai3 deletions in olfactory bulb granule cells impair synapse formation of accessory bulb inputs but not of mitral cell inputs. Similarly, deletion of latrophilin-2 in CA1 pyramidal neurons selectively suppresses afferent synapses from the entorhinal cortex, whereas deletion of latrophilin-3 in the same neurons suppresses Schaffer-collateral input synapses (Anderson et al., 2017; Sando et al., 2019). In synapse formation, latrophilins function as GPCRs and thus as classic signaling receptors. Latrophilin-dependent synapse formation requires interactions with presynaptic teneurins and fibronectin leucine-rich transmembranes in complexes that have been crystallographically confirmed (Lu et al., 2015; Jackson et al., 2018; Li et al., 2018; Sando et al., 2019). Puzzlingly, teneurins have also been proposed to mediate synapse formation via a homophilic trans-synaptic interaction (Mosca et al., 2012; Berns et al., 2018). However, the structure of teneurin molecules suggests that a trans-cellular interaction would be difficult to envision (Jackson et al., 2018). Moreover, no experiments in which pre- or postsynaptic teneurins were separately deleted have been reported, making it unclear whether teneurins function both pre- and postsynaptically. Overall, the exquisite specificity of different latrophilin isoforms in the formation of distinct input synapses on CA1 region neurons suggests that latrophilins contribute to synapse specificity and do not simply mediate establishment of synapses (Sando et al., 2019), favoring the package deal hypothesis outlined above.

Neurotransmitter specificity of synapses

As described above, the presynaptic neurotransmitter type determines the postsynaptic specialization in a synapse, and even in the same neuron, different types of neurotransmitters and receptors are segregated into different synapses. This observation suggests that presynaptic terminals induce postsynaptic specializations corresponding to a specific neurotransmitter type. Consistent with this notion, rapid local release of caged glutamate or GABA using photolysis induces dendritic spines and functional synapses (Kwon and Sabatini, 2011; Oh et al., 2016). Intriguingly, local photolysis of caged GABA stimulates generation not only of GABAergic postsynaptic specializations, but also of dendritic spines and glutamatergic specializations (Oh et al., 2016). However, at the same time, ablation of neurotransmitter release does not impede synapse formation.
Specifically, abolishing evoked neurotransmitter release using genetic approaches does not block generation of spines and formation of ultrastructurally normal but nonfunctional synapses (Verhage et al., 2000;Varoqueaux et al., 2002;Sando et al., 2017;Sigler et al., 2017;Lin et al., 2018;Held et al., 2020). Moreover, uncaging of glutamate or GABA induces postsynaptic specializations only in brain slices from preadolescent mice (Kwon and Sabatini, 2011;Oh et al., 2016), whereas synapse replacement operates throughout life (Fig. 3). Viewed together, we thus have one dataset that suggests that neurotransmitter signals are instructive in synapse formation, whereas another dataset shows that neurotransmitter signals are not required for synapse formation. How can we resolve this conundrum? One hypothesis is that minimal residual neurotransmitter signaling, possibly stimulated by activation of guide adhesion molecules, triggers assembly of synaptic junctions. This idea is attractive, but is not supported by evidence for residual neurotransmitter release and does not explain the specific localization of synapses, since the residual signaling is likely diffuse. A related hypothesis posits that postsynaptic receptors may selectively recruit specific types of presynaptic axons for synapse formation in conjunction with particular SAMs and activation by neurotransmitters. Indeed, this hypothesis is consistent with the photolysis experiments described above (Kwon and Sabatini, 2011;Oh et al., 2016). It would explain the observation that in spinal motoneurons that use acetylcholine and glutamate as cotransmitters, the postsynaptic cell determines whether a presynaptic terminal uses only acetylcholine (muscle cells), both acetylcholine and glutamate (Renshaw cells), or only glutamate (other motoneurons; Bhumbra and Beato, 2018). However, deletions of neurotransmitter receptors also have little effect on synapse formation. Deletion of all GABA A receptors in cerebellar Purkinje cells did not impair GABAergic synapse formation, similar to the deletion of presynaptic GABA release (Fritschy et al., 2006). Moreover, deletion of all GABA A receptors in cultured hippocampal neurons causes only a partial loss of GABAergic synapses (Duan et al., 2019), whereas deletion of glutamate receptors has no effect (Duan et al., 2019). On balance, the evidence thus suggests that under physiological conditions, neurotransmitter signaling does not determine the establishment or specification of synapses. How the neurotransmitter identity of a presynaptic terminal instructs the postsynaptic specialization is therefore another fundamental question that remains unsolved. Glia in synapse formation Extensive evidence suggests that astrocytes play a major role in synapse formation, whereas microglia contribute to synapse elimination. Astrocytic extensions often surround synaptic contacts, creating tripartite synapses in which astrocytes likely contribute to shaping synapses (for a recent review, see Noriega-Prieto and Araque, 2021). Although space constraints prevent me from discussing these events in detail, it is noteworthy that glia also secrete powerful synaptogenic proteins (Bosworth and Allen, 2017). The specific role of these proteins, however, remains unclear, since knockout of these proteins only marginally decreases synapse numbers (Christopherson et al., 2005;Kucukdereli et al., 2011). 
Most of these proteins are secreted by astrocytes in trace amounts but are also present in blood, and the relation of their systemic and central nervous system functions is unexplored. For example, the synaptogenic secreted protein SPARCL1 is a blood component that is also produced at low levels by astrocytes (Fig. 5). How astrocytic proteins induce synapse formation, and what physiological significance their activities have, remains unknown. For most studies, only immunocytochemistry and few functional analyses were performed, and it is often unclear whether these candidate synaptogenic factors are indeed generating new synapses that are functional. Shaping synapse properties How are the diverse properties of synapses determined? Emerging evidence suggests that these properties are not autonomous functions of a synapse, but are dynamically shaped by the bidirectional signaling between pre-and postsynaptic specializations that is mediated, at least in part, by SAMs. The most extensive evidence for this view is derived from studies on neurexins, arguably the best-understood SAMs, which serve as key regulators of synapse properties. Neurexins are presynaptic SAMs encoded by three homologous genes in vertebrates (reviewed in Südhof, 2017). Initially we simplistically proposed that neurexins are "recognition" molecules that redundantly contribute to determining neuronal identity (Ushkaryov et al., 1992;Ushkaryov and Südhof, 1993). However, two key findings quickly challenged the original view of a unitary neurexin function. First, deletion of neurexins caused no change in brain architecture, with little synapse loss, but impaired synaptic transmission primarily by decreasing presynaptic Ca 2+ influx (Missler et al., 2003;Luo et al., 2020). This observation indicated that neurexins are essential for organizing functional synapses, not for initiating their assembly or for conferring identity to neurons. Subsequent work using conditional deletions of neurexins in different types of neurons expanded this finding. In excitatory calyx of Held synapses (Luo et al., 2020) or inhibitory synapses formed by somatostatin-containing interneurons in cortex (Chen et al., 2017), conditional deletions of all neurexins impaired the organization of presynaptic active zones and recruitment of Ca 2+ channels, confirming the original finding. However, deletions of all β-neurexins in the hippocampus impaired synaptic transmission by interfering with endocannabinoid signaling, suggesting a very different function . Moreover, in inhibitory parvalbumin-positive interneurons in cortex, the pan-neurexin deletions suppressed synapse numbers (Chen et al., 2017). These results suggested that neurexins perform major functions at synapses that differ depending on the types of neurons involved. Second, neurexins are expressed in thousands of isoforms that are generated by alternative promoter usage and alternative splicing and are produced in diverse regulated patterns throughout the brain (Ullrich et al., 1995). Moreover, different neurexins and their splice variants have dramatically different functions, suggesting that it is no longer possible to talk about neurexins as a homogeneous protein family. For example, alternative splicing of presynaptic neurexins at splice site 4 (SS4) controls the postsynaptic receptor composition as analyzed in CA1 → subiculum synapses (Aoto et al., 2013;Dai et al., 2019). 
Presynaptic neurexin-1 containing an insert in SS4 (Nrxn1 − SS4 + ), but not neurexin-1 lacking an insert (Nrxn1 − SS4 − ), trans-synaptically increases postsynaptic NMDAR levels without affecting AMPARs (Dai et al., 2019). In contrast, the equivalent presynaptic neurexin-3 variant (Nrxn3 − SS4 + but not Nrxn3 − SS4 − ) decreases postsynaptic AMPAR levels without affecting NMDAR levels. Strikingly, neurexin-1 and neurexin-3 both act by binding to postsynaptic GluD1 and GluD2 using cerebellins as adaptors (Dai et al., 2021). To complicate matters, a completely different neurexin-3 function is observed in olfactory bulb synapses . Here, presynaptic neurexin-3 has no effect on postsynaptic AMPAR levels in excitatory synapses but regulates the release probability of inhibitory synapses. The overall picture that emerges is that neurexins do not perform a unitary function, but that different neurexin isoforms, generated from distinct genes via separate promoters and further diversified by alternative splicing, have distinct roles depending on the identity of the neurons in which they are expressed. These roles include a regulation of the presynaptic release machinery, postsynaptic receptor composition, and synapse numbers. Given the large number of validated transsynaptic ligands for neurexins-more than for any other SAM (Fig. 4)-it seems likely that the diverse roles of neurexins are dependent on differential ligand interactions, but no proof for this idea is available at present. Do other SAMs have a similarly broad role in organizing synapse properties? Initial evidence indicates that this may also apply to LAR-PTPRs. LAR-PTPRs are also expressed from three alternatively spliced genes and (again, similar to neurexins) interact with multifarious postsynaptic ligands (Fig. 4;reviewed in Takahashi and Craig, 2013;Han et al., 2016;Fukai and Yoshida, 2020). Moreover, LAR-PTPRs appear to interact with neurexins in cis, possibly via the heparan-sulfate modification of neurexins (Han et al., 2020). Deletion of all three LAR-PTPRs causes no decrease in synapse numbers, demonstrating that they alone are not essential for making a synapse, but induce an ∼40% decrease in NMDAR-mediated synaptic responses without significantly altering AMPAR-mediated responses Emperador-Melero et al., 2021). Although this phenotype resembles the effect of neurexin-1 SS4-alternative splicing on NMDAR-mediated synaptic responses, in the case of neurexin-1, the surface levels of NMDARs are changed (Dai et al., 2019), whereas in the case of the LAR-PTPR deletion, the surface levels of NMDARs were not impaired . Signal transduction cascades organize synapses Engagement of SAMs presumably controls synapse formation by activating cytoplasmic signals, but little is known about the processes involved. Latrophilins and BAIs, at present the best-validated SAMs in initiating synapse formation, are GPCRs. Recent data indicate that the GPCR activity of latrophilins produces cAMP, and this activity is essential for synapse formation . This observation suggests a role for cAMP and other classic signal transduction cascades in initiating synapse formation. The use of a ubiquitous second messenger for something as specific as synapse formation may appear surprising, but cAMP signaling is highly compartmentalized and context-specific in neurons (Averaimo and Nicol, 2014;Zaccolo et al., 2021;Johnstone et al., 2018). Although enticing, little else is known about what intracellular signals induce synapses. 
This is a central cell-biological question that is now ready to be tackled. Our understanding of the cytoplasmic processes regulating synapse properties is similarly limited. Much is known about the composition of presynaptic active zones and postsynaptic specializations, but how SAM-stimulated signals organize this composition is unclear. What molecular interactions align a presynaptic neurotransmitter signal, such as glutamate, with specific postsynaptic receptors, and how are these receptors coupled to a particular postsynaptic density? Again, without insight into cytoplasmic protein interactions, it will be impossible to make progress on this question. For example, it has been suggested that binding of the postsynaptic scaffolding proteins gephyrin and collybistin to the cytoplasmic tail of neuroligin-2 organizes the postsynaptic scaffold of GABAergic receptors (Poulopoulos et al., 2009). However, at a subset of GABAergic synapses, loss of GABA A receptors leads to a decrease in gephyrin clustering without a change in neuroligin-2, suggesting that neuroligin-2 alone is not sufficient to initiate the organization of GABAergic specializations via binding to gephyrin . This agrees well with the lack of specificity of gephyrin binding to neuroligin-2. The cytoplasmic sequences of neuroligin-2 that bind to gephyrin are also present in neuroligin-1, which is present only in excitatory synapses. The signals that confer specificity of neuroligin-2 to inhibitory and neuroligin-1 to excitatory synapses, and that enable neuroligin-3 to function in both types of synapse, thus remain unknown. Synapses in neuropsychiatric and neurodegenerative disorders Synapses, made and shaped by multifarious trans-synaptic interactions, are arguably the most vulnerable part of the brain because of the highly polarized design of neurons. In most neurons, a complex dendritic arbor is closely connected to the cell body, whereas equally complex axons extend far away from the cell body. Dendrites are generally >10× thicker and 1,000× shorter than axons. Dendrites contain the same organelles as the cell body and are engaged in active protein synthesis and lipid metabolism, thus representing seamless extensions of the neuronal soma. Axons, in contrast, supply distant, highly compartmentalized presynaptic specializations via axonal transport over long distances. Axons contain no Golgi complex, no rough endoplasmic reticulum, and little smooth endoplasmic reticulum, limiting the presynaptic synthesis of proteins and lipids (Hanus and Ehlers, 2016;Younts et al., 2016;Hafner et al., 2019). Membrane proteins, secreted proteins, and lipids are supplied to presynaptic terminals by anterograde axonal transport from the cell body, and all material that is recycled from nerve terminals has to be moved back to the cell body via retrograde axonal transport. As distant outstations, presynaptic terminals are therefore dependent on axonal transport. Thus the architecture of most neurons includes an inherent design fault that renders presynaptic terminals, and thereby synapses, vulnerable. This vulnerability may account for the observation that synapses are a central factor in the pathogenesis of neuropsychiatric and neurodegenerative disorders. Advanced DNA sequencing has revolutionized the human genetics of neuropsychiatric diseases. We now know scores of genetic changes that predispose to neuropsychiatric disorders, including intellectual disability, autism, schizophrenia, and Tourette syndrome. 
Surprisingly, these studies implicated dysfunction of numerous genes in neuropsychiatric disorders. In many cases, the same genes predispose to different clinical entities (Taylor et al., 2020;Guang et al., 2018;Coelewij and Curtis, 2018;Keller et al., 2017;Manoli and State, 2021;Schaaf et al., 2020). Many of these genes operate in synapses. A key example is the neurexin-1 gene (NRXN1). One of the more common copy number variations observed in neuropsychiatric disorders localizes to chromosome 2p16.3 and inactivates only NRXN1 expression because of the large size of the NRXN1 gene (Südhof, 2008;Kasem et al., 2018;Hu et al., 2019). The heterozygous NRXN1 deletion predisposes to a range of neuropsychiatric disorders. It is among the leading monogenic causes of schizophrenia, autism, and Tourette syndrome. Comparison between human and mouse neurons carrying mutations in NRXN1 revealed that human synapses are more susceptible than mouse synapses to impairments induced by such mutations. Whereas heterozygous NRXN1 mutations in mouse neurons produced no detectable changes, they suppressed excitatory synaptic responses in human neurons (Pak et al., 2015(Pak et al., , 2021. These impairments were reproduced in patientderived NRXN1-mutant neurons (Pak et al., 2021). These findings provide an example of the indirect relationship between genetic changes, synaptic impairments, and neuropsychiatric diseases, illustrating the challenges we face in developing new therapies for these devastating disorders. A different picture emerges for neurodegenerative disorders, which are quintessentially related to aging. As we age, cognition declines, possibly because synapses and neurons become weaker when damage accumulates. However, recent results suggest that this is only part of what happens during aging. Pioneering studies in mice showed that age-dependent decline in cognition and synaptic plasticity could be partially reversed by exchanging the blood of old with that of young mice (reviewed in Pluvinage and Wyss-Coray, 2020). This "rejuvenation" of the brain by systemic factors cannot be explained solely by a stimulation of neurogenesis, because the synaptic plasticity changes occur in brain regions, such as the cortex, that are not subject to adult neurogenesis. It suggests that synapses age because synaptotrophic mechanisms are maintained by systemic factors that decline as we age. The mechanisms involved are unclear. At least two proteins that are present at much higher levels in the blood of young vs. old mice, SPARCL1 and thrombospondin-4, directly stimulate the formation and enhance the strength of synapses Südhof, 2019, 2020). Whether these factors directly act on neural circuits in vivo, however, remains unknown. When aging is associated with neurodegeneration, such as observed in Alzheimer's disease, synapses are among the first structures affected (Terry et al., 1991;DeKosky et al., 1996;Scheff and Price, 2006). At present it is unclear if the demise of synapses in neurodegenerative disorders is a nonspecific symptom, a revealing phenotype, or a diagnostic byproduct. No genes involved in neurodegeneration (except for α-synuclein) have been directly implicated in synaptic function, although presenilins and amyloid precursor protein (APP), which are causally mutated in familial Alzheimer's disease, appear to contribute to synaptic function. 
Mutations in amyloid precursor protein (Torroja et al., 1999;Wang et al., 2005;Priller et al., 2006) and presenilins alter synaptic functions, although the mechanisms remain unclear. Further, ApoE4 (the major genetic risk factor for sporadic Alzheimer's disease) is important for promoting synapse formation (Huang et al., 2019). It is tempting to speculate that there is a relation between the age-dependent decline in systemic factors supporting synaptic function, the aging-induced predisposition to neurodegeneration, the possible role of genes causing familial Alzheimer's disease, and the impairments in synapses observed early in neurodegeneration, but what that relation is remains unknown. Outlook and enduring questions Understanding the dynamics of synapses-their initial formation, the specification of their properties, their plasticity, and their turnover-is arguably one of the most important challenges in neuroscience. Efforts to meet this challenge have only started. At present, no definitive description of the basic cellbiological processes that underlie synapse formation is available. Synapse formation is highly relevant for understanding neural circuits. How will we ever gain insight into how circuits control behavior, if we don't understand the transfer of information from one neuron to the next? Clearly, this transfer is dependent on the formation and elimination of synapses, which is a diverse and dynamic process in vivo. Among the many basic questions that need to be addressed, I would like to list a few important points. First, what molecular logic, mediated by gene transcription and mRNA splicing, drives synapse formation? In other words, how is the specific identity of different types of synapses determined, and how is their plasticity programmed? This is of paramount importance for insight into how neural circuits are constructed. Second, in a related question, how are synapses established? I proposed three hypotheses: that synapses are established in a canonical process following partner choice, that synapses are established nonspecifically by default and partner choice is effected post hoc by elimination of noncognate synapses, or that partner choice is part of diverse synapse establishment mechanisms mediated by distinct combinations of SAMs. Which of these hypotheses is correct is a major question to be addressed. Third, what signal transduction pathways organize synapses? Synapse formation and elimination, independent of their mechanisms, are likely controlled by intracellular signals that are activated by SAMs, but the nature of these signals is unknown. At present, synapse formation and elimination are black boxes: We have initial insight into some of the extracellular interactions involved, but we have no idea what actually happens in a neuron during these processes. Fourth, what is the cell-biological basis for the design of the canonical presynaptic machinery compared with the nonoverlapping diverse composition of postsynaptic specializations? A subquestion here is how presynaptic neurotransmitters control the makeup of postsynaptic specializations, even though neurotransmitter signals don't seem to be involved. Fifth, digging deeper into the cell biology, how does a presynaptic neuron sort different vesicular transporters into distinct vesicles that are then targeted to separate synaptic junctions? What cell-biological mechanisms allow for such exquisite specificity? Sixth, what signals and mechanisms confer specific properties onto synapses? 
Clearly SAMs such as neurexins and their ligands are intimately involved, but how are they in turn regulated, and by what mechanisms do they function? Finally, despite hundreds of papers, synaptic plasticity, especially long-term plasticity, remains an enigma. There is little insight into mechanisms besides the fact that NMDAR-dependent LTP involves recruitment of postsynaptic AMPARs and that at least in some instances, neurexins and neuroligins are necessary to render synapses competent for LTP. Moreover, there is scant evidence that long-term plasticity per se is physiologically important for a behavior, despite abundant manipulations of molecules with multifaceted roles that happen to also affect LTP. However, this lack of specific manipulations has not curtailed speculation that LTP is involved in memory, drug addiction, and scores of other human brain activities. I hope that this review will be helpful in motivating studies in these large, and largely unexplored, areas.
A DATA-DRIVEN APPROACH FOR ESTIMATING THE FUNDAMENTAL DIAGRAM

The fundamental diagram links average speed to density or traffic flow. An analytic form of this diagram, with its comprehensive and predictive power, is required in a number of problems. This paper argues, however, that, in some assessment studies, such a form is an unnecessary constraint resulting in a loss of accuracy. A non-analytical fundamental diagram which best fits the empirical data and respects the relationships between traffic variables is developed in this paper. In order to obtain an unbiased fundamental diagram, separating congested and non-congested observations is necessary. When defining congestion in parallel with a safety constraint, the density separating congestion and non-congestion appears as a decreasing function of the flow and not as a single critical density value. This function is here identified and used. Two calibration techniques – a shortest path algorithm and a quadratic optimization with linear constraints – are presented, tested, compared and validated.

INTRODUCTION

Traffic flow theory is the basis for understanding, controlling and predicting the movements of vehicles. It deals with variables at different levels of aggregation in time or space and determines relations between them [1]. The most popular, simplest and oldest relation in the traffic flow theory is the fundamental speed-density relationship. Its history and developments are described in [2, 3]. The graphical representation of the relationship between any two of these variables is called the fundamental diagram (FD). It describes how speed decreases with density – this decrease is due to safety reasons. An analytical FD allows an easier representation of traffic phenomena and is often necessary in at least three cases: when relations between car-following models and the FD are investigated [4]; when considering a stochastic traffic flow model, which leads to specifying a stochastic FD and making analytical assumptions about its form and the form of its random fluctuations [5]; when constructing explicit solutions to the Lighthill-Whitham-Richards (LWR) traffic flow model [6]. In other cases, however, such as with macroscopic traffic simulation models or assessment studies [7], an analytic FD is not necessary. Furthermore, recent sensor approaches, such as floating car data (FCD) or similar means, have a wide spatial coverage regarding their measurements but do not provide information about traffic state variables such as flow or density. An already calibrated FD is used for the estimation of these variables ([8] and [9]). The use of an analytical FD does not allow the estimation of state variables for high speeds since the FD curve often has a very flat branch near free-flow speeds. Using an empirical FD makes it possible to take into account all data and to obtain more accurate values for different speeds. In this contribution, the proposed speed-density relationship is neither analytic nor a set of analytic functions, but just the function which best fits the data.
To respect the traffic flow theory, it is mandatory to have a decreasing speed as a function of density.When density increases, the average space headway between two consecutive vehicles decreases, thereby decreasing the space gap.A decreasing speed must accompany this decreasing gap; otherwise, the stopping distance (which increases with the speed) becomes unsafe.Appendix A shows that, when defining congestion in parallel with a safety constraint, the density separating congestion and non-congestion is, most often, a decreasing function of the flow.This function generalizes the critical density value; it With a global form being somewhat restrictive, a set of piecewise linear regressions that is able to reproduce the capacity drop is successfully proposed in [23]. Kerner [24] argued that congestion includes two different traffic phases: the synchronized traffic (when the downstream front is fixed at a bottleneck) and the wide moving jam.This reduces the scope of use of the FD, which should be used only in cases where the congestion structure is near the one prevailing when the FD has been calibrated. Forerun by Daganzo & Geroliminis [25], who provided empirical evidence for the existence of an urban-scale FD, many promising approaches and applications explore network-scaled relations between vehicle density and space mean flow -called macroscopic fundamental diagram [15,26]. In this paper, an empirical fundamental relationship is reworked to ensure that speed decreases with density.Until recently, a low computational cost was required, but nowadays computers perform fast calculations, even with a great number of parameters.When a large number of parameters is justified and does not decrease the robustness of the model, there is no justification for calibrating a simplified model using only a few parameters, which implies a loss of information.Building an FD which best fits the data leads inevitably to increased accuracy. TYPE OF DATA AND THE CALIBRATION PROCESS 3.1 Type of data used for calibrating an FD Data for calibrating an FD consists either of trajectory data [27] or of loop data by vehicle at some points, or of loop data aggregated for a time period (flow, occupancy and possibly average speed). The first category of data enables comprehension of phenomena and accurate preprocessing in order to remove the noise.The second category allows specifying FDs by length of vehicle [28] and acutely studying the variability.Data of the third category are more commonly available and used. The fundamental equation of traffic flow establishes the relationship between the three main macroscopic variables -flow, speed and density.If a relationship is established between any two of the variables, the relationship of the third one can be controlled by the following equation: where K is the density, Q is the flow and V s is the space mean speed.impacts the fundamental diagram.Additionally, it should be noted that, for high flows, the safety constraint might be not complied with. The only constraint used to build the non-analytical FD in this paper is to respect the assumption that speed decreases when density increases. In this paper, the use of the FD and the state of the art are presented in Section 2. The type of data and the calibration methodology are presented in Section 3. 
Two approaches are explored for determining the FD.In the first one, the shortest path approach (SPA, Section 4) gives the congested and non-congested speeds related to the flow; the non-congested speed is constrained to decrease when the flow increases, whereas, in congestion, speed is constrained to increase with the flow.In the second one, a decreasing speed related to density is obtained due to an algorithm -the linear quadratic optimization with linear constraints (QOLC, Section 5). Section 6 is dedicated to validation and transferability.Then, some conclusions and perspectives are outlined. Two appendices are dedicated to the density threshold separating congestion and non-congestion -the first one in relation with the safety constraint and the second one in relation with a particular FD (Underwood, [10]).These appendices show that this density threshold is a decreasing function of the flow. THE USE OF THE FUNDAMENTAL DIAGRAM According to Coifman [11], "much of traffic flow theory depends on the existence of a fundamental relation between flow, Q, density, K, and space mean speed, V." In first-order traffic models, an FD is used in conjunction with a conservative equation, initial conditions and conditions on demand [12] [13].In classical second-order models [14], the FD is included in a speed equation that takes into account dynamic space and time effects.The new family of second-order models, the generic second-order modeling (GSOM), combines a first-order model with the dynamics of driver-specific attributes [15]. A number of analytical speed-flow or speed-density relationships have been proposed and calibrated, including by Greenshields [16], Greenberg [17], Edie [18], Underwood [10] and May [19].These models, however, do not take into account certain phenomena such as spontaneous congestion, random variations, capacity drop and hysteresis, that are described in [20], [21].A functional form for the FD, based on generative functions applied on an inverse of a generalized space interval, is proposed in [22].This functional form is the solution of a system of constraints (speed decreases with density; a concave flow density relationship...). Calibration of the fundamental diagram Data used in the following sections consist of six-minute records of average speed and flow (Q i , V i ), with i as the time index; density K i is derived as .V Q i i Data were collected in 2009-2010 on the A1 motorway linking Paris to Lille, in the north of France.The speed-density or speed-flow relationships are calibrated using a dual loop detector on a section situated at 4 km from the city of Lille.The speed there was limited to 110 km/h as on other urban motorways in France; since traffic operators wanted to reduce this speed limit to 90 km/h, an ex-ante assessment of this measure was made [7], where fundamental diagrams, as close as possible to the data, were required to facilitate the comparison between the simulated new scheme and the empirical data.This was the motivation for this paper.On this road section, the motorway has five lanes in each direction.The relationships presented here are calibrated for the third lane, from Paris towards Lille.We used data from the year 2010.After the elimination of empty periods (six-minute periods without vehicles) and irrelevant data (speed < 2 km/h or speed > 200 km/h, or traffic flow ≥ 320 vehicles/ 6 minutes), 58,000 six-minute data were used out of the 87,600 of total data. 
The space mean speed is computed when the speed v_j is available for every vehicle j on the whole section; the space mean speed is then their harmonic average. When only flow and density are available, the space mean speed is defined as their ratio. When data come from a dual loop detector, the spot speed of vehicle j when it reaches the detector is marked as v_j; although these spot speeds differ from the previous definition, their harmonic average is used for the space mean speed (Equation 1): V_s = n / Σ_j (1/v_j), with n the number of vehicles observed during the period.

When data come from a loop detector giving only flow and occupancy x, and if an estimation of the average length of vehicles is available, a relation analogous to Equation 1 arises. Indeed, let l be the length of the loop detector and L_j the length of vehicle j; each vehicle j passes the sensor during a time equal to (L_j + l)/v_j, where v_j is the spot speed of vehicle j at the detector. Occupancy (for one time unit) is the sum of the passage times of the vehicles passing the detector; their number, in a time unit, is the traffic flow. Let L_eff be the effective average of the lengths L_j, weighted by the inverse spot speeds 1/v_j; occupancy then satisfies x = (L_eff + l) · K, so density can be estimated as K = x / (L_eff + l).

Drivers might adapt their speed with respect to their time headway, which is linked to the flow: expressed in suitable units, the flow is the inverse of the average time headway. The data used to calibrate a speed-flow relationship often consist of flows (Q_i) and average speeds (V_i) at a traffic detector for a set of periods i. The FD generally does not synthesize the data properly because these data are, for different reasons, very scattered (see Figure 1; the curve plotted there corresponds to the generalized exponential model Q = V_f · K · e^(-(K/K_0)^a) with V_f = 111.9 km/h, K_0 = 36.8 and a = 1.48). An important cause of variability arises from inhomogeneous conditions due to accelerations, transient states, platoons and "synchronized" vehicles. In an inhomogeneous period, the observed flow Q is partly obtained during non-congested sub-periods with low occupancy (i.e., low density) and partly during congested sub-periods with high occupancy (i.e., high density). Due to individual or contextual (meteorology, ...) reasons, some drivers avoid high speeds at free flow but have a common behavior at capacity. This speed reduction implies an increase of the flow/speed ratio (thus of density) which vanishes when the flow increases towards capacity.

To calibrate a speed-flow diagram, it is necessary to use a threshold for separating periods of non-congestion and periods of congestion in the empirical data. Non-congested periods contribute to the calibration of the non-congested branch of the FD, where the speed V_free(q) is a decreasing function of the flow, and the ratio K_free(q) = q / V_free(q) is increasing. Congested periods are used to calibrate the congested branch of the FD, where the speed V_congested(q) increases with the flow and the ratio K_congested(q) = q / V_congested(q) is decreasing. The two branches of the FD meet at the point where the flow is maximum: the speed is then the critical speed, and the density (the capacity divided by the critical speed) is the critical density. The capacity might be taken as the highest traffic count observed; the critical speed is the average of the harmonic mean speeds observed at the periods of highest traffic count.

When browsing the non-congested branch of the FD from zero flow to capacity, the ratio K_free(q) = q / V_free(q) increases from zero to the critical density. Also, the critical density is generally used to separate congested and non-congested points. However, the lower flows are little concerned by the critical density. Nothing prevents defining congestion (and thus separating congested periods from non-congested ones) not with a single critical density threshold, but with a threshold function of the flow, k(q). This function is assumed to be monotonous; swinging is not explainable and would result from over-identification of the FD from the dataset. When it comes to a density threshold function, there are several arguments for it decreasing rather than increasing with the flow: - It results in a better numerical adjustment (see Remark 2). - A parallel between the density required by a safety constraint and the density threshold function shows that this function is decreasing, at least until the flow reaches a certain value (see Appendix A).

At a given flow q, the empirical dataset (q_i, v_i) leads to a dataset in the (flow x density) space through the densities k_i = q / v_i. For a given flow q, the splitting of the k_i = q / v_i into two parts, around K_free(q) and K_congested(q), is optimal (i.e., the deviation is minimum) when the threshold on k_i is equal to (K_free(q) + K_congested(q)) / 2. This function decreases when q increases for the well-known Underwood FD [10] (see Appendix B). This is a clue for a decreasing density threshold.

The equations and the process giving the FD (within the above constraints) are below. For readability, the width of each class is one unit, so there are q_max classes for a flow from 1 to q_max. The periods are grouped in a class according to their flow (subscript q). For every flow class q, let n_q be the number of periods of the class, and let Q_q^j and V_q^j, j = 1…n_q, be the empirical flows and average speeds, respectively. Without any loss of generality, for every flow class q, the (Q_q^j, V_q^j) pairs are assumed to be sorted when j varies from 1 to n_q according to the ratio Q_q^j / V_q^j (the first j corresponds to the smallest ratio). Let W_free(q, k) and W_congested(q, k) be the averages of the empirical speeds V_q^j when the density is smaller or greater than k, respectively, and let n_{q,k} be the number of class q periods with density smaller than or equal to k. If we consider that, for the flow class q, k is the density threshold separating congested and non-congested periods, the empirical FD will consist of two values at the traffic flow q. The average of non-congested empirical speeds is W_free(q, k) = (1 / n_{q,k}) · Σ_{j=1..n_{q,k}} V_q^j (Equation 7). The average of congested empirical speeds is W_congested(q, k) = (1 / (n_q - n_{q,k})) · Σ_{j=n_{q,k}+1..n_q} V_q^j (Equation 8). The measure of the scatter at the flow level q is the residual between the observed and modeled speeds; if the threshold separating congested and non-congested regimes were the density k(q) (typically, whatever q is, the critical density k_crit), the measure of the scatter would be E_q²(k(q)) = Σ_{j=1..n_{q,k(q)}} (V_q^j - W_free(q, k(q)))² + Σ_{j=n_{q,k(q)}+1..n_q} (V_q^j - W_congested(q, k(q)))² (Equation 9).
Speed must decrease with density Equation 12does not always imply a decreasing speed with density.Both parts of the FD, where the functions are monotonous, are examined below: 1) The non-congested part In this condition, the two last terms of Equation 12 imply that, when q varies, the speeds W , q k q free ^h can be extended in a decreasing function related to the flow V(Q).It can be assumed that this function is continuous and differentiable and V ' be formed, giving the speed related to the density.We show below that vice versa also applies, that speed also decreases with density (i.e., the constraint is satisfied).Indeed, as ( ) the density related to the flow, its derivative related to Q is: As V'(Q) is negative, Equation 14 implies that K'(Q) is positive and Q'(K), the derivative of Q related to K, is also positive.Indeed, at every point K=k, the value of Q'(K) is the inverse of the value of K'(Q) at Q=q.The derivative related to K of the compound function V•Q is: 2) The congested part In this part, the two first terms, W . 12are extended in an increasing function (also marked as V(Q), assumed to be continuous and differentiable with V'(Q)≥0 as well).This does not imply that the speed also decreases with density.Indeed, as V'(Q(K))≥0, K'(Q) given in Equation 14 is the sum of positive and negative terms, it is, therefore, not always negative.This is verified only if the following applies: This constraint is introduced in the SPA by removing the links between the nodes that do not respect this constraint. The steps of the algorithm 1) Initial step, for q=1 For the flow class q=1, nodes (q=1, W free , W congested , k) are reached at a cost depending only on k, equal to: q k q q q j q k q free q j q k q congested j n The clustering of the n q pairs in two classes -from 1 to n q,k crit and from n q,k crit +1 to n q is not optimal for decreasing .E , q k q 2 ^h Additionally, the paper proposes to identify the solution of function k(q) (Equation 10) for the whole set of traffic flows: with two constraints: The minimization provides k(q), decreasing with q.Remark 1.At a given flow, the level of k(q) impacts both W q congested and W q free .For instance, k(q) higher than the critical density implies that some periods (those the density of which is comprised between k crit and k(q)) are no longer considered to be congested.Their speed is lower than the average non-congested speed and higher than the average congested speed.As these periods pass from congestion to noncongestion, this makes both W q,,k(q) congested and W q,k(q) free lower than W , q k congested crit and , W , q k free crit respectively.In turn, if for all flows q the density thresholds k(q) are higher than the critical density, both branches of the empirical FD will be lower.The higher the density thresholds, the lower the empirical speeds for both branches. The following are the observations made on the main features of this approach: the graph, the constraint of a decreasing speed related to density, the algorithm. 
Nodes and links The SPA leads to an approximation of the solution in q max steps.The nodes are quadrupled (q, W free , W congested , k), where q is a flow class; k is a density k min ≤ k ≤ k max ; W free and W congested are speed values, included in specified discrete intervals, typically: W free , W congested , and k are integers, in the units specified by the user.This constraint avoids the creation of an infinite number of nodes.Then, it is recommended (not mandatory) to reduce the number of nodes created by adding the constraints: where V min (q), V min,free (q), V max,congested (q), V max (q), k min , k max are specified by the user.Remark 4. The simplified form of this approach would apply to calibrate a speed-density relationship: instead of flow classes, density classes k are considered; for every density k(k>0) and admissible speed W, a node (W, k) is created; for every speed Z, 0≤Z≤W, a link is created between (W, k) and (Z, k+1); the cost of every link towards this node is where n k is the number of observations of class k, and is generally lower than 80 vehicles/km.In this section, we built density classes of subscript k=1…m -here m=160, and the width of a density class is 0.5 vehicle/km.To each period i, a density class k is assigned according to the value of K i .Data become (Q j k , V j k ), the flow and average speed of the j-th period, the density of which belongs to class k. When W , q k free or W , q k congested are not defined, the node is removed.It is easy to see that the optimization of Equation 10is equivalent to the optimization of the deviation between the empirical speed averages (by flow class) and the FD speeds.This deviation is equal to the square root of the sum of Equation 17 for all flows q, divided by the total number of periods: Note that the speed averages might be replaced by their median in Equations 7-9 and 17. 2) Steps 2 to q max For q>1, links are created towards every node (q, W free , W congested , k), from existing nodes of the preceding flow level q-1 (q-1, W f , W c , k'), for any W f greater or equal to W free , any W c smaller or equal to W congested and any k' greater or equal to k.These links have the same cost, depending only on q and k, equal to C(q, W congested , W free , k), as defined in Equation 17. When W , q k free or W , q k congested are not defined, the cost of the link is assigned to a very high value.When the flow class q-1 is empty, q-1 is replaced by q-2, etc. 3) Final step The algorithm ends when the highest flow class is treated.The speed-flow relationship is provided by the path(s) leading to the nodes of minimal cost of the last step (q=q max ).The number of nodes, then the memory and the time required for applying the algorithm, depend on the width of flow classes and on the size of the discrete sets used for speed and density. Results Figure 2 shows the relationship obtained with the dataset used in this Section 3 for the year 2010 (width of flow classes: 50 vehicles/hour, speed unit of 1 km/hour, density unit 1 vehicle/km). 
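The simplified speed-density variant described in Remark 4 above can be read as a shortest-path (dynamic programming) problem on a layered graph, with one layer per density class, discrete speed nodes, links only towards equal or lower speeds, and the node cost n_k · (W - V̄_k)². The Python sketch below illustrates this reading; the speed discretization, the upper speed bound and the handling of empty classes are simplifying assumptions rather than the paper's exact implementation.

import numpy as np

def spa_speed_density(n_k, vbar_k, w_max=140):
    """Minimal sketch: n_k[k] = periods in density class k, vbar_k[k] = their mean speed."""
    m = len(n_k)
    speeds = np.arange(w_max + 1)                     # admissible integer speeds, km/h
    cost = [n_k[k] * (speeds - vbar_k[k]) ** 2 for k in range(m)]

    best = np.asarray(cost[0], dtype=float)           # best path cost ending at (W, k=0)
    pred = np.zeros((m, w_max + 1), dtype=int)
    for k in range(1, m):
        # A node (Z, k) is reachable from any (W, k-1) with W >= Z, so the best
        # predecessor cost for Z is the minimum of `best` over speeds >= Z.
        pred[k] = np.array([int(np.argmin(best[z:])) + z for z in range(w_max + 1)])
        suffix_min = np.minimum.accumulate(best[::-1])[::-1]
        best = suffix_min + cost[k]

    w = np.empty(m, dtype=int)                        # backtrack the optimal profile
    w[-1] = int(np.argmin(best))
    for k in range(m - 1, 0, -1):
        w[k - 1] = pred[k, w[k]]
    return w                                          # non-increasing speeds, one per class

By construction, each class's fitted speed is never larger than the previous one, so the resulting speed-density relation satisfies the decreasing-speed constraint.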
The optimization provides a deviation D SPA (between the empirical average speeds and the FD speeds) equal to 0.35 km/h.The critical speed is 83 km/h; the critical density is 26 vehicles/km.For very low flows (less than 450 vehicles/hour) there were no data for estimating the speed in congestion.The capacity (2,190 vehicles/hour) comes from the maximum traffic count observed (219 vehicles) in a six-minute period.This points to the main drawback of the approach: the algorithm aims to reach Reversing the roles of speed and density, a density-speed relation can be calibrated.Then, speed classes are built, which should be in accordance with [23], where constant-speed fluctuations are highlighted.The average of the empirical densities for a speed class v, marked as , K v replaces V k used for density class k.To follow [23], this average may be replaced in Equation 21 by the median of the (K j v , j) being such a period that its average speed is in class v. In Figure 4, where the X-axis is density and the Y-axis is speed, the QOLC speed-density relationship and the points derived from the SPA relationships were plotted concomitantly. The following can be seen: -Small oscillations of the SPA speed for high densities (greater than 43 vehicles/km).These appear because the additional constraint of decreasing speed with density was not implemented.-A probable underestimation of capacity in the QOLC approach.-A maximum density higher for the QOLC than for the SPA.Indeed, the SPA uses flow classes; each flow class including two sub-classes, the first one for the congested values (high densities), the second one for the non-congested values (low densities); the mean density of the first sub-class, since it is a mean, is always lower than the maximum density.This type of aggregation is not made in the QOLC.-Some differences between the SPA and QOLC speeds for medium and high densities.The SPA speeds are higher than the QOLC ones for The objective is to find the speed vector (W k ), decreasing with density k, the closest to the data.It is the solution for the quadratic programming problem with linear constraints: with the following constraints: where n k is the number of periods for the density class k.This problem is solved using the software R ® and SCICOS ® packages [29].If the empirical mean speed for class k is marked as , V k it is easy to see that replacing V k j with V k does not change W k .Therefore Equation 19 can be replaced with: The empirical speed average and the optimized speed are plotted by density class in Figure 3.There are very few differences between both curves. The capacity of the road is derived as the maximum of products k.W k whatever the density class k, 1≤k≤m.This maximum is reached when the density class k is 25 veh/km; in this class, the average density, average speed and flow were K=25.2 vehicles/km; V=71.5 km/h, thus, Q=1,802 vehicles/h.This critical density is in accordance with the critical density of the first approach (26 vehicles/km).However, the capacity is lower than the one of the first approach (Q=2,179 vehicles/h).Indeed, W k is linked to the average and not to the maximum speed recorded in class k, it is the same for the product k.W k .So, this technique underestimates the capacity. 
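Because the QOLC problem of Equations 19-21 amounts to a weighted least-squares fit of the class mean speeds under a non-increasing constraint, it can also be solved without a general quadratic-programming package, for instance with the pool-adjacent-violators algorithm. The sketch below is illustrative only; it assumes the per-class counts n_k and mean speeds V̄_k have already been computed, and standard isotonic-regression routines that accept sample weights and a decreasing option should return the same vector.

def qolc_decreasing(vbar, weights):
    """Weighted least-squares fit of a non-increasing speed vector W_k to the class means."""
    # Solve the equivalent increasing problem on the reversed sequence, then reverse back.
    y = list(reversed(list(vbar)))
    w = list(reversed(list(weights)))
    blocks = []            # each block: [weighted mean, total weight, classes pooled]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # Pool adjacent blocks while the increasing constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            tot = w1 + w2
            mean = (m1 * w1 + m2 * w2) / tot if tot > 0 else 0.0
            blocks.append([mean, tot, c1 + c2])
    fitted = []
    for mean, _, count in blocks:
        fitted.extend([max(mean, 0.0)] * count)       # enforce W_k >= 0
    return list(reversed(fitted))                     # non-increasing in the class index k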
The sensitivity of the results to the number of classes and to their width was tested with m=80 density classes of width 1 (instead of 0.5) vehicle/km; the optimized speeds remained very close (less than 0.7 km/h) to the previous in most cases, except for density 41-43 (the speed difference was 1.2 km/h), and for density 23 (the speed difference was 2.2 km/h). Figure 4 -Speed-density relationships obtained by both approaches The validity of the non-analytic approach of this paper is discussed below.This discussion is based on the deviation between the obtained FD and the empirical speed average, by class of density in the QOLC method or by flow class in the SPA.These deviations are equal to D SPA or D QOLC , the quantities minimized in Equation 18for the SPA approach and in Equation 21for the QOLC.The penultimate and ultimate columns of Table 1 give D for the QOLC and the SPA methods. D must be low not only on the calibration dataset but also on other datasets.The results were validated on other datasets of the same section of the same motorway.Table 1 gives the results of the calibration, validation, and transferability of the FD.The calibration is based on the first six months of the year 2010 (Table 1, line 1), the validation on the last six months of the same year (Table 1, line 2) or on the lane of the same section with the same lane number in the opposite direction (line 3). The transferability of the method is assessed by the values of D obtained by applying the same calibrated FD on the faster or slower lanes of the same motorway section (Table 1, lines 4 and 5) or on middle lanes of other close motorways (A22, A25, lines 6 and 7). Deviations are very small on the calibration set as well as on the validation datasets; this validates the methods.The calibration of the FD is rarely transferable to the other lanes of the motorway, or to the same lane of other motorways.This is not surprising.Since the objective is to be the closest to the data, a new calibration is needed for each lane. CONCLUSION We developed a methodology in this paper to establish a fundamental diagram which best fits the data, free of any analytical form, considering just the assumption that drivers adapt their speed to their densities between 20 and 40 vehicles/km, then lower for densities greater than 40 vehicles/km (congestion).Are these differences explainable? 1) There is a possible explanation for lower SPA speeds at high densities (beyond 40 vehicles/km): when flows are low, high-density thresholds have been used for specifying the congestion.A low SPA FD speed is in accordance with Remark 1 of Section 4 -the higher the density thresholds, the lower the empirical speeds for both FD branches.2) There is no direct explanation for high SPA FD speeds at higher flows (densities between 20 and 40 vehicles/km): the density thresholds used are no longer high, which makes the SPA FD speeds not lowered, but this does not explain why they are higher than the QOLC FD speeds.Much lower density thresholds should have been used to make the SPA and QOLC FD speeds equal.This highlights the sensitivity of the SPA FD speeds to these density thresholds.It is both a benefit and a danger for the calibration of a speed-flow FD.This can contribute to feeding the debate on the still sensitive subject of the fundamental diagram. 
VALIDATION AND TRANFERABILITY The assessment of the FD is based on the deviations between the FD and empirical data.For a speed-density relationship (and a density class pattern), the mean squared error (MSE) is given according to the value , E n 2 where E2 is given in Equation 19.E cannot be lower than the value obtained when , W V k k = the empirical speed average.In this case, , n E 2 computed on the calibration dataset is equal to 7.6 and 8.5 km/h for the QOLC and the SPA approaches, respectively.function of the flow.Such a function rightly reduces the importance of the critical density and impacts the FD speeds related to the flow.In the case of a density threshold decreasing from a high value (at very low flows) to the critical density (at capacity), both FD congested and non-congested speeds are increased.Besides, at low flows, periods whose empirical density is between the critical density and the threshold function are no longer considered to be congested.This is consistent with a definition of congestion based on an extended safety constraint.In every case, it would be sensible to first carefully check the results, to derive the critical density, the free-flow speed and the capacity, and to check these values with the same parameters obtained by other methods. A correct free-flow speed is obtained with both methods, the QOLC and the SPA.Both methods could be used, achieving the main objective of this research, which is to give a fundamental diagram relationship, free of any analytical form but with respect to the traffic flow phenomena. ACKNOWLEDGMENTS Many thanks to Arthur de la Rochefoucauld, who developed the optimization approach. Analytic FDs have a comprehensive and a predictive power, which is required in a number of problems: regarding car-following [4], with stochastic models [5], construction of explicit solutions of traffic flow models [6].However, as only few parameters are calibrated for determining the whole relation between speed and density, some loss of information appears between the empirical data and the analytic FD.This loss can be drastic for certain problems, such as estimating traffic state variables (the flow) from empirical FCD speeds.This issue occurs when FCD or Bluetooth sensors replace the common traffic loops.The relationship between speed and flow that an analytic FD provides at high speeds is too flat and not so accurate to be inverted, which is necessary to obtain the flow from the speed.For this problem, or for an ex-ante assessment of a traffic management strategy, the power of the analytic FD is not needed; it is better to use an FD close to data. For the calibration of the FD, sampling is an important task with regard to the clustering variable as well as the number and size of classes.Depending on whether we consider the speed-density or the speedflow relationship of the FD, density or flow classes are built.Using density classes does not enable us to have the exact value of the road capacity.Indeed, in the QOLC approach, the class corresponding to the critical density contains not only periods at capacity, but also periods with simultaneously a lower flow and a lower speed; this makes the average flow of the periods composing this class lower than the capacity.On the other hand, in the SPA, using flow classes results in an easy capacity identification -the capacity corresponds to the highest empirical flow class. 
Furthermore, the optimization method can be extended to more constraints, such as the concavity of a flow-density FD, which is required in the first-order LWR traffic flow model and in the GSOM models. The QOLC or SPA results consist of as many parameters as the number of classes.It is well known that the fewer the parameters, the more significant they are.But the quantity of data (in this case, 58,000 six-minute periods) allows the calibration of a high number of parameters, even if the traffic variables are not fully independent.A calibrated FD can be used on other periods of the same section, or on a symmetric section; elsewhere, other calibrations are necessary. Within the SPA, congested and non-congested periods might be separated either according to a constant density threshold (the critical density) or to a threshold Equations 24, 25 and 28 are defined when: The roots of the equation D=0 are When considering a=0.6, there are three possibilities, according to u. a) case u>1 This is the common case: when T=1 second, C=6m/ s 2 , L=4m, then u=1.15.In that case, Q 2 <0 and Q 1 >0.Equation 29 -and, in turn, the safety constraint -are satisfied when The safety constraint is not satisfied when traffic flows are higher.When Q<Q From Equation 28, , K Q ' 1 ^h is negative or null when: The root of the numerator of the first term of Equation 33 is: When Q >Q N , the numerator is negative; in this case, 33 is satisfied only when this numerator is, in absolute value, lower than the denominator, which is equivalent to: Appendix A. Density related to flow with regard to a safety constraint Let V be the velocity of a vehicle, C its maximum deceleration, T its reaction time, H its distance headway (between its front bumper and the front bumper of the prior vehicle). Assuming that an object falls from the prior vehicle, the following driver is able to avoid it if his stopping dis- + is lower than the distance headway minus the length L of the prior vehicle.The constraint is even heightened or generalized when the stopping distance is lower than the headway, multiplied by a coefficient a lower than 1, and when L includes a safety distance: "Non-congestion" can be defined as periods when, for a certain a, Equation 22 is valid. Taking the meter as the space unit (instead of the kilometer) makes . Then, using the relation Q=K.V: According to ^h the quadratic equation associated to Equation 23 has either no root, when D<0, then drivers cannot respect the constraint, or, when D≥0, has two roots, K 1 (Q) and K 2 (Q), relating density to flow: For these roots, the stopping distance just complies with the constraint.When Q is near zero, ( ) a and . 
When Q=0, the root K 1 (Q) convenes for the congested branch of the FD.By continuity, K 1 (Q) convenes any Q.We assume that when K 2 (Q)≤K≤K 1 (Q), there is no congestion, i.e., the generalized safety constraint is complied with.This assumption says that the function K 1 (Q) replaces the critical density.Its variations are studied below.Let u be such that The derivative of K 1 (Q) related to Q is Inverting this relationship, two density-flow relationships K free (q) and K congested (q) are obtained; although no analytic form is available, there is a (tedious) mathematical proof that the function ( ) ( ) K q K q 2 free congested + is decreasing.A graphical approach, with a sufficiently small resolution step, is good enough as well.This relationship has only two parameters; it reduces, after two changes of scale (in distance and in time), to the unique negative exponential curve V=exp(-K): -The unit of length, instead of 1km is set to K 1 crit km; in this new unit the critical density is equal to 1.In this unit, speeds must be multiplied by K crit .In particular, the free speed V f is replaced by Figure 1 - Figure 1 -Speed-flow FD for motorway A1: plotted empirical data corresponds to the third lane, January-June 2010 at least one node in the highest traffic flow class, even if the number of periods which constitutes this class is low.This can be avoided either by: -eliminating the highest flow class(es), -grouping the highest flow classes into a single class, -simplifying Equations 7, 8, 9, 12, 17 and 18 and, in the final step of the algorithm, not distinguishing the congested and non-congested speeds at q=q max .Remark 2. Solving the problem with an increasing (instead of decreasing) density threshold would lead to lessening the adjustment: the deviation D SPA would increase from 0.35 to 0.4 km/h.Remark 3. The same approach applies to estimating a density-flow relationship, which provides, in turn, a flow-density relationship.The roles of speed and density are inverted, and so are the roles of congestion and non-congestion. Figure 2 - Figure 2 -Speed-flow relationships with the SPA Figure 3 - Figure 3 -Speed-density relationship with the QOLC approach 1 f The unit of time, instead of 1 hour, is set to W hour; in this new unit the speeds are divided by ; W f in particular the free speed becomes W W 1 Table 1 - Deviation (in km/h) between the average empirical speed and the FD speed
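To make Appendix A concrete, the small sketch below evaluates the density interval [K_2(Q), K_1(Q)] within which the generalized safety constraint holds. The constraint form used here, stopping distance V·T + V²/(2C) ≤ a·H - L with H = 1/K and V = Q/K, is a reconstruction from the appendix rather than its verbatim formula; it is consistent with the numerical example given there (u ≈ 1.15 for T = 1 s, C = 6 m/s² and L = 4 m). Units are metres and seconds, so flows must be converted accordingly.

import math

def safety_density_bounds(q_veh_per_s, T=1.0, C=6.0, L=4.0, a=0.6):
    """Return (K2, K1) in vehicles per metre, or None when no density satisfies the constraint."""
    # Writing V = Q/K and H = 1/K, the constraint V*T + V**2/(2*C) <= a*H - L becomes
    # L*K**2 + (Q*T - a)*K + Q**2/(2*C) <= 0, which holds between the roots K2 <= K <= K1.
    A = L
    B = q_veh_per_s * T - a
    Cq = q_veh_per_s ** 2 / (2.0 * C)
    disc = B * B - 4.0 * A * Cq
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return (-B - root) / (2.0 * A), (-B + root) / (2.0 * A)

# Example: 900 veh/h = 0.25 veh/s gives roughly K2 = 19 veh/km and K1 = 68 veh/km
# (the returned values are in veh/m; multiply by 1,000 for veh/km).
bounds = safety_density_bounds(0.25)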
2019-05-12T13:54:36.173Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "bd4f052f1808567b1494ed1f59b0728a6ab20ad2", "oa_license": "CCBY", "oa_url": "https://traffic.fpz.hr/index.php/PROMTT/article/download/2849/561561748", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8c707b4265193f8fad6858c34e45e9ec484a1498", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
220942506
pes2o/s2orc
v3-fos-license
Authors' reply re: Pre‐eclampsia‐like syndrome induced by severe COVID‐19: a prospective observational study Sir, Thank you kindly for the opportunity to respond to the letter from Dr Amorim and her colleagues. We would like to thank Dr Amorim and her team for their interest in our study and their invaluable observations. We acknowledge that our series is small and heterogeneous. Our patients ranged from 20 to 37 weeks of pregnancy and only in one case did the pre-eclampsia-like (PE-like) syndrome resolve spontaneously and without being delivered. We, therefore, cannot affirm that all cases were PE-like syndrome, as we acknowledged in our article. Although early and late pre-eclampsia (PE) may not have the same pathological pathways, they do share the same diagnostic criteria. We agree with Dr Amorim and her colleagues that the soluble fms-like tyrosine kinase-1 to placental growth factor ratio (sFlt-1/PlGF) and mean uterine artery pulsatility index (UtAPI) are generally predictive of early forms of PE; nevertheless, their negative predictive value for excluding PE before 37 weeks of gestation is extremely high (>97% for sFlt-1/PlGF <38). For these reasons, we consider that there is no reason to believe that sFlt-1/PlGF is not a good tool to exclude the diagnosis of PE in the context of COVID-19. Dr Amorim proposes that preeclampsia may act as a risk factor for developing severe or critical COVID-19. We would recommend being cautious about this statement, as there is no evidence published to date supporting this hypothesis and, in our series, the timeline of signs and symptoms is clear: COVID-19 pneumonia occurred prior to features of pre-eclampsia. Nevertheless, we do agree with Dr Amorim and her colleagues that our study is a small series and further research is needed to better understand the relation between PE and COVID-19. For this reason, we are very much looking forward to finding out the results of Dr Amorim's study. Meanwhile, we believe that patients with signs and symptoms of PE in the context of severe COVID-19 should be managed with caution, as, in some cases, these signs and symptoms could be caused by COVID-19, and sFlt-1/PlGF might be helpful in the management of these pregnancies, especially in preterm cases.& References Sir, We thank Martin Hirsch and colleagues 1 for their interest in our study 2 and for pointing out a degree of mismatch between our reported findings and the information on the ISRCTN database. The primary outcome of pelvic pain was operationalised more specifically as cyclical pain in the trial, and unfortunately, the secondary outcome of dyspareunia was inadvertently omitted from those listed in the ISRCTN entry. However, we can confirm that the trial outcome variables did not alter during the course of the study and all of the outcomes recorded are reported in the final publication; no selective reporting occurred. We are very pleased that the core outcome set for endometriosis, which the design of our study predated, is now available. 3 The development of core outcomes plays a crucial role in establishing consensus on appropriate measures of treatment effectiveness and greatly assists comparison and synthesis and, where applicable, statistical pooling of trial results. 4,5 We note that although pain, quality of life, pregnancy and adverse events were recorded in our study, other core outcomes related to birth (e.g. gestational age, birthweight and neonatal mortality) and patient satisfaction were not recorded and therefore not reported.
Future trials in endometriosis will now be able to profit from the clear guidance provided by this set of core outcomes, and our understanding of the effective management of this condition will be enhanced accordingly.& Authors' reply Sir, Thank you kindly for the opportunity to respond to the letter from Dr Amorim and her colleagues. 1 We would like to thank Dr Amorim and her team for their interest in our study 2 and their invaluable observations. We acknowledge that our series is small and heterogeneous. Our patients ranged from 20 to 37 weeks of pregnancy and only in one case did the pre-eclampsia-like (PE-like) syndrome resolve spontaneously and without being delivered. We, therefore, cannot affirm that all cases were PE-like syndrome, as we acknowledged in our article. Although early and late pre-eclampsia (PE) may not have the same pathological pathways, they do share the same diagnostic criteria. 3 We agree with Dr Amorim and her colleagues that the soluble fms-like tyrosine kinase-1 to placental growth factor ratio (sFlt-1/ PlGF) and mean uterine artery pulsatility index (UtAPI) are generally predictive of early forms of PE; nevertheless, their negative predictive value before 37 weeks of gestation is extremely high (>97% for sFlt/PlGF <38) to exclude PE. 4,5 For these reasons, we consider that there is no reason to believe that sFlt-1/PlGF is not a good tool to exclude the diagnosis of PE in the context of COVID-19. Dr Amorim proposes that preeclampsia may act as a risk factor for developing severe or critical COVID-19. We would recommend being cautious about this statement, as there is no evidence published to date supporting this hypothesis and, in our series, the timeline of signs and symptoms is clear: COVID-19 pneumonia occurred prior to features of pre-eclampsia. Nevertheless, we do agree with Dr Amorim and her colleagues that our study is a small series and further research is needed to better understand the relation between PE and COVID-19. For this reason, we are very much looking forward to finding out the results of Dr Amorim's study. Meanwhile, we believe that patients with signs and symptoms of PE in the context of severe COVID-19 should be managed with caution, as, in some cases, these signs and symptoms could be caused by COVID-19 and sFlt-1/ PlGF might be helpful in the management of these pregnancies, especially in preterm cases.& (COVID-19) pneumonia. Six women presented signs and symptoms of preeclampsia and were assessed with uterine artery pulsatility index (UtAPI) and angiogenic factors (soluble fms-like tyrosine kinase-1/placental growth factor [sFlt-1/ PlGF]). Only one woman had abnormal sFlt-1/PlGF and UtAPI and symptom resolution occurred in two women who remained pregnant after recovery of pneumonia. The authors concluded that 'pregnant women with severe COVID-19 can develop a pre-eclampsia-like syndrome that might be distinguished from actual pre-eclampsia by sFlt-1/PlGF, LDH and UtAPI'. We believe that this conclusion deserves comments. First, although we agree that COVID-19 may mimic the inflammatory pattern observed in pre-eclampsia, once both diseases are thought to be accounted for by systemic inflammation, 2,3 this rationale may only explain the clinical course of the two women with disease resolution, but not the others. Second, not all pre-eclampsia cases are the same. 
Early-and late-onset pre-eclampsia have distinctive features, pathogenesis of these two situations differs and markers such as UtAPI and sFlt-1/ PlGF can be predictive of early but not lateonset pre-eclampsia. 4 Therefore, based on the available evidence, we do not believe that these markers can be used to rule out pre-eclampsia in the context of COVID-19 infection. The two women who recovered were in their second trimester (20 and 24 weeks of gestation) and presented with severe pneumonia, so they may have had a pre-eclampsia/HELLP (haemolysis, elevated liver enzymes and low platelet count) -like situation, associated with the COVID-19 inflammatory state and the intensive care unit interventions. The other women who were delivered cannot be ruled out as pre-eclampsia cases only by these markers and their gestational ages were significantly higher (28, 30, 36 and 37 weeks of gestation). The topic is remarkably interesting and must be addressed, but six women is still an exceedingly small sample from which to derive any robust conclusion on the matter. It is likely that both phenomena may occur in the clinical setting of obstetric patients at risk of COVID-19 infection: namely, COVID-19 mimicking pre-eclampsia, particularly in early pregnancy and already established preeclampsia acting as a risk factor for developing severe or critical COVID-19. These two separate clinical conditions need to be investigated when caring for women at risk of each one or both, based on clinical and epidemiological criteria. Adequate diagnostic tools to differentiate between them would be helpful, but to our knowledge UtAPI and sFlt-1/PlGF do not have scientific support or have not been thoroughly investigated to respond to this need. We are now collecting data within a cohort study that has already enrolled 181 Brazilian pregnant and postpartum women with confirmed severe acute respiratory syndrome coronavirus 2 infection and, so far, the association with hypertensive disorders of pregnancy seems remarkable. We hope that our data may improve the knowledge about the potential bidirectional relationship between pre-eclampsia and COVID-19. However, the question that remains to be clarified is who came first, the chicken or the egg. And the answer will probably be different in each woman.&
2020-08-04T13:01:31.553Z
2020-08-02T00:00:00.000
{ "year": 2020, "sha1": "d7fbb2b5eb3b39cda1d4b97e9d51da64d75cfae7", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7436541", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "6ce3f65606cebcd2c4b4123f30debe695797c36e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4685435
pes2o/s2orc
v3-fos-license
Epicardial Adipose Tissue Is Associated with Plaque Burden and Composition and Provides Incremental Value for the Prediction of Cardiac Outcome. A Clinical Cardiac Computed Tomography Angiography Study Objectives We sought to investigate the association of epicardial adipose tissue (eCAT) volume with plaque burden, circulating biomarkers and cardiac outcomes in patients with intermediate risk for coronary artery disease (CAD). Methods and Results 177 consecutive outpatients at intermediate risk for CAD and completed biomarker analysis including high-sensitive Troponin T (hs-TnT) and hs-CRP underwent 256-slice cardiac computed tomography angiography (CCTA) between June 2008 and October 2011. Patients with lumen narrowing ≥50% exhibited significantly higher eCAT volume than patients without any CAD or lumen narrowing <50% (median (interquartile range, IQR): 108 (73–167) cm3 vs. 119 (82–196) cm3, p = 0.04). Multivariate regression analysis demonstrated an independent association eCAT volume with plaque burden by number of lesions (R2 = 0.22, rpartial = 0.29, p = 0.026) and CAD severity by lumen narrowing (R2 = 0.22, rpartial = 0.23, p = 0.038) after adjustment for age, diabetes mellitus, hyperlidipemia, body-mass-index (BMI), hs-CRP and hs-TnT. Univariate Cox proportional hazards regression analysis identified a significant association for both increased eCAT volume and maximal lumen narrowing with all cardiac events. Multivariate Cox proportional hazards regression analysis revealed an independent association of increased eCAT volume with all cardiac events after adjustment for age, >3 risk factors, presence of CAD, hs-CRP and hs-TnT. Conclusion Epicardial adipose tissue volume is independently associated with plaque burden and maximum luminal narrowing by CCTA and may serve as an independent predictor for cardiac outcomes in patients at intermediate risk for CAD. Introduction Epicardial adipose tissue (eCAT) belongs to the endocrine active assemblage of visceral body fat with paracrine impact on the initiation and progression of coronary artery disease (CAD) [1][2][3][4]. Previous large cohort studies demonstrated that eCAT volume is associated with atherogenic risk factors, the presence of CAD and plaque burden [3,[5][6][7][8][9]. This observation is supported by the evidence of metabolic activity of eCAT as a source of several proatherogenic mediators, accompanied by paracrine or vasocrine mechanisms [10]. Furthermore, growing body of evidence suggests that elevated eCAT volume is independently associated with increased incidence of future myocardial infarction [11][12][13]. High-sensitive Troponin T (hs-TnT), on the other hand, is a sensitive biomarker of myocardial injury associated with highrisk coronary lesions and plaque burden and provides incremental value for the prediction of cardiac outcome in patients with both presumably stable CAD and preserved systolic left ventricular function [14][15][16][17]. Hs-CRP is a surrogate of inflammation associated with CAD and cardiac outcome [15,[17][18][19]. However, little evidence exists on the impact of eCAT volume on both cardiac troponins and hs-CRP, respectively. Cardiac computed tomography angiography (CCTA) enables for a simultaneous quantitative assessment of atherosclerotic plaque and eCAT volume [17,[20][21][22]. Recently, a strong association of eCAT volume with non-calcified plaque composition was reported [5,8,9]. 
However, to the best of our knowledge, the association of eCAT volume and quantitative plaque composition with biomarkers like hs-TnT and hs-CRP has not been reported so far. Herein, we therefore assessed the role of eCAT volume for coronary plaque burden by CCTA, its association with established biomarkers of myocardial injury (hs-TnT) and inflammation (hs-CRP), and investigated its prognostic value in presumably stable CAD patients. Study population A total of 1235 consecutive outpatients were scheduled for cardiac computed tomography angiography (CCTA) due to suspected or known coronary artery disease (CAD) between June 2008 and October 2011. CCTA was performed for clinical reasons according to the current guidelines [23]. All imaging was performed with a 256-detector row CT scanner (iCT; Philips Medical Systems, Best, the Netherlands) with a 2x128x0.625 mm detector configuration, as described previously [24]. Inclusion and exclusion criteria are provided online (S1 Appendix). The assessment of demographic and clinical characteristics is described online (S1 Appendix) and summarized in Table 1. We prospectively included 177 (14%) patients in our observational longitudinal single-center study who had a completed biomarker analysis for hs-TnT and hs-CRP (Fig 1). 25 patients were excluded due to the presence of one or more exclusion criteria, as listed online (S1 Appendix, Fig 1). An additional 13 patients were lost at follow-up, so that our final study population comprised 152 patients (87 men, mean age 64±10 years), and 139 patients with completed follow-up (Fig 1). Our study complied with the Declaration of Helsinki, was approved by our local ethics committee of the University of Heidelberg (S317/ 2008) and all patients gave written informed consent. Patient preparation and CCTA imaging protocols Patient preparation included the intravenous administration of 2.5-30.0 mg metoprolol (Lopresor 1 , Novartis, Pharma GmbH) if baseline heart rate was more than 60 beats per minute. All patients received 0.8 mg of sublingual glyceryl nitrate 5 minutes before the CT scan. During a single breath-hold, CCTA was performed with 65-80 ml (injection rate 6 ml/s) of nonionic contrast agent (Ultravist 1 370, Bayer Schering Pharma) followed by 30 ml (injection rate 5 ml/s) of saline that was administrated using an antecubital line. Imaging parameters were used as previously described [25] with n = 112 (71%) undergoing prospectively ECG triggered and n = 46 (29%) undergoing retrospectively ECG gated scans. Quantification of epicardial adipose tissue (eCAT) According to previous reports we performed all measurements with dedicated software (Extended Brilliance Workspace 4.0, Philips Healthcare). First, we identified the following anatomic boundaries for measurement of total eCAT volume: (i) upper boundaries: pulmonary artery bifurcation, the mid left atrium, and the aortic root, (ii) lower boundaries: the diaphragm and the left ventricular apex. Second, we defined the lower density threshold as -190 HU and the upper density threshold as -30 HU for subsequent 3D-segmentation [26]. 
Computer assisted evaluation of plaque volume, composition and luminal narrowing The methods used for evaluation of diagnostic image quality, visual plaque evaluation and quantitative assessment of Agatston score, luminal narrowing, coronary plaque volume and composition using the dedicated software (Extended Brilliance Workspace 4.0, Philips Medical Systems) have been previously established and described [17,20,27] and is provided online (S1 Appendix). Coronary CT angiograms and Agatston score were analyzed independently by two experienced readers (G.G. & G.K.) both with >5 years of experience in CCTA equivalent to the clinical competence statement training level 3 of the American College of Cardiology Foundation/American Heart Association (AHA) [28]. The per-patient fraction of non-calcified (FR non-calc.) or calcified (FR calc. ) plaque content in patients with at least one coronary plaque was calculated as follows: Agatston score For the assessment of coronary calcification prospective ECG-gated non-contrast scans were performed at 75% of the cardiac cycle, and using 120 kV tube voltage and 364 mA tube current, and resultant images with a 3 mm slice thickness were used for the calculation of the Agatston score. Follow-up and study endpoints Personnel who were unaware of the CCTA results contacted each subject or an immediate family member. The date of this contact was used for the calculation of the follow-up time duration. A standardized questionnaire was used to collect outcome data determined from patient interviews at the outpatient clinic or by telephone interviews. Reported clinical events were confirmed by review of the corresponding medical records in our electronic Hospital Information System, and contact with the general practitioner, referring cardiologist, or the treating hospital. The pre-specific endpoints of this study were cardiac death (sudden death due to arrhythmia, fatal myocardial infarction (MI) or intractable heart failure) and nonfatal MI. Further cardiac events included the occurrence of clinically indicated revascularization procedures by percutaneous coronary intervention (PCI) or coronary artery bypass graft surgery (CABG). MI was defined according to the European Society of Cardiology/American College of Cardiology Universal MI Definitions Committee, and for unstable angina, the Braunwald classification was used [29,30]. Since CCTA results may have triggered revascularization procedures, thereby altering the subsequent event rate, 'early' revascularization within 90 days of CCTA was not considered, and patients were censored at the time of early revascularization (n = 6). Biomarkers Blood samples were drawn from all patients before the CCTA scan. Analysis included both biomarkers hs-TnT and hs-CRP and routine laboratory parameter measurements. A detailed description of biomarker analysis is available online (S1 Appendix). Statistical analysis Statistical analyses were performed with use of MedCalc software (MedCalc 15.11.0, Ostend, Belgium). Categorical variables are presented as proportions (%). Continuous variables as mean ± standard deviation (SD) or median and interquartile range (IQR), as appropriate. Normality of data distribution was evaluated using Kolmogorov-Smirnov test. Since part of the continuous variables in Tables 1 and 2 [15,17]. For categorization of normal and high plaque volume we used a cut-off value of 19.6 mm 3 [17]. 
To account for non-normally distributed CCTA-based variables (for example total plaque volume) we performed Spearman's correlation analysis. For all other correlation analyses we calculated the Pearson correlation coefficient r, with p value. Multiple linear regression models were calculated to analyze the relationship between total plaque volume, calcium score, fraction of non- Lumen narrowing >70%, % 2 (5%) 5 (9%) 7 (13%) ns Positive remodeling, % 12 (24%) 11 (22%) 16 (30%) ns calcified plaque volume and the traditional risk factors, biomarkers and eCAT volume. Results are reported as the coefficient of determination R 2 as the proportion of the variation in the dependent variable (e.g. total plaque volume) and the partial correlation coefficient r partial as the coefficient of correlation of the tested variable with the dependent variable, adjusted for the effect of the other variables in the model. For survival analysis, Kaplan-Meier curves were generated to estimate the distribution of cardiac events as a function of the follow-up duration, depending on the presence or absence of elevated eCAT volume. Cox proportional-hazards univariate and multivariable regression analysis with Bonferroni adjustment for multiple comparisons was performed to identify predictors of all cardiac events (MI and cardiac death and late revascularization). Baseline variables that were considered clinically relevant (>3 risk factors for CAD, BMI, hs-CRP and hs-TnT) or that showed a univariate relationship with outcome were entered into the analysis. Results are presented as Hazard Ratios (HR) with the 95% confidence interval (95%CI) and the b-coefficient for multivariable analyses. In addition, we calculated the category-less net reclassification improvement (NRI) by using the "survIDINRI" software package (Revolution Analytics, Mountain View, California, USA). For reproducibility of eCAT volumes, we used the intra-class correlation coefficient (ICC) for intra-observer and inter-observer agreement and paired t-test for determining the significance of the mean absolute differences for repeated analysis of 40 randomly selected CCTA cases. The readings were separated by 8 weeks to minimize recall bias. A p value <0.05 was considered statistically significant. Associations between eCAT with traditional risk factors and biomarkers Univariate regression analysis demonstrated an association of eCAT volume with age, total number of atherogenic risk factors, BMI and the biomarkers hs-CRP and hs-TnT (Fig 2A-2E). In addition, significant correlations were observed between hs-TnT and total plaque volume ( Fig 2F) and between eCAT volume and serum lipid levels (S1 Fig). ECAT and CAD severity Patients with >1 plaque (n = 71, eCAT volume: 140±53 cm 3 ,) exhibited a significantly increased eCAT volume compared to patients without any plaque (n = 68, eCAT volume: 99±52 cm 3 ) and those with one plaque (n = 13, eCAT volume: 105±69 cm 3 ), respectively (p<0.05 for both, Fig 4A). Analysis by tertiles identified a significant association of eCAT volume with total plaque volume and maximum lumen narrowing ( Table 2, Fig 4B and 4C). In addition, eCAT volume was independently associated with presence of CAD (any plaque or luminal narrowing, R 2 = 0.11, r partial = 0.21, p = 0.026), plaque burden (by number of lesions: R 2 = 0.22, r partial = 0.29, p = 0.006) and CAD severity (by maximum lumen narrowing: R 2 = 0.22, r partial = 0.23, p = 0.038) after adjustment for age, diabetes mellitus, hyperlidipemia, BMI, hs-CRP and hs-TnT. 
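For readers who wish to reproduce this kind of survival analysis in outline, the sketch below fits a multivariable Cox proportional-hazards model with covariates analogous to those named above. It is illustrative only: the dataframe, file name and column names are placeholders and do not correspond to the study's actual dataset or software (the study used MedCalc and the survIDINRI package).

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")   # hypothetical file; one row per patient

covariates = ["ecat_volume_cm3", "age", "risk_factors_gt3",
              "cad_present", "hs_crp", "hs_tnt"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_months", "cardiac_event"]],
        duration_col="followup_months", event_col="cardiac_event")
cph.print_summary()              # hazard ratios with 95% confidence intervals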
Patients with cardiac events exhibited higher eCAT volumes than patients without cardiac events (156.6±58.2 cm 3 vs. 121.5±49.1 cm 3 , p = 0.03) (Fig 5A). Using univariate Cox proportional hazards regression analysis, significant associations were observed for both increased eCAT volume and maximal lumen narrowing in CCTA with all cardiac events (Table 3). Using multivariate Cox regression analysis, increased eCAT volume was independently associated with all cardiac events. However, when maximum lumen narrowing was additionally considered in the model, increased eCAT volume was no longer predictive (Table 4). Observer agreement and variabilities and time-spent The threshold-based eCAT volume assessment provided good intra-observer and inter-observer ICC of 0.9923 (95%CI 0.9785 to 0.9973) and 0.9996 (95%CI 0.9851 to 1.0000), respectively. Quantitative assessment required a mean interpretation time of 4.5±1.1min and 3.9±3.1 min per patient for eCAT volume and plaque characterization, respectively. Discussion In the present study we demonstrate a significant association of elevated epicardial adipose tissue (eCAT) volume with increased coronary atherosclerotic plaque burden, hs-TnT and hs-CRP. ECAT volume provided incremental prognostic value to traditional risk factors, presence of coronary artery disease (CAD), hs-CRP and hs-TnT in patients with presumably stable CAD. These findings may indicate an additional potential paracrine impact of eCAT on coronary plaque vulnerability that is different from accepted molecular trigger of atherosclerosis inception and progression. ECAT and coronary plaque burden and composition Among visceral fat, eCAT represents a unique sub-compartment first due to its close proximity to the heart muscle and the coronary arteries, and second due to its inflammatory activity [31]. In this context, paracrine and vasocrine effects of inflammatory cytokines from eCAT may promote atherogenesis and lead to elevated risk of adverse coronary events [11] (S3 Fig). Several investigations have described a significant association of eCAT volume with the presence of CAD and coronary plaque burden, which is in part explained by the strong link between eCAT and atherogenic risk factors [3,5,8,11]. In this line, we demonstrated an independent association of eCAT volume with the presence and severity of CAD after the adjustment for age, diabetes mellitus, hyperlipidemia, body-mass-index (BMI), hs-CRP and hs-TnT. Using quantitative plaque assessment, we demonstrated an independent association of elevated eCAT volume with total plaque volume, total number of plaques and coronary lumen narrowing. Patients suffering from relevant CAD exhibited the highest eCAT volumes. In the past years several studies demonstrated a close association of eCAT volume with clinical parameters such as BMI and atherogenic risk factors [3,12]. We detected an inverse correlation of eCAT volume with HDL-cholesterol, while serum levels of triglycerides were positively related to eCAT volume, which is in agreement with prior results [32]. The impact of atherogenic risk factors on plaque composition was assessed in several largescale clinical studies [15,17,27]. Furthermore, a strong association of eCAT volume with noncalcified plaque components was previously reported [5,8,9,22]. 
In the present study we demonstrated a significant BMI-independent correlation of elevated eCAT volume with total plaque volume, fraction of non-calcified plaque volume and total calcium score, which underscores the suggested association of eCAT volume with calcific and non-calcific plaque burden [3,13]. ECAT, biomarkers and cardiac outcomes Results from basic and clinical research propose that a mismatch of several pro-and antiinflammatory cytokines and mediators secreted from the eCAT may locally impact on atherogenesis in the underlying coronary arteries [31,33,34]. Our reported results demonstrate, that patients with augmented hs-CRP reveal higher eCAT volumes independent of BMI, which may be due to the pro-inflammatory endocrine activity of eCAT volume. Of interest, we also identified an association with small increases of hs-TnT, which is an established biomarker for myocardial micro-injury [17,35]. As with other prior investigations, we identified a significant association of minor increases of cardiac troponin T with vulnerable plaque characteristic as assessed by CCTA in patients with presumably stable CAD, which is possibly caused by silent plaque rupture, micro-embolization and microvascular obstruction, which may precede the clinical manifestation of myocardial infarction [15,17,35,36]. The present results affirm that hs-TnT correlates with coronary plaque burden as assessed by total plaque volume and calcium scoring. Recently, a report from the Heinz Nixdorf Recall Study reinforced the hypothesis that elevated eCAT volume drives disease progression predominantly in early stages of atherosclerosis [37]. In this line, our results give further evidence that eCAT volume is not only a bystander, but may be a key player for plaque progression and formation of vulnerable coronary lesions above and beyond the traditional mechanisms of plaque progression. Several investigations have demonstrated that eCAT volume is associated with incident cardiovascular events [6,13,38]. In our study, patients with elevated eCAT volume exhibited an increased risk for future cardiac events. Using a series of hierarchical Cox proportional-hazards regression models we demonstrated an incremental value of elevated eCAT volume to age, atherogenic risk factors, presence of CAD, hs-CRP and hs-TnT for the prediction of all cardiac events. However, when maximum luminal narrowing was considered in the model, increased eCAT volume was no longer predictive. Therefore, our results contribute to an expanding body of evidence for the role of eCAT volume in destabilization of vulnerable lesions, resulting in a higher incidence of cardiovascular events. Limitations The strength of our study is the unique complementary assessment of quantitative CCTAbased plaque characteristics and eCAT volume in conjunction with biomarkers for inflammation (hs-CRP) and myocardial micro-injury (hs-TnT). However, the major limitation of the presented study is the relatively small number of patients and cardiac endpoints. Second, no mechanistic data on paracrine or vasocrine inflammatory effects of eCAT on coronary plaque composition were assessed. Especially, the clinical significance of the weakly correlated eCAT volume with biomarkers, and its association with plaque burden and lumen narrowing needs to be investigated in future large-scale clinical trials to reinforce our findings. Finally, lipid serum assessments were accessible in only 55% of the study population. 
Conclusions Epicardial adipose tissue (eCAT) volume is independently associated with atherosclerotic plaque burden and CAD severity as assessed by cardiac computed tomography angiography (CCTA), as well as with hs-TnT as a biomarker of myocardial micro-injury. Elevated eCAT volume may provide incremental predictive value for future cardiac events in patients at intermediate risk for coronary artery disease.
Severe hemolytic disease of the newborn caused by JKb antibody: Two case reports and literature review Background: JKb antibody rarely causes severe hemolytic disease in the newborn except in 1 case, required blood exchange transfusion but later died of intractable seizure and renal failure. Here we describe 2 cases of JKb-induced severe neonatal jaundice requiring blood exchange transfusion with good neurological outcome. Case presentation: Two female Chinese, ethnic Han, term infants with severe jaundice were transferred to us at the age of 5- and 4-day with a total bilirubin of 30.9 and 25.9 mg/dL while reticulocyte counts were 3.2% and 2.2%, respectively. Both infants were not the firstborn to their corresponding mothers. Direct and indirect Coombs’ tests were positive, and JKb antibody titers were 1:64 (+) for both mothers. Phototherapy was immediately administered, and a blood exchange transfusion was performed within 5 hours of admission. Magnet resonance image showed no evidence of bilirubin-induced brain damage, and no abnormal neurological finding was detected at 6 months of life. Conclusion: JKb antibody-induced hemolytic disease of the newborn usually leads to a benign course, but severe jaundice requiring blood exchange transfusion may occur. Our cases suggest good outcomes can be achieved in this minor blood group-induced hemolytic disease of the newborn if identified and managed early enough. Introduction Blood type incompatibility is a common cause of hemolytic disease in the newborn (HDN) but is rarely caused by minor blood groups. However, severe HDN secondary to minor blood group incompatibility that led to fatality has been reported, including Duffy, Kell, Kidd, etc. The Kidd (JK) glycoprotein is a urea transporter of the red blood cell. [1] JK antibody can cause HDN, usually benign without complication. [2] There was a total of 12 cases of JK b -HDN reported in English, [2][3][4][5][6][7][8][9][10][11][12][13] including one required blood exchange transfusion (BET) [3] and one with intrauterine fetal demise. [4] The case reported by Kim died of renal failure and intractable seizure after a successful BET. [3] Here, we present 2 severe JK b -HDN successfully managed by phototherapy and BET with normal neurodevelopmental outcomes. Case 1 A 5-day-old term female infant of Han ethnicity was born to a G 2 P 2 mother at 40 weeks of gestation, vaginally, with a birthweight of 3500 g and Apgar scores of 10 and 10. Her weight was 3520 g upon admission. She was fed with both breast milk and formula. The first bowel movement happened at 4 hours of birth. No hematoma was noted, and no family history of neonatal jaundice. The mother was never exposed to blood transfusion and denied any autoimmune disease. There was no ABO or Rh incompatibility. Her 21-month-old sister was in good health. The admission hemogram showed Hb 126 g/L, Hct 35%, and a reticulocyte count of 2.2%. A blood smear showed anisocytosis, microspherocytosis, and elliptocytosis. The total and direct bilirubin levels were 309 and 19 mg/L, respectively. Coombs' tests were positive. Anti-JK b antibody (Sanquin Reagents B.V., Medicine Amsterdam, Netherlands) was 1:64 (+) in maternal blood. The level of G-6-P-D was normal, and no evidence of urinary tract infection. No intracranial hemorrhage was found by cranial ultrasound. Thyroid function was normal. Intensive double phototherapy was initiated immediately after admission. 
Type O (+) JK b (−) packed red blood cells and type AB (+) plasma were mixed for BET performed 5 hours after admission. The total and direct bilirubin levels were 13.9 and 1.0 mg/dL immediately after BET with Hb 129 g/L, Hct 39%, and a reticulocyte count of 1.7%. Phototherapy was continued for another day after the BET. On the 3rd day of admission, the brain MRI showed no obvious abnormality, and she passed the hearing screening. She was discharged at 11 days of life with Hb 122 g/L, Hct 37%, and a reticulocyte count of 1.5%. Neurological examination at 6-month-old showed no abnormal finding. Case 2 A 4-day-old term female infant of Han ethnicity was born G3P2 mother at 37 weeks of gestation via Cesarean section. The birth weight was 3200 g. Her admitting weight was 3150 g. Apgar scores were 8 and 10, respectively. She was visually jaundiced at the 36th hour of birth. She had normal activity without a highpitch cry. She was fed with breast milk and formula. The first bowel movement was at 10 hours of life. No hematoma was seen, and no family history of neonatal jaundice. The mother was never exposed to blood transfusion and denied autoimmune disease but did receive instrumental abortion for her second pregnancy. No evidence of ABO or Rh incompatibility. Her older brother, 4 years old, was in good health without neonatal jaundice. Upon admission, the hemogram showed Hb 139 g/L, Hct 36%, and a reticulocyte count of 3.2%. A blood smear showed anisocytosis, elliptocytosis, fragmented red blood cells, and polychromasia. The total and direct bilirubin levels were 259 and 9 mg/L, respectively. Coombs tests were positive. Anti-JK b was 1:64(+) in maternal blood. The level of G-6-P-D and urinalysis were normal with sterile urine. No intracranial hemorrhage was found by cranial ultrasound. Thyroid function was normal. The patient was given double phototherapy immediately after admission. BET was performed 4 hours after admission with a mixture of Type O(+) JK b (−) packed red blood cells and type AB (+) plasma. The immediate post-exchange transfusion total and direct bilirubin levels were 106 and 6 mg/L, respectively. Hemogram after exchange transfusion showed Hb 137 g/L, Hct 39%, and a reticulocyte count of 1.5%. On the 7th day of birth, her brain MRI showed no abnormality, and she passed the hearing screening. She was discharged at 9 days old with Hb 133 g/L, Hct 38%, and a reticulocyte count of 1.4%. Neurological examination at 6-month-old showed no abnormal finding. Discussion ABO and Rh-incompatibility are the most common cause of HDN and should always be considered in neonates with severe jaundice. Rh-HDN is more severe than ABO-HDN before the introduction of Rhogam. [14] HDN due to other minor blood groups were subsequently identified, such as Kell, Duffy, Kidd (JK), and MN antigens. [15] Minor blood group HDNs need a sensitized mother, so they are more commonly seen in nonfirst-born infants. With the recent abolition of the Chinese Population Policy, we have started to experience more HDN caused by minor blood group incompatibility. [14] Kidd blood group is an antigen in human erythrocytes, which consists of 2 specific genes (JK a or JK b ). Kidd antigen is a 43 kDa urea transporter on the erythrocyte membrane, and its deletion is compatible with life with limited urea concentrating ability of the affected kidneys . [1] Among Asians, JK a and JK b account for 49%, similar to other ethnic groups. [1] The reported gene frequency in Chinese Han population is 48.4% for JK a and 51.6% for JK b . 
[16] JK a and JK b have very weak immunity and rarely elicit an immune reaction. There are 2 types of antibodies produced against Kidd antigen, IgG and IgM, but only the IgG isotype will cause HDN. The mechanism by which high titers of JK a and anti JK b antibodies are generated remains unclear. In 1953, Plaut et al [17] described the JK b antibody. Most reported JK b incompatibility occurred in adults after repeated transfusion. [18] Allen was the first to identify an antibody in the maternal blood against the JK a of a newborn with HDN. The blood group, Kidd, was hence named after that mother's maiden name. [19] Kornstad and Halvorsen [5] reported the first case of JK b -HDN in 1958. Presently only 12 cases of JK b -HDN have been reported in English literature (Table 1). [2][3][4][5][6][7][8][9][10][11][12][13] Most of the JK b -HDN (12/14, 85.7%) had a very mild course and required no more than phototherapy. There were 2 fatal cases (14.3%); one died of renal failure and intractable convulsions despite BET and phototherapy, [3] while the other ended with intrauterine fetal demise at 25 weeks of gestation. [4] Two patients (14.3%) received simple blood transfusions, and 3 (21.4%) received blood exchange transfusions. Our patients received BET according to the AAP guidelines. Brain MRI and BAEP were normal at the time of discharge. Neurological examination in both cases was normal after 6 months. Phototherapy and BET are the standard measures for severe HDN. To establish the true cause of HDN, when ABO or Rh incompatibilities can be excluded, we need to consider other minor blood group antibodies. Due to their low antigenicity, most minor blood group HDNs occur either not as the first child, or the mothers have previous exposure to transfusion or abortion. Antenatal ultrasound can detect fetal anemia or hydrops caused by severe hemolysis, and intrauterine transfusion may be offered. [20] An intimate collaboration between the Table 1 Summary of the clinical and laboratory data from the published cases of hemolytic disease of newborns due to anti-Jk b . perinatologist and neonatologist is needed to care for such a situation. Although most JK a /JK b HDN have a benign clinical course, the potentially fatal outcome cannot be ignored. About 50% of pregnant Chinese women are JK b positive, which gives them roughly 25% chance of having JK b incompatibility. Luckily, severe JK b -HDN requiring aggressive management is extremely rare. This implies that universal detection of JK a /JK b antibodies, or other minor blood group antibodies, will be clinically unnecessary. Universal screening of neonatal jaundice, either by blood test or transcutaneous method, will be very important to prevent bilirubin induce brain damage. Severe bilirubin induce brain damage such as kernicterus has a devastating consequence for the victims, their families, and society. We want to advocate a national policy to implement measures to prevent kernicterus which is especially important after the discontinuation of our national population policy since we do expect severe HDN due to minor blood groups will become more frequent. Fortunately, with intensive phototherapy followed by successfully performed BET we obtained good outcomes for 2 severe JK b -HDN. We want to share our experience with our peers to draw attention to the adequate management of severe HDN regardless of the etiology.
The Host E3-Ubiquitin Ligase TRIM28 Impedes Viral Protein GP4 Ubiquitination and Promotes PRRSV Replication Porcine reproductive and respiratory syndrome (PRRS), caused by the PRRS virus (PRRSV), is a highly pathogenic porcine virus that brings tremendous economic losses to the global swine industry. PRRSVs have evolved multiple elegant strategies to manipulate the host proteins and circumvent against the antiviral responses to establish infection. Therefore, the identification of virus–host interactions is critical for understanding the pathogenesis of PRRSVs. Tripartite motif protein 28 (TRIM28) is a transcriptional co-repressor involved in the regulation of viral and cellular transcriptional programs; however, its precise role in regulating PRRSV infection remains unknown. In this study, we found that the mRNA and protein levels of TRIM28 were up-regulated in PRRSV-infected porcine alveolar macrophages (PAMs) and MARC-145 cells. Ectopic TRIM28 expression dramatically increased viral yields, whereas the siRNA-mediated knockdown of TRIM28 significantly inhibited PRRSV replication. Furthermore, we used a co-immunoprecipitation (co-IP) assay to demonstrate that TRIM28 interacted with envelope glycoprotein 4 (GP4) among PRRSV viral proteins. Intriguingly, TRIM28 inhibited the degradation of PRRSV GP4 by impeding its ubiquitination. Taken together, our work provides evidence that the host E3-ubiquitin ligase TRIM28 suppresses GP4 ubiquitination and is important for efficient virus replication. Therefore, our study identifies a new host factor, TRIM28, as a potential target in the development of anti-viral drugs against PRRSV. Introduction The porcine reproductive and respiratory syndrome virus (PRRSV), which is a highly contagious pathogen that causes reproductive disorders and severe dyspnea in pigs, has been regarded as a persistent challenge for the swine industry globally [1]. The PRRSV has been recently classified as PRRSV-1 (species Betaarterivirus suid 1) and PRRSV-2 (species Betaarterivirus suid 2) [2]. The genome of the PRRSV is approximately 15.4 kb with more than eleven open reading frames (ORFs), encoding eight structural proteins (GP2a, GP2b, GP3, GP4, GP5, GP5a, M, and N) and at least sixteen non-structural proteins (NSP1α, NSP1β, NSP2-6, NSP-2N, NSP-2TF, NSP7α, NSP7β, and NSP8-12) [3,4]. The structural envelope glycoprotein 4 (GP4) plays a crucial role in generating infectious PRRSVs [5]. Previous research has identified that GP4 contributes to inducing protective immune responses [6][7][8]. More importantly, GP4 co-localizes with cluster of differentiation 163 (CD163), which is a major receptor of PRRSV attachment, thus mediating the virus entry process [9]. Specifically, the GP2a, GP3, GP4, and GP5 proteins form a heterotetrameric complex that is required to transport these proteins from the endoplasmic reticulum (ER) to the Golgi apparatus in each infected cell prior to virion assembly [10]. It has not been explored that the viral structural proteins utilize the interaction between host factors and viral proteins to facilitate PRRSV replication. The ubiquitination of proteins is a posttranslational modification (PTMs) process with many cellular functions including the regulation of virus replication [11]. Tripartite motif (TRIM) proteins are a large family of E3 ubiquitin ligases that are implicated in multiple biological processes ranging from transcriptional regulation to posttranslational modification [12,13]. 
TRIM proteins have been shown to mediate the transfer of ubiquitin to target proteins, especially a number of viral proteins identified as the substrates of TRIM proteins during virus infection [14]. Many TRIM proteins are known to inhibit viral replication [15,16]; however, very few examples exist of TRIM proteins being exploited by viruses to promote virus replication [17,18]. TRIM28 (also known as Kruppel-associated box-associated protein 1 (KAP1), or transcription intermediary factor 1β (TIF1β)) belongs to a subset of TRIM proteins called the transcription intermediary factor 1 (TIF1) sub-family. TRIM28 is characterized by a conserved N-terminal architecture consisting of a Really Interesting New Gene (RING) E3 ubiquitin ligase domain (R), two B-box domains (B) involved in higher-order oligomerization, and one coiled-coil (CC) domain required for dimerization, collectively known as the RBCC domain. Its C-terminus contains a plant homeodomain (PHD) involved in an intramolecular small ubiquitin-related modifier (SUMO) E3 ligase and a bromodomain (BR), and the SUMOylation of the PHD-BR is required for TRIM28's repressive activity [19]. This RBCC-PHD-BD structure is a characteristic only shared by the three other TRIM-family members: TRIM24/TIF1α, TRIM33/TIF1γ, and TRIM66/TIF1δ. All four proteins have been known for their function as transcriptional regulators [20,21]. Recent reports have demonstrated that TRIM28 could regulate protein posttranslational modification and is involved in the process of viral infection [22]. However, its effects on the posttranslational modification of viral proteins have not been elucidated. In this report, we identified that a host interactor, TRIM28, directly targets PRRSV viral protein GP4 and inhibits its ubiquitination, which protects GP4 protein from degradation and promotes PRRSV replication. Therefore, our data suggest that the TRIM28-mediated inhibition of viral protein ubiquitination may represent an escape mechanism by which the virus utilizes host factors to facilitate viral protein stabilization and expression. TRIM28 Is Induced by PRRSV Infection To evaluate how TRIM28 responds to PRRSV infection, we first measured TRIM28 expression in porcine alveolar macrophages (PAMs) infected with PRRSVs at an MOI of 1 for 0, 12, 24, 36, or 48 h. The mRNA and protein levels of TRIM28 were dramatically induced during PRRSV infection compared with those in uninfected cells ( Figure 1A-C). In agreement with our observations in PAMs, we found that PRRSV infection greatly increased the mRNA and protein levels of TRIM28 in MARC-145 cells ( Figure 1D-F). Taken together, these data suggest that PRRSV infection could up-regulate TRIM28 expression. TRIM28 Overexpression Facilitates PRRSV Replication To explore whether TRIM28 could affect PRRSV replication, PRRSV infection assays were performed in MARC-145 cells transfected with overexpressions of TRIM28 expression construct. PRRSV infection was examined by using Western blotting and immunofluorescence assay (IFA) analysis using Abs against PRRSV nucleocapsid protein N. PRRSV RNA levels were analyzed using RT-qPCRs with specific primers detecting ORF7. The amounts of PRRSV production were measured by using TCID 50 . The results showed that ectopically expressed TRIM28 dramatically increased not only PRRSV infection but also the abundance of viral RNAs (Figure 2A), viral N protein ( Figure 2B,C), and virus titers ( Figure 2D). Together, these data indicate that overexpressed TRIM28 could promote PRRSV replication. 
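Virus yields in these experiments are reported as TCID50 titers, determined with the endpoint dilution method of Reed and Muench referenced in the Materials and Methods. The sketch below illustrates that calculation with entirely hypothetical well counts (eight wells per ten-fold dilution) and an assumed 0.1 mL inoculum per well; it does not reproduce the study's actual titration data.

```python
# Reed-Muench TCID50 endpoint calculation with hypothetical data:
# ten-fold serial dilutions 10^-1 ... 10^-8, eight wells per dilution,
# 'positive' = number of wells showing cytopathic effect at each dilution.
dilution_exponents = [1, 2, 3, 4, 5, 6, 7, 8]
positive = [8, 8, 8, 6, 3, 1, 0, 0]
negative = [8 - p for p in positive]

# Cumulative positives are summed from the most dilute end upward,
# cumulative negatives from the most concentrated end downward.
cum_pos = [sum(positive[i:]) for i in range(len(positive))]
cum_neg = [sum(negative[:i + 1]) for i in range(len(negative))]
pct_infected = [100.0 * p / (p + n) for p, n in zip(cum_pos, cum_neg)]

# Interpolate between the two dilutions bracketing 50% infected wells.
i = max(idx for idx, pct in enumerate(pct_infected) if pct >= 50)
prop_dist = (pct_infected[i] - 50) / (pct_infected[i] - pct_infected[i + 1])
log_endpoint = -(dilution_exponents[i] + prop_dist)  # log10 of the 50% endpoint dilution

inoculum_ml = 0.1  # assumed inoculum volume per well
titer_per_ml = 10 ** (-log_endpoint) / inoculum_ml
print(f"50% endpoint dilution: 10^{log_endpoint:.2f}")
print(f"Titer: {titer_per_ml:.2e} TCID50/mL")
```

With the invented counts above, the endpoint falls at roughly the 10^-4.7 dilution, giving a titer on the order of 10^5.7 TCID50/mL.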
TRIM28 Knockdown Inhibits PRRSV Replication To further investigate the function of TRIM28 in PRRSV infection, small interfering RNA (siRNA)-knockdown experiments were performed. Synthesized siRNA targeting TRIM28 was used to suppress endogenous TRIM28 expression in MARC-145 cells (Figure 3A,B). The siRNA-mediated knockdown of TRIM28 expression significantly reduced not only PRRSV infection but also the abundance of viral RNAs (Figure 3C), viral N protein (Figure 3D,E), and virus titers (Figure 3F). Collectively, these results demonstrate that TRIM28 knockdown could suppress PRRSV infection.

TRIM28 Targets PRRSV GP4 Our results above confirmed that TRIM28 plays an important role in PRRSV replication. Recent studies have shown that TRIM proteins are involved in the regulation of viral infection by targeting viral proteins [16][17][18]. We therefore investigated whether TRIM28 interacts with PRRSV viral proteins. Recombinant expression vectors of TRIM28 and PRRSV protein were transfected into HEK293T cells, and co-immunoprecipitation (co-IP) analysis indicated that TRIM28 was associated clearly with GP4 and weakly with GP3 and GP5, but not with M or N (Figure 4A).
To confirm the interaction of TRIM28 with PRRSV GP4, the Flag-GP4 expression plasmid was transfected into MARC-145 cells. An anti-Flag antibody was used for co-IP analysis, and an anti-TRIM28 antibody was used to detect endogenous TRIM28 by means of Western blotting. As shown in Figure 4B, ectopically expressed GP4 co-precipitated endogenous TRIM28. Subsequently, we performed a confocal microscopy analysis to investigate whether TRIM28 and GP4 co-localize at similar subcellular positions. The results showed that TRIM28 was localized in both the nucleus and cytoplasm, whereas PRRSV GP4 was mostly distributed in the cytoplasm, and TRIM28 and GP4 were remarkably co-localized in the perikaryon (Figure 4C). Taken together, these data suggest that TRIM28 could interact with PRRSV GP4.

TRIM28 Inhibits the Degradation of PRRSV GP4 by Impeding Its Ubiquitination Our results above indicate that TRIM28 interacts with PRRSV GP4. Since TRIM28 is a member of the TRIM family, which has E3 ubiquitin ligase activity, we speculated that TRIM28 may affect the expression level of PRRSV GP4. To test this hypothesis, the TRIM28 and GP4 expression plasmids were co-transfected into HEK293T cells.
As shown in Figure 5A, the co-expression of gradually increasing amounts of TRIM28 increased the protein expression level of GP4 in a dose-dependent manner. Additionally, we performed a time course experiment to monitor Flag-GP4 degradation in the presence of cycloheximide (CHX) to inhibit protein synthesis. The overexpression of TRIM28 significantly slowed the degradation of GP4 (Figure 5B). These results demonstrate that TRIM28 stabilizes the GP4 protein. TRIM28 is composed of a RING finger domain, B-box domain, coiled-coil domain, PHD domain, and BR domain. To determine which domain of TRIM28 is essential for promoting GP4 expression, vectors expressing the domain-truncated TRIM28 mutants TRIM28 (RING), TRIM28 (BBOX + CC), and TRIM28 (PHD + BR) were constructed, and each TRIM28 mutant was co-transfected into HEK293T cells together with Flag-GP4. As shown in Figure 5C, the truncation of the BBOX + CC domain of TRIM28 significantly up-regulated GP4 expression, indicating that the BBOX + CC domain of TRIM28 plays an essential role in increasing GP4 expression. We next sought to explore the mechanism of GP4 stabilization by TRIM28. TRIM28 functions as an E3 ligase, so we performed a ubiquitination assay in which HEK293T cells were transfected with HA-ubiquitin and Flag-GP4 in the absence or presence of TRIM28. As shown in Figure 5D, the poly-ubiquitination of PRRSV GP4 was significantly inhibited by TRIM28. To further determine which type of lysine-linked ubiquitination of PRRSV GP4 is suppressed by TRIM28, we used two ubiquitin mutants, K48 and K63, as substrates of ubiquitination. TRIM28 inhibited both the total and the K63-linked ubiquitination of PRRSV GP4, whereas it had no effect on the K48-linked ubiquitination of PRRSV GP4 (Figure 5E). These results suggest that TRIM28 selectively attenuates the K63-linked ubiquitination of GP4 and its degradation.

Discussion Presently, the majority of research papers on TRIMs have concentrated on how they operate as antiviral agents, either directly limiting viral replication or indirectly eliciting an antiviral innate immune response. However, it is unclear whether TRIMs can serve as "pro-viral" factors: that is, host components necessary for virus replication. According to several studies, some viral antagonists can exploit TRIMs to initiate their IFN antagonist action (TRIM23, for instance, ubiquitinates YFV-NS5 to inhibit STAT2 function [23]), but these are unintended consequences that provide viruses with an advantage by lowering host antiviral responses. It would not be surprising if TRIMs were involved in directly encouraging virus replication via the non-degradative ubiquitination of viral proteins, given that the ubiquitination of viral proteins may positively affect particular steps of the replication cycle.
Most viruses, including the PRRSV, interact with host proteins and utilize them to escape the antiviral response and fulfill virus replication and persistent infection. Our group discovered that TRIM28 was possibly involved in the regulation of PRRSV infection by using mass spectrometry. In the present study, our findings suggest that TRIM28 is a host factor targeted by PRRSV viral protein GP4 and that it directly enhances virus replication via inhibiting the K63-linked ubiquitination of GP4. To verify whether TRIM28 is an effective target for PRRSV infection, we examined the possible correlation between TRIM28 expression and PRRSV infection progression. In this study, PRRSV infection induced TRIM28 gene expression in porcine alveolar macrophages (PAMs) and MARC-145 cells, indicating that PRRSVs might manipulate the host protein TRIM28 to facilitate their propagation. Nevertheless, the involved pathways, as well as the underlying mechanisms regulating TRIM28 expression during viral infection, have not yet been resolved. Previous studies have discovered that TRIM28 posttranslational modification (such as via phosphorylation and SUMOylation) is dramatically altered during virus infection, including the human adenovirus (HAdV) [24], influenza virus [25], human cytomegalovirus (HCMV) [26], kaposi's sarcoma-associated herpesvirus (KSHV) [27], and merkel cell polyomavirus (MCPyV) [28]. It should be further explored whether PRRSV infection causes changes in the posttranscriptional modification of TRIM28. Functionally, TRIM28 is frequently described as an important scaffold protein concentrated in gene promoter regions to restrict transcription [29,30]. This detrimental role of TRIM28 in gene transcription has significant consequences for viral transcription and replication. TRIM28 has previously been shown to repress viral transcription for several herpesviruses including the KSHV, HCMV, and Epstein-Barr virus (EBV) [26,31,32]. TRIM28 has also been shown to inhibit HIV-1 replication by binding the acetylated HIV-1 integrase and preventing the integration of pro-viral DNA [33] and to mediate the transcription suppression of the HIV-1 LTR promoter [34,35]. According to one recent study, TRIM28 inhibits Tas-dependent transactivation activity in prototype foamy virus (PFV) promoters, which limits PFV transcription and replication [36]. In this study, we found that ectopically expressed TRIM28 facilitated PRRSV replication while TRIM28 deficiency inhibited PRRSV replication, indicating that TRIM28 could promote PRRSV replication. These results indicated that TRIM28 may be a critical positive regulator for PRRSV infection. Therefore, we speculated that the role of TRIM28 in regulating PRRSV infection may be independent of its transcriptional inhibition. TRIM proteins have previously been reported to directly regulate viral proteins during viral infections. The PRRSV minor structural proteins GP2, GP3, and GP4 form noncovalent heterotrimers in the virion. The PRRSV GP4 protein is an important component in the formation of the viral replication complex, which is required for replication. As a result, research on the effects of PRRSV GP4 will reveal insights into the mechanistic replication of the PRRSV. Our results showed that TRIM28 interacted with the PRRSV viral protein GP4. Our further study confirmed that TRIM28 inhibited the K63-linked poly-ubiquitination of PRRSV GP4 and enhanced its protein expression. 
To our knowledge, this was the first time TRIM28 was identified as being involved in the stabilization of PRRSV viral protein expression. Multiple previous studies have shown that TRIM28 enhances the stability of substrate protein via SUMOylation [37][38][39][40]. A recent study has revealed that the E3 SUMO ligase TRIM28 facilitates the SUMO1 and SUMO2/3-catalyzed SUMOylation of NLRP3, whereby it attenuates the K48-linked ubiquitination of NLRP3, resulting in the enhancement of NLRP3 stability [41]. We did not test the SUMOylation of PRRSV GP4 by TRIM28, which was a shortcoming of this study. We speculated that SUMOylation mediated by TRIM28 may block the ubiquitination site of PRRSV GP4 and thus inhibit PRRSV GP4 degradation. Further analysis of the SUMOylation and ubiquitination sites of PRRSV GP4 will help elucidate the underlying mechanisms. The PRRSV is a highly pathogenic porcine virus that causes significant economic losses in the global swine industry. In order to establish infection, the PRRSV has evolved numerous sophisticated methods to influence host proteins and evade antiviral responses. Our results showed that TRIM28 interacted with GP4 and reduced its K63-linked polyubiquitination, resulting in increased protein expression, which aided virus propagation. As a result, we can develop inhibitors of TRIM28 to reduce the viral load of PRRSVs that can be used in the treatment of PRRSVs. Cells and Virus Strain PAMs were collected via the bronchoalveolar lavage method from healthy six-weekold Large White-Dutch Landrace crossbred piglets as previously described [42] and maintained in RPMI-1640 medium supplemented with 10% FBS at 37 • C with 5% CO 2 . MARC-145 and HEK-293T cells were cultured in Dulbecco's modified Eagle's medium containing 10% FBS at 37 • C with 5% CO 2 . PRRSV strain HN07-1 (GenBank accession number KX766378.1) was propagated in the MARC-145 cells. PRRSV titers were measured by means of a microtitration assay using MARC-145 cells in 96-well plates and calculated as 50% tissue culture infective doses (TCID 50 ) per milliliter according to the method of Reed and Muench. Virus Infection The PAMs and MARC-145 cells were grown to approximately 70% to 80% confluence and infected with PRRSV strain HN07-1 at an MOI of 1 for 2 h. Then, the supernatants were removed and the cell monolayers were rinsed with PBS to remove un-attached virus particles. After that, the cells were incubated in fresh medium containing 2% FBS at 37 • C and 5% CO 2 for 0, 12, 24, 36, or 48 h. Expression Vector Construction and Plasmid Transfection The full-length sequences of TRIM28 (GenBank accession numbers XM_007998459.2) cDNA were obtained by using an RT-PCR and cloned into the mammalian expression vectors pCMV-Flag-N, pCMV-HA-N, or pCMV-Myc-N. Various truncated plasmids of TRIM28 were generated from corresponding wild-type constructs. The recombinant plasmids of the Flag-tagged PRRSV viral proteins (GP3, GP4, GP5, M, and N) were preserved in our laboratory. All the specific primers used for plasmid construction were designed using Primer Premier 5 and listed in Table 1. All constructs were confirmed by DNA sequencing. Lipofectamine 2000 was used to transfect MARC-145 or HEK293T cells with recombinant expression vectors as directed by the manufacturer. RNA Interference Small interfering RNAs (siRNAs) that targeted TRIM28 (GenBank no. XM_007998459.2) were synthesized by GenePharma. 
MARC-145 cells were seeded in 6-well plates and transfected with 50 pM siTRIM28 or negative-control siRNA (NC) using Lipofectamine 2000 according to the manufacturer's instructions for 48 h. The effects of siRNAs were analyzed by using RT-qPCRs and Western blotting. Table 2 shows the sequences of the siRNAs. RNA Isolation and RT-qPCRs Total RNA was extracted by an RNA extraction kit according to the instructions, and the purity and concentration of RNA were detected by a spectrophotometer. In each reaction system, 1 ug RNA was reverse-transcribed into 20 uL cDNA according to the protocol for a subsequent RT-qPCR assay. RT-qPCRs were performed using SYBR green PCR mix on a CFX96 TM Real-Time System. A single cycle of denaturation at 95 • C for 30 s was followed by 40 cycles of amplification at 95 • C for 5 s and 60 • C for 34 s. In order to confirm product specificity, a final melting cycle was added to create a melting curve. A single peak obtained in the melting curve verified the specificities of the PCR products. The housekeeping gene β-actin was used to standardize the relative gene expression levels. The number of fold changes in the levels of gene expression was calculated using the 2 −∆∆ct method. Table 1 lists all of the primers for RT-qPCRs, among which primers Swine-β-actin-For, Swine-β-actin-Rev, PRRSV-ORF7-For and PRRSV-ORF7-Rev are from the references and the other primers were designed by Primer Premier 5. Immunofluorescence Assay (IFA) Cells in culture plates were first fixed with 4% paraformaldehyde for 30 min and then permeabilized with 0.1% TritonX-100 in PBS for 10 min. The fixed cells were washed with PBS and blocked with 5% BSA in PBS containing 0.1% Tween-20 for 1 h to prevent nonspecific binding. The cells were stained with specific primary antibodies (anti-Flag was diluted with 1:500; anti-HA and anti-PRRSV-N were diluted with 1:100), followed by blotting with fluorescent conjugated secondary antibody (FITC-labeled secondary antibodies were diluted with 1:400; Alexa Fluor TM 546 labeled donkey anti-rabbit and donkey anti-mouse IgG were diluted with 1:500) in the dark for 1 h. The cellular nuclei were counterstained with DAPI for 10 min. The cells were observed with an EVOS™ M5000 system (Invitrogen), a 10× objective or confocal laser scanning microscope (ZEISS), and a 63× objective. Co-Immunoprecipitation (Co-IP) For the co-IP assays, HEK293T cells were transfected with the corresponding plasmids. The cells were harvested by using centrifugation (2000 rpm, 25 • C for 10 min), lysed in NP-40 supplemented with 1 mM phenylmethyl sulfonyl fluoride (PMSF) at 4 • C for 1 h, and then centrifuged at 12,000× g for 10 min. Immunoprecipitation was performed using protein A + G agarose according to the manufacturer's instructions, and then the precleared cell lysates were mixed with ANTI-FLAG ® M2 Affinity Gel beads and incubated overnight at 4 • C. The next day, the beads were washed five times with ice-cold PBS and then centrifuged at 1000× g for 5 min. The proteins were eluted with elution buffer (5% SDS and 1% TritonX-100) and analyzed by using SDS-PAGE and immunoblotting. Ubiquitination Assays The cells were transfected with Ub-HA or its mutant vectors. To prevent proteasomal degradation, cells were treated with 20 µM of MG132 for 6 h before harvest. Thirty-six hours after transfection, cells were harvested, and the following steps were the same as for normal co-IP. 
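The relative expression values described in the RT-qPCR subsection above are obtained with the 2^-ΔΔCt method. The short sketch below writes that calculation out explicitly; the Ct values are invented for illustration, with β-actin as the reference gene as in the study.

```python
# Worked illustration of the 2^-ddCt relative-expression calculation used for the RT-qPCR data.
# All Ct values below are invented; beta-actin is the reference (housekeeping) gene, as in the study.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change of the target gene in a sample relative to the control, by the 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize target Ct to the reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                  # compare the infected sample to the control
    return 2 ** (-dd_ct)

# Example: TRIM28 Ct falls from 26.0 to 24.2 after infection while beta-actin stays essentially flat.
print(fold_change(ct_target_sample=24.2, ct_ref_sample=17.6,
                  ct_target_control=26.0, ct_ref_control=17.5))  # about 3.7-fold up-regulation
```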
Statistical Analysis Data for the quantitative analyses were obtained from at least three independent experiments, and comparisons between two groups were conducted using Student's t-tests. Statistical analyses were performed using the GraphPad Prism v9 software. Differences considered to be significant at p < 0.05 are indicated by * and those considered to be significant at p < 0.01 are indicated by **. Conclusions We report a new function for the host E3-ubiquitin ligase TRIM28 in promoting PRRSV replication. We propose that TRIM28 acts as a stabilizer of the PRRSV viral protein GP4 by inhibiting its ubiquitination and degradation, thereby maintaining GP4 protein expression and facilitating PRRSV replication.
Inspired by structured decision making: a collaborative approach to the governance of multiple forest values Since the 2000s, consensus-oriented decision making has become increasingly common in the management of natural resources because of the recognition that collaborative processes may enhance the legitimacy of decision making and facilitate effective implementation. Previous research has identified a number of problems with the design and practical facilitation of collaborative processes. Structured decision making (SDM) has been developed as an alternative suitable for decision making characterized by complexity, stakeholder controversy, and scientific uncertainty. Our aim was to investigate the feasibility and practical relevance of collaboration and dialogue inspired by SDM in the sphere of forest management. The methods used included analyses of meetings records and semistructured interviews with participating stakeholders and organizers of a collaborative process focused on improving the management of Swedish forests in the young forest phase. The results show that the SDM rationale of step-by-step teamwork, the involvement of experts, and guidance by an independent facilitator has a number of merits. These merits included the creation of genuine discussion with careful consideration of different interests and values, thus building trust among stakeholders and the Swedish Forest Agency. However, at the end of the process, some issues still remained unclear, including how the decision options would be made practically useful and accessible to forest owners. Furthermore, concerns were raised about the lack of novelty of the options. As a result, there was uncertainty about the extent to which the options would contribute to a more varied forest landscape given the multiple values involved. We conclude with some remarks on the potential future of engaging SDM in the forestry sector. INTRODUCTION Consensus-oriented decision making has become increasingly common in the management of natural resources.Public agencies set up collaborative processes to facilitate outcomes that could not be accomplished by the state on its own without the engagement of private actors (Ansell and Gash 2007, Mårald et al. 2015, Bodin 2017).Such processes with broad stakeholder involvement are often set up to enhance the support (i.e., legitimacy to the public) of decision making and so generate effective implementation on the ground (Reed 2008, Johansson 2016).However, although collaborative processes have become important in managing disputes over resources, the outcomes remain largely unexplored (Ansell and Gash 2007, Lockwood et al. 2010, Emerson and Nabatchi 2015).Furthermore, the role of scientific knowledge and expertise in decision making is often contested.For instance, it is not known to what extent deliberations and policies take into account scientific knowledge about how trade-offs between multiple values can be handled or how stakeholders deal with the fact that such research is associated with major uncertainties (Uggla et al. 2016). Hence, several challenges persist in the design and practical facilitation of collaborative processes.As a way to deal practically with such challenges, a model of structured decision making (SDM) has been developed (Gregory et al. 2012).Briefly, an SDM process recognizes that management choices are often characterized by a high degree of stakeholder controversy, with the result that consensus is seldom possible or even desirable (Gregory et al. 
2001).Rather, it is important that collaborative processes address the different values held by stakeholders, that alternative options and their consequences are developed, and that trade-offs between competing objectives are considered.A central starting point is that management options are surrounded by a high degree of scientific uncertainty.It is also important, however, to investigate when the value base determines a position.Reduced uncertainty does not mean that decision support is improved (Mårald et al. 2015).Public agencies often have little knowledge about how to deal with uncertainty and value-based controversy in resource management (Gregory et al. 2012). The potential incentives and obstacles to adopting an SDM approach are especially intriguing when it comes to the governance of forests because of the long-term perspectives attending forest issues (Ogden and Innes 2009, Marcot et al. 2012, Ferguson et al. 2015).Given the predicted climate change scenarios, stakeholder processes and strategies targeting the role of forests in mitigating and adapting to climate change are becoming increasingly important (Ogden and Innes 2009, Wellstead et al. 2013, Rist et al. 2016).Forests are a resource with high public value, even when privately owned, but it is difficult to determine who has the responsibility for harmonizing multiple values (environmental, social, and economic) in the management of forests.In Sweden, for instance, the majority of forest land is owned by small-scale private forest owners or large-scale forest companies.By law, all categories of forest owners have significant room to maneuver in finding ways to harmonize biomass production, conservation values, and forests' social and aesthetic values (Beland Lindahl et al. 2017).Partly as a result of the diverse ownership structure of the land and international pressures for biodiversity conservation, collaborative processes have become common in the governance of Swedish forests since the early 2000s.The deregulation associated with the Swedish Forestry Act https://www.ecologyandsociety.org/vol23/iss4/art16/ in the 1990s, and persistent disagreement between stakeholders about the activities of intensive forest management, have created an ongoing need for forest owners and other stakeholders to meet and deliberate about the challenges of combining forest management with biodiversity conservation and other forest uses (Mårald et al. 2015, Beland Lindahl et al. 2017).Despite the fact that collaboration and dialogue have been an integral part of Swedish forest policy since the 2000s, there is a clear lack of studies analyzing their feasibility and outcomes (see Sundström 2010, Johansson 2013, Mårald et al. 2015, Johansson 2016).It is thus important to evaluate the incentives for and obstacles to more structured ways of facilitating collaborative processes in forest management in a Nordic context. 
Our aim is to analyze the feasibility and practical relevance of collaboration and dialogue inspired by SDM in the governance of multiple forest values.We did this by looking at the Swedish Forest Agency's commission on adaptive forest management (Swedish Forest Agency 2013, 2016a).The work of this commission provides an opportunity to explore how public agencies deal with complex issues characterized by uncertainty and stakeholder controversy.In particular, we examine a stakeholder dialogue that took place during the final year of the commission and focus on the management of Swedish forests in the young forest phase.We begin by outlining an analytical framework on collaborative processes and SDM theory and logic, including a review of previous research.The methods follow, including brief background information about Swedish forest use and policy and the particular case studied.We then analyze the Forest Agency's collaborative process and how it was inspired by SDM in two parts: we analyze how SDM has been interpreted and applied in practice, and then we offer a summary of stakeholders' perceptions of the objectives, procedures, and outcomes of the SDM process.Finally, we review the feasibility and practical relevance of collaboration in Swedish forestry and the potential for engaging SDM in the forest sector. DESIGNING COLLABORATION USING A STRUCTURED DECISION PROCESS Since the 2000s, the role of the state in public administration has changed toward more inclusion of nonstate actors in policy making and implementation (Denhardt andDenhardt 2011, Bodin 2017).This deliberative turn, or the move from government to governance, may signal the impossibility of handling complex problems such as climate change without cooperation with nonstate actors.The inclusion of various interest groups in the decision-making process is frequently credited with generating more legitimate decision-making processes and effective achievement of public goals (Howlett andRayner 2006, Hysing 2009).According to Denhardt and Denhardt (2011), this form of governance requires a public administration that helps citizens articulate their shared interests and have them met through public institutions characterized by responsiveness.Such institutions must trust in the efficacy of collaboration and work to bring proper stakeholders to the table to seek solutions to the problems that communities face.The role of government is to facilitate solutions to public problems and be responsible for assuring that the decision-making process is consistent with the public interest and democratic norms of justice and fairness.The role of public administration is to take an active role in setting up arenas in which various stakeholders can meet and articulate shared values and collective responsibility for the public interest (Denhardt and Denhardt 2011). There are several challenges in the design and practical facilitation of collaborative processes.To begin with, any form of collaboration requires the true commitment of various stakeholders.Stakeholders need to be motivated to participate and able to participate on equal terms, they need to commit to the decisions made, and at the end, they need to feel that the time spent was worth the effort (Kangas et al. 
2010).This outcome requires broad stakeholder participation, transparent decisions, awareness of collective responsibility, trust building, and measurable outcomes (Johansson 2016).At the same time, public agencies may struggle with the different expectations of each of the participants and their own desires (Wesserlink et al. 2011, Mårald et al. 2015, Westberg and Waldenström 2017).A recent study of Finnish forest governance highlights the importance of past decision-making processes involving the stakeholders, the extent to which all relevant stakeholders participate in the deliberations, and the extent to which divergent views are considered in a balanced and transparent manner (Sarkki and Heikkinen 2015; see also Ansell and Gash 2007).An analysis rooted in such an approach considers to what extent an initiative encourages the emergence of shared understandings or new solutions and respectful consideration of all opinions. SDM was developed as a practical way to deal with the above challenges (Gregory et al. 2001(Gregory et al. , 2012)).The SDM approach has emerged from the need to provide more informed decisions about environmental policy choices and their associated ecological uncertainties.It aims to provide better solutions, more productive participation by stakeholders, and greater acceptance of resource management.It has been defined as "the collaborative and facilitated application of multiple objective decision making and group deliberation methods to environmental management and public policy" (Gregory et al. 2012:6).The primary purpose of SDM is to aid and inform decision makers rather than to prescribe a preferred solution.In practice, it is a prescriptive approach that draws on decision analysis and applied ecology along with insights gained from other behavioral sciences, group dynamics, and negotiation theory.It is an explicit step-by-step process that a group agrees to follow.It takes into consideration both values (what is important) and consequences (what is likely to happen if an alternative is implemented).An SDM approach recognizes that different values denote what matters, that is, what is important in the context of the specific problem at hand.The goal of the SDM process is essentially to clarify possible actions and their implications across a range of relevant concerns by (1) clarifying the decision context and (2) setting objectives.Thus, it focuses on (3) identifying, comparing, and iteratively refining alternatives.These alternatives should reflect substantially different approaches to a problem, based on different priorities, and should present decision makers with real choices.Choosing a preferred alternative will involve an open dialogue about tradeoffs. The stakeholders involved in an SDM process need to be prepared to learn, to explore competing hypotheses, and to build a common understanding of what constitutes the best available information for (4) estimating consequences and (5) evaluating trade-offs.In so doing, they will clarify areas of agreement and disagreement and the reasons for these disagreements.The results of an SDM process are useful to decision makers whether or not a consensus is reached.Public programs often stress the importance of consensus among stakeholders; it is seen as a goal to be striven for, even though it may not always be attainable.However, dispute resolution and consensus building should be avoided in an SDM approach (Gregory et al. 
2001).Rather, SDM is concerned with (6) developing learning and building management capacity so as to make better decisions in the future.Instead of seeking to resolve disputes, the deliberative process should focus on aiding decisions, both by the stakeholders and by the agency empowered to make the final decision.This requires an open process with thoughtful exploration of the values of different stakeholders.Conflict among group members should not be viewed as a problem to be overcome but as an opportunity to clarify values and facts relevant to the decision at hand.There is an emphasis on learning over time, including a formal commitment to review decisions when new information becomes available.What exactly is done at each step of an SDM process and the level of rigor and complexity will depend on the nature of the decision, the stakes, the resources, and the timeline (see Table 1 for a guide to the step-by-step approach). Our review of the literature shows that SDM has been interpreted and applied in various ways in resource management in the past few years.Recent research has analyzed its use in settings such as supplementary feeding in species conservation (Ewen et al. 2015), recreational fisheries (Irwin et al. 2011), the selection of monitoring variables and management priorities for salt marsh ecosystems (Neckles et al. 2015), and the restoration of river basins (Kozak and Piazza 2015).These studies have drawn attention to specific, well-defined problems in marine conservation.However, few studies have looked at the complexity of governing resources with multiple uses.As regards forestry, we have identified studies that address parcelization and forest fragmentation of private lands (Ferguson et al. 2015), the implementation of regional forest management plans (Ogden and Innes 2009), and the management of national forests (Marcot et al. 2012).The approach of Ferguson et al. (2015) is of particular interest: The purpose of their study was to help landowners identify which decision options would be most likely to result in outcomes that meet objectives related to forest sustainability.The study first identified landowners' multiple objectives and their relative importance, and then modeled the probability of the different outcomes for each decision option.The authors concluded that SDM may well help land owners to identify creative decision options that are most likely to meet their objectives.Furthermore, they confirm that SDM is an effective approach with which to evaluate options rigorously for decision problems that are controversial.From a different viewpoint, Marcot et al. 
(2012) provide an SDM approach to the study of three case studies concerning national forest land management plans and project plans.They came to the conclusion that SDM can be helpful in decomposing and understanding complex problems, yet the key challenge is how to bring these tools and processes into daily implementation.Ogden and Innes (2009) identified 30 forest practitioners who were involved in the implementation of a regional forest management plan in identifying climate change vulnerabilities and evaluating adaptation options.The practitioners identified several decision options, which provided insight into the readiness of practitioners https://www.ecologyandsociety.org/vol23/iss4/art16/ to engage in adaptive strategies in a regional context.Here, we build on these review examples and provide a case-based assessment of an SDM-inspired approach in the Swedish forest sector.To strengthen environmental considerations in forest management, the Forestry Act of 1993 (which is still in force) gave equal priority to biodiversity conservation and timber production.However, the Act sets only minimum criteria related to both goals and does not stipulate how they are to be achieved.Instead, Swedish forest policy explicitly affirms the importance of "freedom with responsibility," granting all Swedish forest owners, public and private, large-scale and small-scale, substantial scope to decide how to incorporate environmental protection in the management of their forests (Johansson andKeskitalo 2014, Beland Lindahl et al. 2017).Previous research has shown that forest owners have multiple objectives, suggesting that an emphasis on only economic benefits is not desirable from the forest owners' point of view (Bjärstig and Sténs 2018).However, a recent study of Swedish forest policy has found that the current governance model adapts a "more-of-everything" pathway, in which various ecological, economic, and social goals are expected to be prioritized and achieved simultaneously Mårald et al. 2015, Johansson 2016).For decades, controversies over forestry and environmental issues have been common.The lack of regulatory clarity and scientific uncertainty about sustainable harvest levels and biodiversity protection may also allow stakeholders with dissimilar interests to justify their standpoints (Uggla et al. 2016).As a result, there is a need to find ways to develop models and processes in which scientific uncertainty and stakeholders' divergent views can be handled (Johansson 2016, Uggla et al. 2016). CASE STUDY AND METHODS It is in this context that adaptive management has come to the fore.This approach to the management of complex systems is based on learning, thus offering a social steering instrument that complements command-and-control regulations (Rist et al. 2016).This approach fits well with the growing demand for alternatives to Sweden's current dominant silvicultural system, driven by a desire to increase biomass production, meet environmental targets, and mitigate climate change.However, diversified forest management that deviates from well-established practices carries many uncertainties that are especially evident in cases with diverse land ownership and long rotation periods (Rist et al. 
2016).In 2013, the Swedish government commissioned the Swedish Forest Agency, together with the Swedish University of Agricultural Sciences (SLU), to develop a model of adaptive forest management (Swedish Forest Agency 2013, 2016a).The overall aim was to create conditions for higher biomass production and better environmental status for Swedish forests.The Agency's interpretation of adaptive management focused on developing knowledge about sustainable forest management at the interface between science and practice.The government provided special funds for a three-year program in which a working model could be tested.In May 2013, the Agency and SLU presented a first report that proposed a working model (Swedish Forest Agency 2013).In April 2016, a final report was ready, with the results of the project (Swedish Forest Agency 2016a).One of the proposals in the first report was to establish a special stakeholder panel.The main task of this group would be to identify troublesome gaps or uncertainties related to forest management that would be appropriate to test with the adaptive model through a stakeholder dialogue process.This stakeholder group would also serve as a reference group in the implementation phase of the project.The panel was formally established in the autumn of 2013 after a request to stakeholders in the Forest Agency's National Sectoral Council.After an initial phase of process development, the panel agreed on various forest management issues that were suitable for a collaborative dialogue process.The first question to be addressed, and thus the point where the whole approach could be tested, was the management of forests in their young phase (Swedish Forest Agency 2016a,b). This first application of the adaptive model provides a case study of feasibility and the practical relevance of collaboration and dialogue in governance when there are multiple forest values (Fig. 1).This case study takes a qualitative approach and includes 14 semistructured interviews.The interviewees comprised all of the stakeholders who participated in the collaborative process in 2015-2016, two officials from the Swedish Forest Agency who were responsible for organizing the process, and one independent facilitator who facilitated all the meetings.The stakeholders represented a number of diverse interests: hunting (2 stakeholders), reindeer husbandry (1), environmental values (1), energy (1), forestry services (1), large-scale forestry (2), small-scale forestry (1), tourism (1), and outdoor activity (1).The interviews were conducted during the spring of 2016, either face-to-face or by telephone, and lasted from 40 min to 2 h.All respondents were assured of anonymity.The interviews were recorded and transcribed verbatim.The quotations included here were translated from Swedish into English.Because the interviews were semistructured, they were generally open, allowing the researcher and respondent to examine new ideas that were brought up during the interview.A number of questions were thought about well in advance, including an interview guide with topics and questions drawing on collaborative governance and SDM reasoning (Table 2).References to interview participants are in the form "IP x," where x is the number of the person interviewed. 
Our results also rely on the analysis of records from the seven dialogue meetings that were held and previous research on collaborative processes and SDM approaches.The results were categorized into two sections.First, we analyze how SDM has been interpreted and applied in practice (with particular focus on the adaptive model in Fig. 1).Second, we offer a summary of stakeholders' perceptions of the objectives, procedures, and outcomes of the SDM process. Clarifying the decision context and setting objectives Before the collaborative process began, the Forest Agency appointed a secretariat consisting of a facilitator, or process manager, and two administrators.The Agency proposed to follow the original step-by-step SDM approach outlined by Gregory et al. (2012) and the model of adaptive forest management developed by SLU (Fig. 1; Swedish Forest Agency 2016a).However, at the start of the exercise, it was decided to make some changes primarily related to the context of decision making.Instead of acting as the decision maker and clarifying what general objective should be met, the Agency decided to formulate the task of the process in an open-ended fashion (IP 1, 4).This also meant that the process came to focus more on developing decision support for future opportunities than on making an actual decision in the near future. A working group with a broad representation of stakeholders was set up.Several organizations chose to participate, but some stakeholders were unable to participate, which meant that the Agency had to contact a number of other stakeholders before the working group could be considered inclusive.Particular importance was attached to ensuring that the participants held different values (Swedish Forest Agency 2016b; IP 1, 4).One important aspect of the process was engaging an independent facilitator to moderate the discussions and provide information and feedback regularly after the meetings.Right from the start, the facilitator was given quite free rein on how to interpret and proceed with the SDM approach (IP 1, 4). According to the Forest Agency, the main objective of the collaborative process was to develop variants of silvicultural programs (Table 3) for even-age forest management that could help landowners meet different land-use objectives.Groups of participants brainstormed different land-use objectives and possible alternative measures in young, even-aged forest stands, as well as possible ways of estimating consequences (IP 1).In other words, the aims of the process were open and general; it was up to the stakeholders to decide how to define the most important aspects (IP 1).Although it was important that the discussions stayed within the current governance framework of Swedish forest use because the results would feed into current policy and practice, in reality, Swedish forest owners have considerable room to maneuver in managing their forests.The stakeholder discussions focused solely on aspects of management in the young phase, leaving out other aspects of the rotation period such as regeneration methods after final cut and commercial thinning.It was also made clear from the beginning that consensus among the stakeholders was neither possible nor desirable. 
An important point of departure was that the results of the process should be useful to forest owners in their production forests. This meant that suggestions had to be in line with the current Forest Act and general forest policy, and the Forest Agency had to be able to stand behind the final content. According to present forest policy, it is very important for the future development of a forest stand to take measures before the trees reach the size at which they can provide commercial stem wood. Therefore, precommercial thinning is recommended to improve the overall economy of a full rotation cycle, determine tree species composition, avoid mortality and self-thinning, promote the growth of remaining trees, and favor quality development of the stand. The Forest Agency has been concerned about the low use of precommercial thinning after the deregulation of the Forest Act in the 1990s. Another specific goal of current Swedish forest policy is to increase variety in the management of Swedish forests (Swedish Forest Agency 2016b). Thus, another important objective was to develop management alternatives that could contribute to more varied forestry and increase interest among landowners in managing the forest in the young forest phase, assuming a silvicultural system based on even-aged management (IP 2). Given these broad objectives, it was necessary to have a wide range of stakeholder viewpoints. From this point on, the main objective of the process was linked to the development of various options that forest owners could use to meet their objectives in forest management (Table 4). To capture the different needs and objectives of forest owners, the discussions started with the question, "What is important in young forest management, in the shorter and longer term?" All stakeholders were given an opportunity to clarify their perspectives and priorities in open discussion. From the start, it was clear that many of the identified objectives could be merged. For instance, forest damage was seen as a bigger problem when moose (Alces alces) and other ungulate species such as roe deer (Capreolus capreolus), fallow deer (Dama dama), and red deer (Cervus elaphus) are present. This was not the case for semidomesticated reindeer (Rangifer tarandus tarandus) because they do not feed on trees, even though damage sometimes occurs from, for example, trampling.

Developing alternatives and evaluating consequences and tradeoffs

After the decision context had been clarified, the stakeholders formulated possible management objectives (Table 5). This step was necessary because the Agency formulated the task quite broadly, and the exercise was meant to provide support for further discussion and move the work forward. As background to the exercise, some of the stakeholders were given an opportunity to present their thoughts on the changes they would like to see in the present management of young, even-aged forests. The results of this exercise built on the previous discussions of what the stakeholders thought were important consequences of management in the young phase.
In the subsequent step, the stakeholders discussed ways to develop measures and criteria for later evaluation of management options.Both of these issues prompted much discussion among the stakeholders.Some of them argued that current forest management should form the basis for the options, whereas others argued that a focus on traditional even-aged forestry would lead to too little innovation and the risk of developing a silo mentality. The discussions revealed that it was difficult to develop measures and criteria for evaluation that everyone in the group could agree on, and so this step in the process was postponed. Unresolved issues following the first two meetings with the stakeholders required some additional work between these meetings before the group decided to move forward.However, the first step of SDM (clarifying objectives) was not closed.The stakeholders identified a number of management objectives so that they could continue working on the next step to develop measures or criteria that could be used to show how the different management alternatives live up to what stakeholders think are important considerations in the young phase.The stakeholders then summarized the differences and similarities they could see in their respective management options.The facilitator asked the stakeholders to reflect on the similarities and differences compared to the nine objectives originally identified by forest owners, as well as on how the results of this exercise could become useful to the Forest Agency.One of the objectives, "varied forest stands with high biodiversity," was then divided into two silvicultural programs: one for multilayered, unevenaged stands and one for broadleaf-dominated, even-aged stands.Two management options, namely "a forest easy to access" and "a forest suitable for reindeer herding," were considered to require the same type of forest management and were thus merged into one.Thereafter, the workgroup entered the final SDM step of formulating silvicultural programs for the agreed-upon eight management objectives (Table 5), which should result in guidance to the Forest Agency's management experts and forest land owners.As part of the dialogue process, specialists from the Agency were then involved.They supported the stakeholders with expertise as they refined and developed more specific stand-based silvicultural programs to meet the identified management objectives.Because forest management depends not only on fundamental objectives but also on geographical and natural conditions, the group agreed to define initial states of the young forest stands.For practical reasons, the stakeholders decided to define four "typical" young forest stands in northern and southern Sweden, for a total of eight typical stands.These stand types became the point of departure for the silvicultural programs developed by the experts. 
The next step of the process was to analyze the consequences of the different silvicultural programs. Therefore, the Agency consulted an expert at the Forestry Research Institute of Sweden (Skogforsk). Contact was also established with three forestry experts from the Forest Agency and the advisory and counseling services of the Agency regarding the future handling of the decision options resulting from the collaborative process. At this point, the discussion about criteria and indicators needed to be resumed. This discussion was done at the sixth meeting with the stakeholder group, after which the secretariat summarized suggestions for criteria and indicators based on the discussions during this meeting. These suggestions were sent out to all group members for comment and to the expert from Skogforsk. The expert from Skogforsk also presented possibilities and limitations using the Heureka modeling tool (https://www.slu.se/en/departments/forest-resource-management/program--project/forestsustainability-analysis/heureka/heureka-systemet/en-heureka/). The Heureka forestry decision-support system is a suite of freely available software developed and hosted by the Swedish University of Agricultural Sciences. The system covers the whole decision-support process from data inventory to selection among plan alternatives with multicriteria decision-making techniques. It is used in practical forestry in Sweden today. After suggestions for adjustments, a set of refined criteria was sent to the group. The expert from Skogforsk analyzed the eight silvicultural programs for the eight typical forest stands described earlier. The results were amalgamated and summarized in tables provided to the group members.

The analyses of consequences showed that silvicultural measures and the design of silvicultural programs applied during the young forest phase can have a large effect on what management objectives can be met, both in young managed forests and later during the rotation cycle. It was also obvious that some management objectives could be met by similar silvicultural programs because some programs could meet several objectives. However, other objectives required more specific programs, and balancing against other objectives could not be achieved.

Implementation and way forward

One strength of the SDM model is that it has a structured approach and permits iteration. The working group can, if necessary, go back to the earlier stages and try new approaches or make additions. The stakeholder group made good use of this opportunity. On several occasions, they made minor corrections in the options and evaluation criteria. Late in the process, they also changed a number of the options to increase differentiation. A final exercise simulated the use of the result to advise forest owners on forest management. After this exercise, the stakeholder group agreed that the mission given by the Forest Agency was now complete and instructed the secretariat to compile a final report (Swedish Forest Agency 2016b).
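As an aside on the consequence-analysis step described above, the kind of multicriteria comparison that decision-support tools such as Heureka provide can be illustrated with a minimal weighted-sum sketch. The programs, criteria, weights, and scores below are invented placeholders for illustration only; they are not outputs of Heureka or of this collaborative process.

```python
# Hedged sketch: weighted-sum ranking of hypothetical silvicultural programs
# against evaluation criteria (all names and numbers are assumptions).
criteria_weights = {"timber_value": 0.40, "browsing_forage": 0.20,
                    "biodiversity": 0.25, "accessibility": 0.15}

# Scores on a common 0-1 scale for three hypothetical programs.
program_scores = {
    "dense_conifer":     {"timber_value": 0.9, "browsing_forage": 0.2, "biodiversity": 0.3, "accessibility": 0.4},
    "broadleaf_mixture": {"timber_value": 0.6, "browsing_forage": 0.7, "biodiversity": 0.8, "accessibility": 0.6},
    "reindeer_friendly": {"timber_value": 0.5, "browsing_forage": 0.9, "biodiversity": 0.6, "accessibility": 0.8},
}

def weighted_score(scores, weights):
    """Simple additive value model: sum of criterion score times criterion weight."""
    return sum(scores[c] * w for c, w in weights.items())

ranking = sorted(program_scores,
                 key=lambda p: weighted_score(program_scores[p], criteria_weights),
                 reverse=True)
for p in ranking:
    print(f"{p}: {weighted_score(program_scores[p], criteria_weights):.2f}")
```

Different stakeholder groups would typically supply different weight sets, which is one concrete way the trade-offs discussed later in the article could have been made explicit.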
The result of the collaborative process has now been delivered in the form of a number of suggested silvicultural programs to meet eight different management objectives (summarized in Table 5).This was done for the typical stands in the north and south of Sweden used as a starting point for management in the young phase of even-aged, managed forests.The programs can be seen as examples of objectives that a forest owner could adopt.This outcome was consistent with the decision context because the Forest Agency envisioned decision options that were well described and analyzed in line with current forest policy.Moving forward, the options should provide landowners with recommendations on how to manage the forest in the young phase to stimulate more variation than is currently provided. Learning from structured decision making: stakeholders' perceptions of the collaborative process In terms of the decision context, a majority of stakeholders considered the collaborative process to be rather unclear in the beginning (IP 3,6,7,8,10,11,12).Given the complex nature of the task, it was not clear which objectives should be addressed and where they would lead.Some of the stakeholders saw this as an advantage (IP 6, 7, 9, 13), whereas others argued that the mandate and objectives should have been clearer from the outset (IP 3,8,10).At the same time, many of the stakeholders reasoned that this was probably a necessary step, although it took some meetings to clarify this before they perceived the process as structured (IP 2,8,9).All of the stakeholders considered it reasonable that the group stayed within the framework of current forest policy; otherwise, it would not be useful for forest owners, who were the targeted end-users. The delimitation of the context was identified as a problem. Because the stakeholders were instructed to discuss management only in the young forest phase, in this case, mainly precommercial thinning, other important silvicultural measures during the rotation period of an even-aged forest stand were left out of the discussion.Although many of the stakeholders argued that this was a reasonable delimitation, they also considered it important to clarify the timescale of forest management, that is, how forest owners undertake silvicultural measures today and how this relates to the forests' future development (IP 3,8).Thus, the stakeholders often had to remind themselves about this delimitation and stick to the task. A majority of the stakeholders identified the fact that all of the stakeholders did not participate in all of the meetings as a problem.Nonattendance was due to other priorities, often related to their day-to-day work, lack of time, and in some cases, lack of financial resources.Some of the stakeholders argued that it would have been helpful if the meetings could have been held over several consecutive days rather than being spread out over time (IP 5,7,9,11,13).The length of time between meetings made it easy to forget what had been discussed at the previous meeting.Furthermore, stakeholders stressed that they had not actually gained more knowledge about young stand management after the process, indicating that they considered the learning process on SDM more rewarding (IP 3,8,10,11). 
There was, however, no real discussion about how to make tradeoffs between different management objectives. Stakeholders pointed out that there may be several different ways to meet a particular objective. They indicated that there is a need to demonstrate the effects of a management system that accommodates multiple values, for example, benefiting both game and timber production. However, it is difficult to quantify the effects of factors such as browsing damage and loss of biodiversity (IP 2, 3, 7). Stakeholders drew attention to the fact that it would be beneficial if problems related to forest damage, such as those caused by game, could be handled at the landscape level. Failure to do so was considered a shortcoming of current management (IP 2, 7, 9).

In terms of the results of the process, the final silvicultural programs developed by the Agency's experts were generally reasonable, and no major concerns were identified. Some of the stakeholders noted that the objectives were still developed within the framework of traditional even-aged forestry and that the silvicultural programs were standard solutions with few surprises (IP 7, 11, 12, 13). Some stakeholders also complained that they were given little insight into how the silvicultural programs were designed in detail (IP 7, 11, 13). Moreover, a majority of the stakeholders argued that the eight final decision options were not particularly visionary, making it possible to scrutinize the content and provide even fewer, more comprehensive instructions for future management (IP 3, 6, 7).

In general, the stakeholders were positive toward the collaborative process and the SDM approach, but when asked to describe the model in their own words, almost none of them could easily do so. A clear majority felt that the discussions had been respectful and transparent and that the facilitator had done a very good job of keeping the group together and providing valuable information and feedback. Most importantly, all of the stakeholders' views and opinions were respected, which is reflected in the eight options that came out of the process. This would certainly not have been the case if a smaller number of proposals had been discussed. In general, the proliferation of interest groups was considered good, and many different perspectives were brought to the table, although some stakeholders raised the importance of keeping the discussions open (IP 6, 9, 11). It is important to stress that all of the stakeholders who did not have a forest owner perspective argued that it was important to look beyond the financial perspectives of forestry, although a financial perspective was often the focus of the discussions. One issue that was raised in the interviews was "What actually is an economic profit, and for whom?" (IP 11, 13). From a forest owner perspective, however, the tendency was to think in the opposite direction. One respondent stated, "We already work with different values every day in our management operations" (IP 3), and a common viewpoint was expressed in the following way: "You cannot maximize all values in the same stand; you simply have to pick and choose" (IP 3).
At the end of the process, a majority of the stakeholders were still uncertain about how the Forest Agency would proceed with the decision options.However, everyone agreed that it was important that private landowners receive updated recommendations, especially because many options are actually within the framework of current laws and regulations.Thus, some of the stakeholders were a bit pessimistic about the potential of the collaborative process (IP 3,8,11,13).They felt that it was unlikely that large-scale forest companies would benefit from the results.From a forest owners' perspective, the options identified were seen as fairly obvious.Thus, it was considered important to recognize that a forest owner may have different objectives with his or her forestry (IP 4,5,7,11,12,13). DISCUSSION AND CONCLUSIONS Here, our aim was to analyze the feasibility and practical relevance of collaboration and dialogue inspired by SDM in the governance of multiple forest values.Our empirical case study drew on a collaborative process to improve the management of young, evenaged forest stands in Sweden.For a number of reasons, the Forest Agency deviated from the original SDM approach.For instance, we identified the absence of a formal decision maker and a lack of alternative strategies and their estimated consequences.The decision context was not clarified from the beginning; rather, developing a context was viewed as a part of the collaborative process.However, the process would have gained value had the context been determined previously.Because the process was guided by the formulation of many different decision options, key trade-offs were largely left to individual landowners in the form of a "pick-and-choose" support option.It was also clear that little attention was given to the identification of possible knowledge gaps relevant to the quality of the decision making.In the absence of an identified decision maker, the aim of the process turned into the development of new decision options for the Forest Agency's counseling services and updating advice to forest owners in their management planning.As such, the results fit very well into the underlying principle of "freedom with responsibility" of Swedish forest policy and the overarching idea of achieving more variation in Swedish forestry.However, the lack of discussions on tradeoffs between different management objectives may well result in ambiguity.Furthermore, there are several ways to meet a particular objective, and the discussions would have gained value if this idea had been clarified.There is a risk that a collaborative process will retain the same level of uncertainty as before the process and that the initial visions return to the status quo.However, given the "more-of-everything" pathway and the ambitions of adaptive management in Swedish forest policy, key trade-offs are particularly important to bring to the table.It should also be acknowledged that different stakeholders have different perspectives and priorities, which affect how "freedom with responsibility" is interpreted in practice.Even so, because Swedish forest policy relies extensively on collaboration to reach tangible and sustainable outcomes, it is vital to find new ways of harmonizing multiple values (Mårald et al. 2015, Johansson 2016). 
Given the context, it was probably necessary to make many adaptations of the SDM process to fit the Swedish system. However, as a result, this study can only shed light on the ways in which an SDM approach can be interpreted and developed in this context and cannot provide answers about its full applicability. Regarding generalizability, we do not claim to have generated results that are directly applicable to any case of SDM in the forest sector in Sweden or elsewhere (see Ogden and Innes 2009, Marcot et al. 2012, Ferguson et al. 2015). Despite the limitations of this study, it confirms and sheds additional light on the struggles that resource agencies deal with when setting up collaborative processes. For instance, it was difficult for the Forest Agency to get broad participation and to engage all stakeholders to commit and dedicate time to the process. For some of the stakeholders, collaborative processes are not considered part of their daily work, and when they had to prioritize their tasks, the meetings were not their first priority. At the end of the process, some issues still remained unclear, including how the silvicultural programs would be made practically useful and accessible to forest owners. Furthermore, concerns were raised about the lack of novelty of the options. As a result, it was uncertain to what extent the options would contribute to a more varied forest landscape that takes multiple values into consideration.

On a positive note, the results show that the SDM rationale of step-by-step teamwork, the involvement of expertise, and guidance by an independent facilitator fostered trust among the stakeholders and between them and the Agency. A number of positive results were identified by a majority of the stakeholders, including the creation of genuine discussion with consideration of different interests and values. Such social learning, or "soft" fallouts, should not be dismissed when it comes to the implementation of forest management in countries that rely on voluntary participation to reach often competing objectives. Our study confirms the importance of devoting careful attention to the process of stakeholder dialogue and not merely its results. Despite the fact that a majority of the stakeholders were highly unsure about how the decision options would be made practically relevant and accessible to landowners and whether they would actually produce any changes on the ground, they were generally positive toward the SDM approach. In particular, this attitude was the result of stepwise work under the guidance of an independent and skilled facilitator. In general, stakeholders recognized value differences and were able to revise their own positions. However, it must be acknowledged that the management of forests in the young phase, as it is undertaken today, is not one of the most controversial issues in Swedish forestry, even though it is complex (Mårald et al. 2015). This point is confirmed by examining the final decision options. Consistent with Ferguson et al.
(2015), we found no drastically different objectives among the stakeholders, and many of the decision options could be merged. We also could not identify a single best management option. Rather, the main objective of the process was to inform forest owners about a variety of decision options suitable for all of the goals a forest owner might have. Another objective was to make forest owners aware of potential trade-offs between different goals. Despite the fact that the stakeholders represented different interests, we argue that the outcomes of the process were determined by the open decision atmosphere and by the various objectives available from the start.

This case study has enabled us to identify many advantages of a collaborative process inspired by a structured decision approach when the issue at hand is multifaceted and complex. It is important to stress that collaborative processes in forest management need to consider adaptability at all stages. In our case, an adaptive model functioned relatively well despite, or perhaps because of, deviations from the initial model early in the process. Because many steps in the process worked, it can be argued that there is empirical support for SDM, although the model needs to be adapted to real settings. We recommend that resource agencies continue to use this model and develop processes suitable for each particular context. This development will include a careful choice of issues to be handled and how the issues are linked to policy or decision-making processes. It also involves a well-designed process in which the roles and responsibilities of the actors involved, both the public agency and the stakeholders, are recognized. Finally, it also requires access to appropriate expertise and decision-support tools to facilitate the comparison of relevant decision alternatives.

Sweden is one of the most extensively forested countries in Europe, with 28 million ha of forest land, of which approximately 75% is under active management. Sweden holds just under 1% of the world's commercial forest area, but provides 10% of the sawn timber, pulp, and paper that is traded on the global market. The forest industry accounts for between 9 and 12% of Swedish industry's total employment, exports, sales, and added value. Close to 90% of paper and pulp production is exported, and the corresponding figure for sawn-wood products is almost 75%. These figures make Sweden the world's third largest exporter of pulp, paper, and sawn timber (Royal Swedish Academy of Agriculture and Forestry 2015). Sweden has a relatively high percentage of privately owned forests: approximately 50% of the country's forest lands are owned by nonindustrial private forest owners; private corporations own 25%; the state (including state-owned corporations) owns 17%; and other private and public bodies own the remaining 8% (Swedish Forest Agency 2013).

Table 3. Objectives of the collaborative process.
Table 4. Examples of fundamental objectives.
Short I···O Interactions in the Crystal Structures of Two 2-Iodo-Phenyl Methyl-Amides as Substrates for Radical Translocation Reactions

Radical translocation reactions are finding various uses in organic synthesis, in particular the stereospecific formation of complex natural products. In this work, the syntheses and single-crystal structures of two substituted 2-iodo-phenyl methyl-amides are reported, namely cyclo-propane carboxylic acid (2-iodo-phenyl)-methyl-amide, C11H12INO (1), and cyclo-heptane carboxylic acid (2-iodo-phenyl)-methyl-amide, C15H20INO (2). In each case, the methyl-amide group has a syn conformation, and this grouping is perpendicular to the plane of the benzene ring: these solid-state conformations appear to be well set up to allow an intramolecular hydrogen atom transfer to take place as part of a radical translocation reaction. Short intermolecular I···O halogen bonds occur in each crystal structure, leading to [010] chains in 1 [I···O = 3.012 (2) Å] and isolated dimers in 2 [I···O = 3.024 (4) and 3.057 (4) Å]. The intermolecular interactions are further quantified by Hirshfeld surface analyses.

Introduction

Radical translocation reactions (radical generation by photolysis, heat, or reaction with an initiator, followed by an intramolecular H atom shift) have found various uses in organic synthesis, from forming simple carbocycles [1] to complex natural products [2,3]. These reactions rely on the translocation (i.e., H atom migration) of an initially generated radical to a remote site, usually four [4] to seven [5] atoms away. Figure 1 illustrates the general principle involved. The driving force behind the rearrangement is assumed to be the formation of a more stable radical after translocation; typically, the initially formed radical is an aryl or vinyl species, perhaps formed by photolytic cleavage of a C-X (X = halogen) bond in a benzene ring, which is unstable and highly reactive [6]. The high reactivity and lack of stability of these radicals is presumed to be a result of the lone electron occupying a σ-orbital that
is orientated perpendicular to the aromatic/conjugated π system, and therefore, stabilization of the radical by delocalization is not possible. In order for the intramolecular translocation reaction to occur, it has been determined that the molecule must adopt a cis conformation [7], in which the two components (the initial radical and the C-H bond to supply the transferrable H atom) face each other, and the use of a suitable N-bonded substituent attached to the aromatic ring can provide a 'conformational lock', thereby optimizing the likelihood of translocation and subsequent cyclization [8]. In order to help further understand this process, the related compounds cyclo-propane carboxylic acid (2-iodo-phenyl)-methyl-amide (alternative name: N-(2-iodophenyl)-N-methylcyclopropanecarboxamide), C11H12INO (1), and cyclo-heptane carboxylic acid (2-iodo-phenyl)-methyl-amide (alternative name: N-(2-iodophenyl)-N-methylcycloheptanecarboxamide), C15H20INO (2), were prepared, and their crystal structures were determined.

Synthesis of 1

Cyclopropane carbonyl chloride (1.04 g, 10.4 mmol) was added dropwise to a solution of 2-iodoaniline (2.00 g, 9.13 mmol) and N,N-diisopropylethylamine (Hünig's base or DIPEA) (1.53 g, 11.9 mmol) in tetrahydrofuran (THF) (20 mL) at 0 °C under nitrogen (Figure 2). The solution was then allowed to warm to room temperature and stirring was continued for a further four hours. The reaction mixture was then diluted with diethyl ether (50 mL) and washed with brine (2 × 30 mL) and then water (30 mL). The ether layer was then dried with MgSO4 and filtered, and the solvent was removed at reduced pressure. The product was purified by recrystallization from the mixed solvents of dichloromethane and hexane, yielding cyclopropane carboxylic acid (2-iodo-phenyl)-amide (3) (Figure 2).

Figure 2. Synthesis schemes for 1 and 2 via intermediates 3 and 4 (see the text for abbreviations).

A solution of 3 (1.00 g, 3.48 mmol) in THF (10 mL) was added dropwise to a suspension of sodium hydride (502 mg, 4.53 mmol) in dry THF (20 mL) at 0 °C. Once hydrogen evolution had ceased, iodomethane (497 mg, 3.48 mmol) was added to the solution and was allowed to stir overnight. The reaction was quenched with ammonium chloride solution (2 mL) and diluted with diethyl ether (50 mL). The ethereal solution was washed with brine (2 × 30 mL) and then water (30 mL). The ethereal solution was then dried with MgSO4, filtered, and solvent was removed at reduced pressure. The crude product was purified by column chromatography, eluting with hexane/ethyl acetate (4:1), yielding 1 as a colorless solid (942 mg, 90%); m.p. 97-99 °C; HRMS: found MH+, 302.0032 (C11…).

Synthesis of 2

The 2-iodoaniline (3.00 g, 13.7 mmol) was added to a solution of cycloheptane carboxylic acid (2.4 g, 17 mmol) in dichloromethane (DCM) (20 mL). A solution of di-cyclohexylcarbodiimide (DCC) (3.17 g, 15.4 mmol) in DCM (10 mL) was then added dropwise at 0 °C and 4-(dimethylamino)pyridine (DMAP) (0.17 g, 0.14 mmol) was added as a catalyst.
The resulting solution was stirred for 30 min at room temperature then cooled in ice, and the solid was filtered off and washed with DCM. The filtrate was collected and washed with 2 N HCl solution (3 × 30 mL), and then with saturated NaHCO3 solution (3 × 30 mL) and water (30 mL). The organic layer was collected and dried over MgSO4, filtered, and the solvent was removed at reduced pressure. The crude product was then purified by column chromatography, eluting with hexane/ethyl acetate (3:1), yielding cycloheptane-carboxylic acid (2-iodo-phenyl)-amide (4) as a white solid.

A solution of 4 (500 mg, 1.45 mmol) in dry THF (5 mL) was added dropwise to a suspension of sodium hydride (45 mg, 1.89 mmol) in dry THF (10 mL) at 0 °C. Once hydrogen evolution had ceased, iodomethane (227 mg, 1.60 mmol) was added and the solution was allowed to stir overnight. The reaction was quenched with ammonium chloride solution (2 mL) and taken up in diethyl ether (50 mL). The ethereal solution was washed with brine (2 × 30 mL) and water (30 mL). The ethereal solution was then dried over MgSO4, filtered, and then reduced to yield the crude product, which was purified by column chromatography, eluting with hexane/ethyl acetate.

X-ray Data Collection and Refinement

The intensity data for 1 were collected on a Bruker SMART1000 CCD diffractometer at 293 K, and the corresponding data for 2 were collected on an Enraf-Nonius KappaCCD diffractometer at 120 K. Empirical (SADABS multi-scan) absorption corrections were applied at the data reduction stage and the structures were routinely solved by direct methods with SHELXS-97, while the atomic models were completed and optimized by refinement against |F|2 with SHELXL-2018. One of the cyclo-heptyl rings in 2 is disordered over two orientations for atoms C26, C28, and C29 and their attached H atoms in a 0.60 (3):0.40 (3) ratio. The H atoms were mostly located in difference maps and relocated to idealized locations (C-H = 0.93-0.97 Å), and they were further refined as riding atoms with the constraint Uiso(H) = 1.2Ueq(C) or 1.5Ueq(methyl C) applied. The methyl groups were allowed to rotate, but not to tip, to best fit the electron density. Full details are provided in the deposited CIFs.

Crystal Structures

Compound 1 crystallizes in the monoclinic space group P2₁ with one molecule in the asymmetric unit (Figure 3). The dihedral angle between the mean planes of the C1-C6 benzene ring and the C7/C8/N1/O1 methyl-amide grouping is 86.09 (14)°. The bond-angle sum at N1 of 360.0° implies the expected sp2 hybridization for this atom, and its un-hybridized 2p orbital is therefore well-aligned to interact with the π system of the adjacent C=O group, as reflected in the typical amide C8-N1 bond length of 1.357 (4) Å. However, this 2p orbital lies almost perpendicular to the delocalized π system of the benzene ring, and therefore, the C6-N1 bond length of 1.435 (4) Å is essentially that of a single bond. The conformation of the methyl-amide group is syn (C7-N1-C8-O1 torsion angle = 2.5 (5)°), as is the conformation of the C6-N1-C8-C9 grouping (2.6 (5)°). The dihedral angle between the C7/C8/N1/O1 grouping and the C9/C10/C11 cyclo-propyl ring is 86.3 (3)°, and the dihedral angle between the benzene and cyclo-propyl rings is 58.7 (3)°.
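For readers who want to reproduce interplanar angles of this kind from deposited coordinates, a minimal sketch is given below. The dihedral angle between two mean planes is simply the angle between their best-fit plane normals; the coordinates in the example are hypothetical placeholders, and real values would be taken from the refined CIFs rather than from this snippet.

```python
# Hedged sketch: angle between the mean planes of two atom groups
# (e.g. a benzene ring and a methyl-amide fragment). Coordinates are invented.
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through an (N, 3) array of coordinates."""
    centred = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]

def interplanar_angle(points_a, points_b):
    n1, n2 = plane_normal(points_a), plane_normal(points_b)
    cos_ang = abs(float(np.dot(n1, n2)))  # take the acute angle between the planes
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

# Hypothetical coordinates (angstrom), for illustration only.
ring = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [2.1, 1.2, 0.0],
                 [1.4, 2.4, 0.0], [0.0, 2.4, 0.0], [-0.7, 1.2, 0.0]])
amide = np.array([[3.0, 1.2, 0.1], [3.7, 1.2, 1.3], [4.9, 1.2, 1.3], [3.1, 1.2, 2.5]])
print(f"interplanar angle: {interplanar_angle(ring, amide):.1f} deg")
```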
The cyclo-propyl ring is, of course, strictly planar, and the terminal C10-C11 bond length of 1.471 (6) Å is notably shorter than the other two bonds (1.500 (5) and 1.502 (5) Å), which is normal when an unsaturated substituent is attached to the methine group [9]. The C10-C9-C11 bond angle of 58.7 (3)° is notably smaller than the C9-C10-C11 and C9-C11-C10 angles (60.7 (3) and 60.6 (3)°, respectively). The overall conformation of the molecule of 1 could be described as V-shaped, in which the C9-H9 bond (i.e., the methine group of the cyclo-propyl ring) faces the C1-I1 bond in the aromatic ring (H9···C1 = 2.94 Å, C9-H9···C1 = 113°), in what appears to be a very favorable orientation for a 1,5-translocation reaction to occur, assuming that the solid-state conformation is maintained in solution. Key geometrical data for 1 are summarized in Table 1 (footnote to Table 1: for molecule 2B, see Figure 5 for the equivalent atom designations; see the text for further discussion of the torsion angles ϕ, ξ, and ψ; the acceptor O and C atoms in 1 are generated by the symmetry operation 1−x, y−½, −z).

In the crystal of 1, a very short C1-I1···O1(i) (i = 1−x, y−½, −z) contact or 'halogen bond' [10] with an I···O separation of 3.012 (2) Å occurs, which is some 0.49 Å shorter than the expected Bondi [11] van der Waals' separation of about 3.50 Å for these two atoms. One way to interpret this directional contact is in terms of an electrostatic attraction between the Lewis base (the lone-pair-bearing O atom of the carbonyl group) and a 'σ hole' [12] on the Lewis acid (the iodine atom), which has close parallels with the way that hydrogen bonds can be envisaged [13]. The C-I···O grouping is almost linear (bond angle = 171.78 (9)°), which is quite typical for this type of interaction, and the I···O=C bond angle is 135.3 (2)°. This I···O halogen bond leads to C(6) chains [14] of molecules propagating in the [010] direction in the crystal of 1 (Figure 4), with adjacent molecules related by the operation of the 2₁ screw axis. There are no π-π stacking interactions in 1, with the shortest separation between the benzene-ring centroids of nearby molecules in the crystal being greater than 5.6 Å. As far as the H atoms of the cyclo-propyl ring are concerned, the shortest intermolecular H···H contacts are H9···H5 (2.42 Å), H10a···H11a (2.56 Å), H10b···H10a (2.60 Å), H11a···H10a (2.56 Å), and H11b···H10a (2.37 Å). Only the last of these is slightly shorter than the expected van der Waals' radius sum of 2.40 Å for two H atoms; thus, we may conclude that van der Waals (dispersion) forces are most important in determining the packing. This is further quantified by the Hirshfeld surface analysis described below.
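The arithmetic behind the quoted shortening can be reproduced directly. The Bondi radii used in the sketch below (I = 1.98 Å, O = 1.52 Å) are the commonly tabulated values and are an assumption here rather than numbers taken from this paper; with them, the 0.49 Å shortening for 1 falls out immediately.

```python
# Hedged sketch: observed I...O halogen-bond distances versus the Bondi
# van der Waals radius sum (radii below are assumed standard Bondi values).
BONDI_RADII = {"I": 1.98, "O": 1.52}  # angstrom

def vdw_shortening(observed, donor="I", acceptor="O"):
    """Return (vdW radius sum, shortening) for an observed contact distance in angstrom."""
    vdw_sum = BONDI_RADII[donor] + BONDI_RADII[acceptor]
    return vdw_sum, vdw_sum - observed

contacts = [("1: I1...O1", 3.012), ("2: I...O (a)", 3.024), ("2: I...O (b)", 3.057)]
for label, d in contacts:
    s, delta = vdw_shortening(d)
    print(f"{label}: observed {d:.3f} A, vdW sum {s:.2f} A, shorter by {delta:.2f} A")
# For 1 this gives a shortening of about 0.49 A, matching the value quoted in the text.
```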
In the crystal of 2, the asymmetric molecules associate into dimers with approximate local C₂ symmetry, linked by pairs of C-I···O interactions with a slight asymmetry between the I···O separations (3.024 (4) and 3.057 (4) Å) and C-I···O angles (171.71 (17) and 175.98 (16)°), which could possibly be ascribed to packing effects (see Table 1 for the full geometrical details). Otherwise, no directional intermolecular interactions beyond normal van der Waals contacts could be identified: the shortest contact between hydrogen atoms is H15···H28b at 2.38 Å.

Hirshfeld Surface Analyses

In order to further quantify the intermolecular interactions in these crystals, their Hirshfeld surfaces were generated using CrystalExplorer [15] following the methodology described by Tan et al. [16]. The Hirshfeld surface of 1 (Figure 6) shows intense red spots in the vicinity of atoms O1 and I1, which clearly correlate with the halogen bond described above. Otherwise, the surface is blue, indicating contacts at the expected van der Waals' distance or greater.
The percentage contributions of the different types of interactions identified in two-dimensional fingerprint plots [17] are listed in Table 2. These data indicate that the H···H contacts are the most important in both structures, with a significantly higher percentage for 2 than 1, although this is not consistent with the atom percentages of hydrogen in the structures (44% H in 2 versus 46% H in 1). The O···H/H···O contacts in 1 contribute almost three times as much to the surface as in 2, whereas the H···C/C···H contacts in 2 are almost double those in 1. Despite their presumed importance in establishing the packing, the I···O interactions only contribute a very modest percentage to the surfaces. The fingerprint plot for the I···O contacts for 1 (Figure 7) shows distinctive 'crescent' shapes with the tips at d_i + d_e ≈ 3.0 Å, obviously corresponding to the I···O separation established in the crystal structure.

Comparison with Related Structures

The syn orientation of the methyl-amide group is common to all three molecules (1, 2A, and 2B) and is by far the most common geometry for this grouping: a survey of the Cambridge Structural Database [18] yielded the scatterplot shown in Figure 8, which compares the C1-C6-N1-C8 (ϕ) and C7-N1-C8-O1 (ξ) torsion angles (using the atom numbering in this paper) for some 120 different structures. The methyl-amide torsion angles are heavily clustered in the range −10° < ξ < 10°, with one or two outliers with |ξ| > 160° (i.e., corresponding to an anti conformation for the C-N-C-O grouping), which might be attributable to severe steric strain. The ϕ angle (equivalent to the torsion angle between the benzene ring and the methyl-amide group) shows clustering around ϕ = ±90°, i.e., a near-perpendicular arrangement in all cases.
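A minimal sketch of the syn/anti classification implied by this survey is shown below, using the cut-offs quoted above; the torsion values in the example are invented placeholders rather than actual CSD entries.

```python
# Hedged sketch: classify C-N-C=O (xi) torsion angles from a CSD-style survey
# into syn / anti conformers, using the cut-offs mentioned in the text.
def classify_xi(xi_deg):
    """syn if |xi| < 10 deg, anti if |xi| > 160 deg, otherwise intermediate."""
    a = abs(xi_deg)
    if a < 10.0:
        return "syn"
    if a > 160.0:
        return "anti"
    return "intermediate"

# Placeholder torsion angles (degrees) standing in for survey hits.
sample_xi = [2.5, -4.1, 7.8, -171.3, 1.0]
counts = {}
for xi in sample_xi:
    label = classify_xi(xi)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # e.g. {'syn': 4, 'anti': 1}
```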
Conclusions

We have prepared and structurally characterized the related compounds C11H12INO (1) and C15H20INO (2) as possible precursors for radical cyclization translocation reactions: the donor H atom and the pre-radical C-I bond appeared to be well aligned in the solid state for this to occur. The crystals of both compounds featured short C-I···O halogen bonds, which generated chains in 1 and dimers in 2. A survey of the Cambridge Structural Database showed that almost all methyl-amide groups adopted a syn conformation, and when this grouping was bonded to a benzene ring, the moieties were orientated approximately normal to each other. The Hirshfeld surfaces indicated that the I···O contacts made a modest percentage contribution.

Informed Consent Statement: Not applicable.

Data Availability Statement: CCDC 2254542 and 2254543 contain the supplementary crystallographic data for this paper. These data can be obtained free of charge via: www.ccdc.cam.ac.uk/data_request/cif (accessed on 2 May 2023), by e-mailing data_request@ccdc.cam.ac.uk, or by contacting The Cambridge Crystallographic Data Centre, 12, Union Road, Cambridge CB2 1EZ, UK; fax: +44-1223-336033.
HECTD1 controls the protein level of IQGAP1 to regulate the dynamics of adhesive structures Background Cell migration including collective cell movement and individual cell migration are crucial factors in embryogenesis. During the spreading/migration of cells, several types of adhesive structures physically interacting with the extracellular matrix (ECM) or with another cell have been described and the formation and maturation of adhesion structures are coordinated, however the molecular pathways involved are still not fully understood. Results We generated a mouse embryonic fibroblast line (MEF) from homozygous mutant (Hectd1 R/R, Hectd1 Gt(RRC200)) mouse of the E3 ubiquitin ligase for inhibin B receptor (Hectd1). Detailed examination of cell motion on MEF cells demonstrated that loss of Hectd1 resulted in accelerated cell spreading and migration but impaired directionality of migration. In Hectd1 R/R cells paxillin and zyxin were largely mis-localized, whereas their expression levels were unchanged. In addition the formation of focal adhesions (FAs) was impaired and the focal complexes (FXs) were increased. We further identified HECTD1 as a key regulator of IQGAP1. IQGAP1 co-localized together with HECTD1 in the leading edge of cells. HECTD1 interacted with IQGAP1 and regulated its degradation through ubiquitination. Over-expression of IQGAP1 in control MEF phenocopied the spreading and migration defects of Hectd1 R/R cells. In contrast, siRNA-mediated knockdown of IQGAP1 rescued the defects in cellular movement of Hectd1 R/R cells. Conclusions The E3 ligase activity of Hectd1 regulates the protein level of IQGAP1 through ubiquitination and therefore mediates the dynamics of FXs including the recruitment of paxillin and actinin. IQGAP1 is one of the effectors of HECTD1. Electronic supplementary material The online version of this article (doi:10.1186/s12964-016-0156-8) contains supplementary material, which is available to authorized users. Background Cell migration including collective cell movement and individual cell migration are crucial factors in embryogenesis [1,2], as best exemplified in neurulation [3,4]. Generally, cell migration has been conceptualized as a cyclic process [5], in which a spreading phase is followed by migration involving actin polymerization and myosin contraction. Various mechanisms have been proposed for the regulation of cell spreading/migration, including active C-terminal Src kinase (CSK) remodeling [6], activation of focal adhesion kinase (FAK) and APR 2/3 [7], actin polymerization and the development of contractile forces [8,9]. During the spreading/migration of cells in culture several types of adhesive structures physically interacting with the extracellular matrix (ECM) or with another cell have been described [10]. Owing to their highly dynamic nature and size, nascent adhesive structures and FXs typically are sized smaller than 1 μm 2 [11]. As cells migrate, these structures either disappear or develop to mature FAs, which are large in size (>5 μm 2 ). Although it is clear that the formation and maturation of adhesion structures are coordinated, the molecular pathways involved are still not fully understood [12]. EULIR was first identified as an E3 ubiquitin ligase for the putative inhibin B receptor in our laboratory [13], but international nomenclature later renamed EULIR to HECTD1. 
Sarkar and Zohn suggested that HSP90 is a binding partner of HECTD1 and that increased secretion of HSP90 in the cranial mesenchyme of HECTD1mutants is in part responsible for the altered organization and behavior of these cells [14]. Tran and coworkers suggested that HECTD1 promotes the interaction of the adenomatous polyposis coli (APC) protein with Axin to negatively regulate Wnt signaling through Lys-63 polyubiquitination [15]. We found that knockdown of HECTD1 expression by siRNAs increased the migration velocity and membrane ruffling of HeLa cells. However during the course of our studies, Sarkar and Zohn demonstrated that opm mice increased the cranial mesenchyme cell migration [16,17] but the findings from Li and coworkers showed that knockdown of HECTD1 inhibits the migration of breast cancer MDA-MB-231 cells [18]. To resolve this contradictory issue, we have used the Hectd1 homozygous mutant (Hectd1 R/ R ) mouse embryonic fibroblasts (MEF) generated from a gene-trap mouse embryonic stem (ES) cell line RRC200 (BayGenomics, San Francisco, CA, USA), for cell migration studies. IQGAP1 belongs to the IQGAPs family of scaffold proteins. Despite the homology of amino-acid sequence with GAP, IQGAP1 does not exert any GTP hydrolysis activity [19][20][21]. In eukaryotic cells, IQGAP1 localizes to actin-containing structures such as lamellipodia, membrane ruffles and cell-to-cell adhesions. As such, IQGAP1 is involved in regulating cellular motility and morphogenesis [22]. Under normal conditions, through its coordinating with small GTPase, Rac1, RhoA and CDC42, IQGAP1 supports cell movement via regulating adherens junctions, actin filaments and microtubules. Initially, IQGAP1 was identified as a target of Rac1 and CDC42. In addition, activation of Rac and CDC42 in response to stimulation signals leads to the recruitment of IQGAP1, APC and CLIP-170, forming a complex which connects to the actin cytoskeleton and microtubules promoting cell polarization and directional cell migration [23][24][25]. Another mechanism proposed that IQGAP1 requires PIPKIγ for targeting to the leading edge of migrating cells and be activated specifically by PIP2 to promote actin polymerization and cell migration [26]. In contrast, IQGAP1 may also negatively impact on cell migration. One study demonstrated that IQGAP1 suppresses TβRII-and TGF-β-dependent myofibroblastic differentiation in tumors thereby inhibiting tumor growth [27]. Besides, anti-GTPase activity of IQGAP1 sustains the amount of GTP-bound Rac1 at sites of cell-to-cell contact, resulting in stable adhesion [28]. Recently, IQGAP1 was found to localize in FAs [29,30] and in FXs together with integrin-linked kinase ILK [31]. Schiefermeier and coworkers reported that IQGAP1 interacts with FA proteins [32]. However, whether IQGAP1 is directly involved in regulation of the dynamics of FAs is still not known, neither is there anything known about its regulation. Through screening various ECMs and a number of adhesion proteins, we found that the stability of IQGAP1 is regulated by HECTD1. We here propose a novel molecular mechanism explaining the role of Hectd1 in cell movement. Deficiency in Hectd1 results in failure to recruit phaxillin and zyxin to FAs thereby promoting rapid cell migration. Taking all data together, our results demonstrate that Hectd1 contributes to morphogenesis through the regulation of cell migration. 
Animals and mating scheme of the mutant mouse To generate Hectd1 mutant mice [33], the gene-trap mouse embryonic stem (ES) cell line RRC200 on a 129 background (129P2/OlaHsd), obtained from BayGenomics (San Francisco, CA, USA), was selected because the insertion site of the gene trap (β-geo) was mapped to intron 26 of the Hectd1 gene, so that the trapped allele includes most of the open reading frame but lacks the HECT domain (Additional file 1: Figure S1A). The ES cells were microinjected into blastocysts (C57BL/6NCrl × C57BL/6J). The resulting agouti chimeric male mice were crossed with C57BL/6 female mice. F1 mice were then intercrossed to generate Hectd1 Gt(RRC200)Byg mice, which were maintained for more than 10 generations.
Generation and culture of mouse embryonic fibroblast (MEF) cells On embryonic day E14.5, Hectd1 heterozygous mice were sacrificed. Their embryos were photographed with a Leica M80 stereomicroscope and placed on clean dishes. The trunks of the embryos were cut out with sterile scissors. The tissues were transferred to clean dishes, washed thoroughly with PBS, and then gently minced into small clumps of cells using two sterile needles. The cell clumps were digested with 500 μl Trypsin-EDTA at 37°C for 20 min. The digestion was then stopped with 500 μl of high-glucose DMEM containing 10% FBS; the suspension was pipetted up and down 5-10 times to disperse the clumps and centrifuged at 1000 rpm at room temperature for 1 min. The supernatant was removed by aspiration. The pellets were washed with PBS and centrifuged again. The pellets were dispersed by pipetting and grown on new culture plates in a humidified incubator at 37°C with 5% CO2. MEF cells were sub-cultured when they reached 80-90% confluence.
Cell culture and transfection MEF cells were maintained in high-glucose DMEM (HeLa cells in low-glucose medium) with 10% FBS, 1% sodium pyruvate, 1% L-glutamine and 1% penicillin-streptomycin. Cells were grown in a humidified incubator at 37°C with 5% CO2. MEF or HeLa cells used for transfection were pre-seeded in culture vessels 24 h in advance. On the day of transfection, the confluence was 50-80%. MEF or HeLa cells were transfected with plasmid DNA using Effectene reagent according to the Qiagen protocol.
Fibronectin coating For cell spreading and migration assays, 24-well plates were coated with 2 μg/ml fibronectin (R&D, 1030-FN) in PBS overnight. For immunocytochemical staining, glass coverslips were coated instead.
Cell spreading assay Cells were seeded on 6-well plates and incubated at 37°C for 24 h before serum starvation overnight. Starved cells were counted and seeded on fibronectin pre-coated 24-well plates. The plate was immediately transferred to a time-lapse microscope (Nikon IX81) pre-warmed to 37°C with the CO2 level maintained at 5%. After quickly adjusting the positions, the focus, the time interval and the total time in the CellSens software, the programme was initiated. The duration of spreading was analyzed from attachment to the formation of a leading protrusion. Cell spreading area was quantified with ImageJ software.
Wound-healing assay In monolayer wound-healing assays, 4 × 10⁴ cells were collected and plated in 24-well plates for 24 h. Cells were washed twice with PBS and cultured for a further 24 h in growth medium containing 0.5% FBS, then starved in serum-free medium supplemented with 1 μM aphidicolin overnight. The cells were then scratched with a 200 μl pipette tip, washed twice with PBS and placed into complete medium containing 10% FBS and aphidicolin.
The plate was immediately sent to timelapse microscopy (Nikon IX81) pre-warmed to 37°C and with 5% CO 2 . Migration images were taken at 10 min intervals for a period of 24 h with a 4× lens. Cell trajectories were measured by tracking the position of the cell over time using "Manual Tracking" plugin (Image J, v 2.0) and the cell velocity and straightness were determined by "Chemotaxis Tool" plugin (Image J, v 2.0). Cells that proliferate or that failed to migrate during the experimental period were not evaluated. Directionality of cell migration The percentage of MTOC orientated towards the wound was determined at 10 h post wounding. Cells were fixed with 4% paraformaldehyde then co-stained with acetylated alpha tubulin and Giantin antibodies. Bar, 50 μm. The percent of cells at the wound edge having their Golgi apparatus in the forward-facing 120°sector was measured after wounding. Over 600 cells from 3 independent experiments were analyzed. Orientation of the Golgi apparatus with respect to the wound edge corresponds to percent on the ordinate. *, P < 0.05. Immunocytochemistry Cells were seeded on glass coverslips pre-coated with fibronectin for defined time intervals. After that, cells were washed with PBS, then fixed with 4% paraformaldehyde for 10 min, and permeabilized with 0.15% Triton-×100 in PBS for 15 min and blocked with 5% BSA in PBS for 1 h at room temperature. Primary antibody diluted in PBS was added to the coverslips and incubated at 4°C for overnight. Primary antibodies were used as follows: rabbit anti-paxillin (N-term) (1: 300, epitomics, Burlingame, USA), Rabbit anti-paxillin (phospho Y118) (1: 300, Abcam, Cambridge, UK), rabbit anti-zyxin Martin Spiess, Biozentrum, University of Basel). After washing the cells with PBS for 5 times with PBS, the secondary antibody (goat anti-rabbit-FITC, 1:1000; goat anti-mose-FITC, 1:1000; goat anti-mose-546, 1:1000, Invitrogen, Carlsbad, USA) tagged with fluorescent dye was added and incubated for 1 h in the dark at room temperature. After washing, cells were incubated in DAPI in PBS for 3 min at room temperature for counter staining. After washing, cells were mounted with Prolong® Gold Antifade Reagent and stored in 4°C protected from light. The fluorescent pictures were made with the Nikon Confocal microscope. Western blot Equal amounts of protein were loaded into the wells of SDS-PAGE gel, along with molecular weight markers. After running the gel at 100 V for 60-90 min, the protein was transferred to PVDF membrane and continued running at 300 mA for 60-80 min in pre-cooled transfer buffer. The blots were blocked in 5% milk in TTBS for 1 h at room temperature followed by primary antibody incubation for overnight at 4°C. Primary antibodies were used as follows: rabbit anti-paxillin (N-term) (1: 300, epitomics, Burlingame, USA), Rabbit antipaxillin (phospho Y118) (1: 300, Abcam, Cambridge, UK), rabbit anti-zyxin (1:200, Epitomics, Burlingame, USA), Rabbit anti-IQGAP1 (H-109) (1: 800, Santa Cruz Biotech, Dallas, USA), Rabbit GAPDH (14C10) (1:3000, Cell Signalling, Danvers, USA). After 3 times washing in TTBS, the blots were incubated in secondary antibody (goat anti-rabbit-HRP, 1:1000; goat anti-mose-HRP, 1:1000, Invitrogen, Carlsbad, USA) for 1 h at room temperature. To remove the unspecific bound antibody, the blots were washed in TTBS for 3 times. Bands were detected by ECL substrates, visualized by an infraredbased laser scanner (LiCor) and quantified using Image Lab software (Bio-Rad). 
The band intensity of unstimulated wild-type cells was normalized to GAPDH as the control, and the other results were recorded as fold changes relative to this control.
Immunoprecipitation Cell pellets were lysed in IP lysis buffer (20 mM Tris-HCl, pH 8.0, 137 mM NaCl, 1% NP40 and 2 mM EDTA supplemented with 1% protease inhibitor cocktail) on ice for 20 min with intermittent vortexing. Cellular debris was removed by centrifugation at 14,000 g for 5 min and the supernatant was transferred to pre-cooled fresh tubes. The protein amounts were equalized with IP buffer. 2 μl of primary antibody (mouse anti-GFP GF28R, Thermo Scientific, Waltham, USA) was added per 500 μg of protein sample and incubated overnight at 4°C. The lysates were then incubated with pre-washed protein A/G agarose beads (20 μl per 500 μg protein) and rocked for 1 h at 4°C. Beads were washed three times with IP buffer (6000 rpm, 3 min). After washing, the beads were heated for 5 min at 95°C in 2× Laemmli sample buffer. Target proteins were detected by western blot using specific antibodies. The antibodies used were: rabbit anti-HECTD1 (M03), clone 1E10 (1:1000, Abnova, Taipei, Taiwan), rabbit anti-PIP5K1A (1:1000, Cell Signaling, Danvers, USA) and rabbit anti-β-Catenin (D10A8) (1:1000, Cell Signaling, Danvers, USA).
In vivo ubiquitination MEF cells were co-transfected with plasmid DNA for HA-ubiquitin and GFP-IQGAP1 at a ratio of 1:1. Twenty-four hours after transfection, the cells were washed twice with PBS, changed to serum-free medium supplemented with 1 nM MG132 or DMSO, and incubated overnight at 37°C. For the endogenous ubiquitination assay, MEF cells were seeded for 24 h and then serum-starved. Starved cells were harvested as pellets and re-suspended in serum-free medium. Half of the pellets were spun down and lysed in ubiquitination lysis buffer (50 mM Tris, pH 7.5, 1 mM EDTA, 150 mM NaCl, 0.1% Triton X-100, complete protease inhibitor cocktail, 100 μM MG132 and 100 μM N-ethylmaleimide) on ice for 15 min, followed by centrifugation (12,000 g, 5 min) at 4°C. The other half was seeded on fibronectin pre-coated plates and cultivated at 37°C for 60 min; the plates were then placed on ice, washed with pre-cooled PBS and lysed with lysis buffer (as above) for 15 min on ice before centrifugation. The supernatant was collected and the protein concentration was determined. Equal amounts of protein were immunoprecipitated for the target protein and ubiquitin was detected by Western blot. Ubiquitination of target proteins was normalized to the protein amount in MEF cells.
Statistical analysis All data were analyzed using the statistical software package SPSS 13.0 for Windows (SPSS Inc., Chicago, IL, USA). Normally distributed data were analyzed for statistical differences using the t-test (paired comparisons) or ANOVA (analysis of variance). For data that were not normally distributed, non-parametric ANOVA and the Mann-Whitney U test were used. All values are reported as means ± SEM. Differences are considered statistically significant at P < 0.05 and are highlighted with *. For each experiment, the statistical analysis used is given in the figure legend.
Loss of Hectd1 accelerates cell spreading/migration and impairs directional migration of cells Knockdown of HECTD1 by siRNAs in HeLa cells increased the rate of migration (Fig. 1a); to confirm this result, we generated a mutant mouse of the E3 ubiquitin ligase for the inhibin B receptor (Hectd1).
We found that Hectd1 homozygous mutant embryos display defective neural tube closure with exencephaly (Additional file 1: Figure S1 and D'Alonzo et al., manuscript in preparation). We used mouse embryonic fibroblast (MEF) cells obtained from matched wild-type and Hectd1 R/R mice to analyze the time period from cell attachment to migration by time-lapse microscopy on various extracellular matrices, such as fibronectin (FN), collagen type I (CL1) or IV (CL4), matrigel (MT), laminin (LM) and gelatin (GL). There were significant differences in cell spreading and migration between wild-type and Hectd1 R/R cells on FN, but not, or to a much lesser extent, on the other ECMs (Fig. 1a), suggesting that HECTD1 regulates cell migration through only certain subtypes of integrin receptors. When FN was used as the extracellular matrix, wild-type cells initially adopted a flattened morphology and started to form leading edges within 40 min, while this process occurred approximately 10 min earlier in Hectd1 R/R cells (Fig. 1b and c). We further examined the migration/directionality of cells in wound-healing assays (Fig. 2). Loss of Hectd1 accelerated cell migration (Fig. 2a; time-lapse images are shown in Additional file 2: Figure S2A and Additional file 3: Figure S2B). The velocity (total distance/time) of Hectd1 R/R cells was 0.25 ± 0.07 μm/min compared to 0.19 ± 0.05 μm/min in wild-type cells (P < 0.05) (Fig. 2b), in agreement with the results found in HeLa cells (Fig. 1a). Wild-type cells migrated in a cohesive fashion with little dispersion and with aligned displacement paths. In contrast, the trajectories of Hectd1 R/R cells were more scattered (Fig. 2c). The straightness (Euclidean distance/accumulated distance) was 0.60 ± 0.14 in Hectd1 R/R cells versus 0.78 ± 0.09 in wild-type cells (P < 0.05) (Fig. 2c), indicating that the directed migration of the cells was impaired. To further confirm these results, both wild-type and Hectd1 R/R cells were stained with acetylated α-tubulin and giantin (Fig. 2d), which are markers of cell directionality since the microtubule-organizing center (MTOC) and the Golgi matrix reorient toward the leading edge during cell migration or wound healing [34][35][36]. The percentage of cells with giantin and acetylated α-tubulin oriented toward the wound was 61.29 ± 15.33% in wild-type cells, whereas this percentage dropped to 40.67 ± 11.25% in Hectd1 R/R cells.
Fig. 1 Fibronectin is a critical extracellular matrix in HECTD1-regulated cell adhesion and mutation of HECTD1 accelerates cell spreading. a Wound-healing assay. Equal amounts of wild-type and Hectd1 R/R MEF or HeLa cells were seeded on 24-well plates coated with various ECMs for 24 h with 0.5% FBS, followed by starvation overnight with 1 μg/ml aphidicolin (see Methods). Wounds were created with 200 μl pipette tips and the cells were placed into complete medium containing 10% FBS and aphidicolin. Migration images were acquired by time-lapse microscopy for 24 h. FN indicates fibronectin, CL1 stands for collagen type I, CL4 for collagen IV, MT for matrigel, LM for laminin and GL for gelatin. Experiments for each ECM were conducted at least three times (paired t test, *P < 0.05). b Wild-type and Hectd1 R/R cells were starved overnight, then plated on FN-coated plates and immediately transferred to time-lapse microscopy for a 2 h recording (1 picture/min). Spreading at different time points is shown.
c The duration of cell spreading was quantified with ImageJ software (paired t test, *P < 0.05)
Loss of Hectd1 impairs the subcellular localization of adhesion proteins To dissect the molecular mechanism causing the observed changes in cell migration in Hectd1 R/R cells, we examined functional molecules in integrin signaling. α5β1 is the major FN receptor in fibroblasts, but we did not observe differences in the expression or localization of the α5 and β1 subunits, nor of β3, in contrast to α-actinin (Additional file 4: Figure S3 and data not shown); these results suggested that Hectd1 functions downstream of the receptors. The expression and localization of talin and vinculin, which have been shown to be incorporated into adhesive structures at an early stage [37], did not significantly differ between the two cell types when cultured on FN (data not shown). When both cell types were cultured on FN, the total protein levels of paxillin and zyxin were equal (Fig. 3a) and the total focal adhesion area for paxillin did not differ significantly (Fig. 3b). However, in Hectd1 R/R cells the adhesions containing these proteins showed a shift toward smaller sizes at the leading edges (Fig. 3d). Furthermore, paxillin-Y118, one of the FN-stimulated paxillin phosphorylation sites, became localized to FAs, where it was associated with stress fibers, in wild-type cells. In Hectd1 R/R cells, paxillin-Y118 was mostly located at the cell leading edges in FXs, with a dispersed distribution in the cytoplasm (Fig. 3e). These results indicate that paxillin and zyxin were mislocalized in the adhesions of Hectd1 R/R cells. It has been suggested that α-actinin acts as a bridge connecting adhesion structures with the actin cytoskeleton [38]. At the leading edges of wild-type cells activated by FN for 30, 60 and 90 min, α-actinin was largely co-localized with paxillin and zyxin, while this co-localization was not present in Hectd1 R/R cells. At 60 min after spreading of Hectd1 R/R cells, we could barely detect any patches of α-actinin at the cellular periphery. As the Hectd1 R/R cells continued to migrate, some patches of α-actinin became visible at the leading edges, but still fewer than in wild-type cells (Figure S4). These data indicate that Hectd1 exerts its function at adhesion sites.
The formation of FAs but not FXs is impaired in Hectd1 R/R cells FXs are characterized as small punctate adhesions with a surface area of less than 1 μm² lying close to the cell periphery, whereas FAs are classified as larger structures with a surface area of between 5 and 20 μm² [11]. Having allowed cells to spread for 30 min on FN, we started to analyze the dynamics of early (paxillin-only) versus late (paxillin-and-zyxin) adhesions. As shown in Fig. 5a, significantly more paxillin-containing FXs developed in Hectd1 R/R cells than in wild-type cells (P < 0.05). In contrast, FAs were more prominent in wild-type cells than in Hectd1 R/R cells. However, zyxin, a late-stage marker of adhesion formation, was similarly present in the FXs of both cell types (Fig. 5b). In migrating wild-type cells, both paxillin and zyxin showed similar distribution patterns in FAs after 60 min and after 90 min, whereas in Hectd1 R/R cells the dominant cell adhesion structures were FXs. Our results suggest that the defects in the assembly of FAs at the cell leading edges were caused by differences in the accumulation or transport of proteins rather than by differences in their synthesis.
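The area thresholds quoted above (FXs below 1 μm², FAs between roughly 5 and 20 μm²) make the FX/FA counting amenable to a simple script once adhesion patches have been segmented from the paxillin or zyxin channel. The sketch below is only an illustration of that classification step, not the pipeline used in this study (segmentation and quantification were performed with ImageJ); the function name and the example area values are hypothetical.

```python
# Illustrative sketch (not the authors' pipeline): classify segmented adhesion
# patches into focal complexes (FXs) and focal adhesions (FAs) by area, using
# the thresholds cited in the text (FXs < 1 um^2, FAs 5-20 um^2).

def classify_adhesions(adhesion_areas_um2):
    """Return per-cell counts of FXs, FAs and unclassified patches."""
    counts = {"FX": 0, "FA": 0, "other": 0}
    for area in adhesion_areas_um2:
        if area < 1.0:
            counts["FX"] += 1          # small punctate adhesion near the periphery
        elif 5.0 <= area <= 20.0:
            counts["FA"] += 1          # large, mature adhesion
        else:
            counts["other"] += 1       # intermediate or oversized patch
    return counts

if __name__ == "__main__":
    # Hypothetical per-cell measurements exported from ImageJ (areas in um^2)
    example_cell = [0.4, 0.7, 0.9, 1.8, 6.2, 7.5, 12.0]
    counts = classify_adhesions(example_cell)
    ratio = counts["FA"] / counts["FX"] if counts["FX"] else float("inf")
    print(counts, "FA/FX ratio:", round(ratio, 2))
```

A per-cell FA-to-FX ratio computed in this way is the kind of quantity compared between wild-type and Hectd1 R/R cells in the Results.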
One of the main kinases thought to be responsible for tyrosine phosphorylation of FA molecules is Src [39,40]. Fig. 5c showed that there was no statistically significant difference in the expression level and activity of c-Src between Hectd1 R/R and in wild-type cells after FN stimulation (P > 0.05). a c e b d Fig. 3 Loss of HECTD1 leads to mislocalization of paxillin and zyxin. a Expression of paxillin or zyxin was determined by Western Blots. b Total focal adhesion area was determined by Image J. c Wild-type and Hectd1 R/R MEF cells were seeded on coverslips pre-coated with 1 μg/ml FN for 2 h, followed by anti-paxillin or anti-zyxin staining. Bar, 50 μm. d Amount of paxillin and zyxin and the individual focal adhesion were analyzed by Image J software. All the experiments were repeated at least 3 times and over 50 cells were analyzed in each group. Mann-Whitney U test were conducted. e In the same condition, cells were stained with anti-paxillin (phosphor Y118) and rhodamine phalloidin Localized activation of Rac and Rho regulate adhesion dynamics during migration. Using the RhoA activation assay, we found that the activities of RhoA were significantly (P < 0.05) enhanced in Hectd1 R/R cells 60 min after FN stimulation as compared that of wild-type cells (Fig. 5d), in which the total level of Rac1 and RhoA were not significant altered (Fig. 5e). IQGAP1 interacts and co-localizes with HECTD1 We found that IQGAP1 is a protein component of Hectd1 complexes [30] involved in formation of integrin adhesome and membrane ruffling. It has been demonstrated that IQGAP1 is an important factor in regulation of cell migration [26,41]. As shown in Fig. 6a, the protein level of IQGAP1 was higher in Hectd1 R/R cells than wild-type cells (P < 0.05). Consistent with this result, we observed that IQGAP1 is not only expressed in the leading edge of the Hectd1 R/R cells but also heavily present in entire cytoplasm (Fig. 6b). Thus, we further focus on the functional relationship between IQGAP1 and HECTD1 in cell migration. To confirm the interaction between HECTD1 and IQGAP1, we transfected GFP-IQGAP1 plasmids into HEK293 cells for immunoprecipitation. As shown in Fig. 6c, immunoprecipitation of endogenous HECTD1 resulted in the co-immunoprecipitation with GFP-IQGAP1 and co-immunoprecipitation was enhanced after 60 min of stimulation with FN. Next, we performed co-localization assays to verify the proteinprotein interaction of IQGAP1 with HECTD1. HeLa cells transfected GFP-IQGAP1 were plated on FN coated plates for 60 min. Similar to the presence of HECTD1 in the cell, IQGAP1 was mainly localized in the cytoplasmic of the cells, but was enriched at the leading edge of cells. The Pearson's correlation coefficient of GFP-IQGAP1 and HECTD1 at the cell leading edge was 0.65 ± 0.19 (Fig. 6d), suggesting that they co-localized with each other. Ubiquitination of IQGAP1 is regulated by HECTD1 and the half-life of IQGAP1 is increased in Hectd1 R/R cells To evaluate whether IQGAP1 is ubiquitinated by HECTD1 we first examined the ubiquitination level of IQGAP1. We treated cells with the proteasome inhibitor MG132 to block the ubiquitin-proteasome degradation pathway. Compared to DMSO-treated control cells the overall ubiquitination level of IQGAP1was increased after treatment with MG132. The degree of ubiquitination of IQGAP1 after treatment with MG132 was more pronounced in wildtype cells than in Hectd1 R/R cells 60 min after stimulation with FN (Fig. 6e). 
We then verified whether the half-life of IQGAP1 varies accordingly in wild-type and Hectd1 R/R cells. We tested the degradation profile of IQGAP1 using cycloheximide (CHX-chase experiment). The CHX-chase experiments showed that the IQGAP1 level remained largely unchanged for up to 30 h in Hectd1 R/R cells, whereas in wild-type cells this level decreased to near 50% within 12 h (Fig. 6f), suggesting that HECTD1 is involved in the degradation of IQGAP1.
Fig. 4 Loss of HECTD1 leads to mislocalization of α-actinin and paxillin/zyxin. Equal amounts of wild-type and Hectd1 R/R MEF cells were seeded on culture dishes for 24 h, followed by starvation overnight. The cells plated on FN for 60 min were co-stained with anti-α-actinin and anti-paxillin or anti-zyxin, respectively. The dotted frame is enlarged in the right panel, and the colocalization of the two proteins across the dashed line is shown in the fluorescence intensity profiles. Bar, 20 μm
Overexpression of GFP-IQGAP1 in wild-type cells induces defects of FAs As IQGAP1 can be ubiquitinated by HECTD1 and degraded, and as IQGAP1 has been reported to regulate FAs and cell migration [28], we speculated that the elevated protein level of IQGAP1 in Hectd1 R/R cells was the direct cause of the impaired formation of FAs. In order to examine this hypothesis, we overexpressed GFP-IQGAP1 in wild-type cells, then performed immunostaining for paxillin and zyxin and measured the average numbers of FXs and FAs per cell at different time points using paxillin or zyxin as markers. Interestingly, regardless of whether paxillin or zyxin was chosen as the marker, the formation of FAs was dramatically decreased in cells overexpressing GFP-IQGAP1 compared to non-transfected wild-type cells (Fig. 7a). In contrast, the formation of FAs in GFP-expressing cells remained unchanged (Fig. 7b). In wild-type cells the ratio of FAs to FXs was about 1/2, while this ratio decreased to around 1/8 in GFP-IQGAP1-overexpressing cells (Fig. 7c).
Knockdown of IQGAP1 rescues the dynamics of FAs, the duration of cell spreading and directional cell migration in Hectd1 R/R cells To further test our hypothesis that over-expression of IQGAP1 underlies the dysfunctional cell adhesion, spreading and migration of Hectd1 R/R cells, we transfected Hectd1 R/R cells with IQGAP1-siRNA (siIQ) or with control siRNA. As a result, the protein level of IQGAP1 in siIQ-transfected Hectd1 R/R cells was knocked down (arbitrary units: from 4.7 to 1.6, compared to 1 in wild-type cells; Fig. 8a). Subsequently, after IQGAP1 knockdown in Hectd1 R/R cells we analyzed the cytoskeleton and the FAs by immunostaining for actin, paxillin and zyxin.
Fig. 5 (legend, continued) c Wild-type and Hectd1 R/R cells were starved overnight; cell lysates were either harvested immediately as the 0 min control, suspended in culture medium at 37°C for 60 min, or the cells were plated on FN-coated culture dishes for 30, 60 and 90 min at 37°C. Lysates were analyzed by anti-Src (active) and GAPDH blotting. d After overnight starvation, lysates of wild-type and Hectd1 R/R cells were harvested immediately or the cells were plated on FN-coated culture dishes for 60 min at 37°C. The activity of RhoA was measured with the RhoA G-LISA Activation Assay Kit and recorded as fold change relative to the wild-type 0 min group, based on three independent experiments (paired t test, *P < 0.05). e Expression of Rac1 and RhoA was determined by Western blot.
As shown in Fig. 8b, cortical F-actin was enriched at the periphery and well-organized lamellipodia structures formed at the leading edge in wild-type cells. In contrast, in control siRNA-treated Hectd1 R/R cells, stress fibers were less prominent than in wild-type cells and lamellipodia were difficult to detect. Importantly, the formation of lamellipodia was rescued by siRNA-mediated down-regulation of IQGAP1 in Hectd1 R/R cells. In line with our previous results, taking paxillin and zyxin as cell adhesion markers, the ratio of FAs to FXs was about two in wild-type cells, while in control siRNA-treated Hectd1 R/R cells the average ratio of the number of FAs to FXs fell to around 1/3. The ratio of FXs to FAs was rescued by IQGAP1-siRNA knockdown in Hectd1 R/R cells, in which FAs again accounted for the majority of cell adhesions and the ratio of FAs to FXs per cell again became threefold (Fig. 8c). Moreover, the activity of RhoA was also evidently increased in control-siRNA Hectd1 mutant MEFs after FN stimulation for 60 min (Fig. 5d), whereas RhoA activity was significantly inhibited by IQGAP1 siRNA silencing in Hectd1 R/R MEFs (P < 0.05) (Fig. 8d). These results suggest that in the absence of Hectd1, the activation of RhoA correlated with the increased protein level of IQGAP1.
Fig. 6 IQGAP1 interacts and colocalizes with HECTD1 at the cell leading edge, and its ubiquitination is regulated by HECTD1. a Lysates of wild-type and Hectd1 R/R MEF cells were harvested immediately or after the cells were plated on FN-coated culture dishes for 60 min at 37°C, and were analyzed by anti-IQGAP1 and GAPDH blotting. b Wild-type and Hectd1 R/R MEF cells were stained with paxillin (green), IQGAP1 (red) and phalloidin (blue). c IQGAP1 interacts with HECTD1. HEK293 cells were stably transfected with GFP-IQGAP1 and seeded on fibronectin-coated dishes for 60 min; protein lysates were harvested and immunoprecipitated (IP) with a GFP antibody. The IP lysates and whole-cell lysates were used for detecting HECTD1, PIP5K1A and β-catenin by western blot. CUGBP1 served as a negative control. d HeLa cells stably expressing His-HECTD1 were transiently transfected with GFP-IQGAP1 for 24 h. Cells were starved overnight and plated on FN-coated slides for 60 min, followed by fixation and staining for HECTD1. Note the sites of colocalization shown in the intensity profiles (white arrows). Pearson's correlation coefficient was analyzed with ImageJ software. *P < 0.05. e Endogenous ubiquitination of IQGAP1. Wild-type and Hectd1 R/R cells were treated with the proteasome inhibitor MG132 (1 μg/ml) or DMSO in serum-starvation medium overnight. Cells were lysed immediately or after being seeded on FN-coated dishes for 60 min. The ubiquitination of IQGAP1 was verified by immunoprecipitating IQGAP1 and detecting with an anti-ubiquitin antibody. f The half-life of IQGAP1 is increased in Hectd1 R/R cells. Equal amounts of cells were plated on 100 mm dishes for 24 h and then treated with 100 μg/ml cycloheximide (CHX) for a further 6 h, 12 h, 24 h and 30 h. Cell pellets were harvested and the expression of IQGAP1 was detected by Western blot. Relative protein expression was quantified by densitometric analysis of the Western blots with ImageJ software, based on three independent experiments.
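For orientation, the CHX-chase readout just described in the Fig. 6f legend (IQGAP1 band intensity, normalized and expressed relative to the 0 h signal, at 6, 12, 24 and 30 h of cycloheximide treatment) can be converted into a half-life estimate by assuming first-order decay and fitting log intensity against time. The sketch below illustrates that calculation only; it is not the analysis performed in this study, and the densitometry values are invented solely to mimic the reported trend (roughly 50% loss within 12 h in wild-type cells, little change in Hectd1 R/R cells).

```python
# Illustrative sketch (not the published analysis): estimate protein half-life
# from cycloheximide-chase densitometry with a log-linear fit, assuming
# first-order decay  I(t) = I0 * exp(-k * t),  t_half = ln(2) / k.
import numpy as np

def half_life_hours(times_h, rel_intensity):
    """Fit ln(intensity) against time and return the half-life in hours."""
    times_h = np.asarray(times_h, dtype=float)
    rel_intensity = np.asarray(rel_intensity, dtype=float)
    slope, _ = np.polyfit(times_h, np.log(rel_intensity), 1)
    k = -slope                      # decay constant (1/h)
    return np.inf if k <= 0 else np.log(2) / k

# Hypothetical densitometry values (fraction of the 0 h signal, GAPDH-normalized)
times = [0, 6, 12, 24, 30]
wild_type = [1.00, 0.75, 0.52, 0.30, 0.22]   # drops to ~50% by 12 h
hectd1_rr = [1.00, 0.97, 0.95, 0.90, 0.88]   # remains largely unchanged

print("WT half-life  (h):", round(half_life_hours(times, wild_type), 1))
print("R/R half-life (h):", round(half_life_hours(times, hectd1_rr), 1))
```

A log-linear fit is used here instead of a nonlinear curve fit so the example needs nothing beyond NumPy.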
Furthermore, we found that the spreading duration shortened to 29.03 ± 4.48 min in Hectd1 R/R cells (Hectd1 R/R control group), in contrast to 41.80 ± 10.19 min in wild-type cells, and that the spreading duration of IQGAP1-silenced cells (37.23 ± 6.60 min) was partly rescued compared to control siRNA-transfected cells (P < 0.05 and P < 0.05, respectively) (Fig. 9a and b). Next, in order to further investigate whether down-regulation of IQGAP1 in Hectd1 R/R cells would also affect directional cell migration, confluent layers of wild-type cells, of Hectd1 R/R cells with IQGAP1 down-regulated by siRNA, and of Hectd1 R/R cells with control siRNA were scratched, and wound closure was recorded by time-lapse microscopy. The migration speed of control-siRNA-treated cells was 0.97 ± 0.14 μm/min, compared with 0.86 ± 0.17 μm/min in wild-type cells, which was consistent with our previous results. The migration defect was rescued by siRNA-mediated down-regulation of IQGAP1 (0.94 ± 0.14 μm/min). Similarly, compared with wild-type cells, the straightness of directional cell migration was impaired in control-siRNA MEFs, whereas siRNA-mediated knockdown of IQGAP1 compensated for the defect (Fig. 9c).
Discussion Although the eminent role of HECTD1 in embryogenesis, including neural tube formation, placenta formation and embryonic growth, has been clearly demonstrated in at least two transgenic mouse models, only limited information has been collected so far to uncover the regulatory mechanisms involved. Moreover, the involvement of HECTD1 in regulating cell migration during organogenesis has so far remained unexplored. We observed that loss of HECTD1 induced earlier cell spreading and enhanced cell migration through its control of IQGAP1 and adhesion proteins. Our study proposes a new mechanism by which HECTD1 maintains accurate cell movement during embryogenesis.
HECTD1 is a selective effector of ECM-integrin signaling The complexity of the molecular signaling responsible for ECM-selective guidance is associated with the various ligand-binding possibilities of integrin subtypes [42][43][44][45]. Our first observation was that the migration pattern of Hectd1 R/R cells is significantly different from that of wild-type cells on various ECMs when these cells are incubated in culture medium lacking serum. These results indicate that factors in serum may compensate for the loss of HECTD1 through as yet unknown signaling pathways. In addition, the localization of paxillin and zyxin, but not of talin and vinculin, was different during migration of these cells on FN. Furthermore, more FAs formed in wild-type cells whereas more FXs developed in mutant cells. These differences were not apparent when the cells were cultured on collagen type I and on gelatin.
The involvement of IQGAP1 in regulating adhesion dynamics is mediated by HECTD1 IQGAP1 has been widely reported to be involved in regulating FA dynamics and cell migration. We confirmed the interaction of HECTD1 with IQGAP1 and their co-localization through co-immunoprecipitation and double-labeled immunocytochemistry, respectively. We observed that loss of HECTD1, an E3 ubiquitin ligase, enhances the protein level of IQGAP1 through decreased ubiquitination. When IQGAP1 was overexpressed in wild-type cells, it reduced the formation of FAs as determined by differences in the expression of paxillin and zyxin. Moreover, siRNA knockdown of IQGAP1 in Hectd1 R/R cells compensated for the defects in the formation of cell adhesions, in cell spreading and in migration.
Taking all these results together, IQGAP1 has now been demonstrated to be regulated through HECTD1-mediated degradation. We therefore conclude that HECTD1 regulates cell adhesion and controls cell spreading and migration via IQGAP1.
A high FX-to-FA ratio in Hectd1 R/R cells contributes to higher motility We have demonstrated that the mutation of HECTD1 results in altered cell spreading and migration, in which the velocity of Hectd1 R/R cells was increased and directionality was impaired. In our assay, we used aphidicolin to ensure that proliferation did not interfere with cell migration. We also showed that HECTD1 ablation did not influence the migration speed of MEF cells on FN in the presence of 10% FBS without aphidicolin. This result is consistent with the findings of Li and coworkers [18], in which 10 ng/ml of EGF was used in breast cancer cells. We therefore used the same setting for the Hectd1/IQGAP1 double knockout/knockdown experiments. Instead of measuring total adhesion structures, we differentiated FXs from FAs in the cells. Interestingly, when compared to wild-type cells, the average total number of small adhesions in Hectd1 R/R cells is increased. Moreover, these FXs are associated with fewer of the larger FAs in Hectd1 R/R cells than in their wild-type counterparts. Maturation of adhesions occurs along an α-actinin-actin template that elongates centripetally from nascent adhesions. We found that α-actinin is colocalized with paxillin or zyxin at the leading edge of wild-type cells, but not in Hectd1 R/R cells. These results suggest that in Hectd1 R/R cells, FXs containing paxillin fail to reassemble and/or cannot mature into FAs. Since the presence of FXs and nascent adhesions is a marker of highly motile cells, their quick appearance and turnover correlate directly with protrusion and cell movement. The higher number of small paxillin patches in Hectd1 R/R cells strongly correlates with their increased motility and fast spreading.
Fig. 9 Knockdown of IQGAP1 rescues the defects of spreading and migration of Hectd1 R/R cells. a After 36 h of transfection, cells were starved overnight, plated on FN-coated cell culture dishes and immediately transferred to time-lapse recording for 2 h (1 picture/min). Spreading pictures at different time points are shown. Note the cells with leading protrusions (yellow arrows). b Quantification of the duration of cell spreading at 30 min is shown. AU, arbitrary unit. *, paired t test, P < 0.05. c 24 h after siRNA transfection, wound-healing assays were performed. Migration images were acquired by time-lapse microscopy for 24 h. The images were analyzed quantitatively with ImageJ software (paired t test, P < 0.05).
In motile cells, the recruitment of adhesion proteins into FXs occurs sequentially, so that the composition of the specific proteins depends on their age. Moreover, using double-color staining and a time-lapse assay, one study demonstrated that the transition from paxillin-rich FXs to zyxin-containing FAs takes place after the leading edge stops advancing or retracts [37]. Generally, zyxin has been thought to be a component of FA plaques and absent from FXs [37,53]. Although these three types of adhesions are distinguishable, there is always a continuum between types and many of the same adhesion proteins have been identified in each [54].
Consistent with our findings, in highly motile cells such as melanoma cells, glioma cells and growing neurons [55] the dynamic adhesions most similar to FXs are enriched in the leading edge of cells and act as common features of rapid cell movement [56]. Therefore, we conclude that the accumulation of paxillin and zyxin in the lamellipodia of FXs is a major hallmark of highly motile cells. Thompson has also proved that decreased size of FAs is related to higher velocity and impaired directionality of cells, and vice versa [57]. Increased numbers of the adhesions are accompanied with a lesser motility [58,59]. Here, we show that the dynamics of cell adhesion are responsible for the velocity of cells during migration. We propose that 30 min spreading is too early for the recruitment of abundant zyxin into FAs, so that the presence of zyxin is not enough to distinguish the difference in Hectd1 R/R and wild-type cells. Model for the role of HECTD1 in regulating cell movement Our data revealed that FXs in Hectd1 R/R cells failed to recruit enough adhesion proteins (such as paxillin and zyxin) to mature into FAs. Therefore, the alteration in number and/or size of FXs is expected to influence cell motility. Thus, we propose the following model for the role of HECTD1 in cell movement (Fig. 10). During cell spreading and early migration the cell receives stimulating signals from its extracellular environment, such as FN in the extracellular matrix, which activates relevant integrin receptors and Src in the cell leading edge. The recruitment of paxillin results in phosphorylation of paxillin at Y118. With this event the initiation of focal complexs formation becomes complete. The activation signals are passed to small GTPases, such as Rac1 and RhoA via IQGAP1 recruitment. Together with filamin-A, IQGAP1 inhibits Rac1 activity [60]. Subsequently removal of IQGAP1 from Focal complexs together with high RhoA activities triggers the maturation of focal adhesions by recruiting more paxillin and zyxin. As an E3 ubiquitin ligase, HECTD1 regulates the level of IQGAP1 through ubquitination. Loss of HECTD1 prolonges the half-life of IQGAP1 and thereby reduces Fig. 10 The role of HECTD1 in FAs formation. Upon binding of integrins to ECMs (e.g., Fibronectin), FAK/src signal pathway is activated and the recruitment of paxillin to the binding sites results in phosphorylation of paxillin at Y118 and the initiation of Focal complexs formation (a). Subsequently further recruitment of IQGAP1 passes the activation signals to small GTPases, such as Rac1 and RhoA. Together with FLAm, IQGAP1 inhibits Rac1 activity (b). The role of IQGAP1 on Rac/Rho is regulated by HECTD1 (c). Removal of IQGAP1 from Focal complexs triggers the maturation of focal adhesions by recruiting more paxillin and zyxin (d). HECTD1 is a key regulator of IQGAP1 and through this interaction HEDTD1 impacts on cellular adhesion and movement
2017-08-03T02:02:23.682Z
2017-01-05T00:00:00.000
{ "year": 2017, "sha1": "8666c264a141929ff370a597065d92c47084cac1", "oa_license": "CCBY", "oa_url": "https://biosignaling.biomedcentral.com/track/pdf/10.1186/s12964-016-0156-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8666c264a141929ff370a597065d92c47084cac1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
252720765
pes2o/s2orc
v3-fos-license
Management of Grants in The World of Education Grants are the provision of money, goods or services from the regional government to other governments, regional companies, the community, and community organizations, whose designations have been specifically determined; they are not mandatory, not binding and not continuous, and they are aimed at supporting the administration of government affairs or at supporting the achievement of the program targets and activities of local governments in mandatory and optional affairs. The grant funds themselves come from several sources. Grants to local governments can be sourced from: a. the Government; b. other local governments; c. domestic private agencies/institutions/organizations; and d. domestic community groups/individuals. Grants from the Government can be sourced from: a. APBN revenue; b. foreign loans; and/or c. foreign grants. Grants from foreign loans and foreign grants can be sourced from foreign governments, foreign agencies/institutions, international agencies/institutions, and/or other donors. Grant funds can be a useful additional resource for educational purposes. District/city schools are advised to develop skills in seeking and obtaining grants. INTRODUCTION Indonesia is still a developing country. Indonesia has abundant wealth and potential in Natural Resources (SDA), but unfortunately its Human Resources (HR) are still lacking. This is what makes unemployment and poverty major problems for Indonesia. Although the government has issued many programs to deal with and alleviate unemployment and poverty, these problems have not been completely erased. Unemployment and poverty in Indonesia continue to increase over time along with the growth of the Indonesian population, which is spread widely across every region of Indonesia. Several programs have been implemented by regional governments, one of which is the receipt of grants by regions across Indonesia to help address the problems that exist in each region receiving the grant funds; even though this does not eliminate the existing problems entirely, grant funds that are managed properly can help lower-class communities or those who need assistance from the local government. A larger and more unequal population can widen the gap in community welfare, in addition to the other factors that influence it. It is the responsibility of the City/Regency Government to reduce these problems. These problems must be handled seriously by the Government so that the standard of living of the people increases and the community can feel the changes, even if the changes are implemented not immediately but little by little. The question is whether the management, allocation and distribution of grant funds reach the community appropriately. The management of grant funds will not work properly if the aspects of planning, budgeting, revenue procurement and distribution do not follow the guidelines in the existing regulations. Grant expenditure is one of the expenditure accounts in the Regional Revenue and Expenditure Budget (APBD) that attracts public attention and often makes headlines in the mass media. This is because many parties need grant assistance and many interests can be accommodated, both for the benefit of public welfare and for certain political interests.
RESEARCH METHODS The research method is library research, collecting data from writings (literature) related to the topic discussed, namely the management of grant funds in education. The researchers took the data from documentation in the form of books, research journals, and supporting articles. The discussion uses a descriptive-analytical method, namely explaining and elaborating the main ideas related to the topic and then presenting them critically through primary and secondary library sources related to the theme (Sugiyono, 2005; Sukmadinata, 2005; Trianto, 2011). RESULTS AND DISCUSSION Grants, based on the definition contained in the Regulation of the Minister of Home Affairs Number 32 of 2011 Article 1 number 14, are the provision of money, goods or services from the regional government to other governments, regional companies, the community, and community organizations, whose allocations have been specifically determined; they are not mandatory and non-binding, not continuous, and are aimed at supporting the implementation of government affairs or at supporting the achievement of regional government program and activity targets in mandatory and optional affairs (Regulation of the Minister of Home Affairs Number 32 of 2011). From this understanding, it can be underlined that grants are very important programs because they are used to help other parties and to support local government activities while taking into account the principles of justice, compliance, rationality and benefit for the community, in accordance with the Regulation of the Minister of Home Affairs Number 32 of 2011 Article 4 point 3: "The provision of grants as referred to in paragraph (1) is intended to support the achievement of regional government program and activity targets by taking into account the principles of justice, compliance, rationality, and benefits for the community." Grants are an important part of the financial resources available for education, although these funds tend to comprise a relatively small percentage of the total funds available in a school district or school. The importance of these funds stems from how schools can use the money, which can range from specifically targeted goals to broad school-based discretionary projects. Types of Grants Formula Grants A formula grant is a funding program that distributes grant resources to predetermined recipients according to a defined allocation process. The most common formula grant found in schools is Title I, Part A of the Elementary and Secondary Education Act (ESEA), which was reauthorized in 2001 as the No Child Left Behind Act. More than $7 billion of these grants were distributed to states according to a formula included in enabling laws made by the U.S. Congress (Al Ramiez, 1947). The law further determines how states distribute the money to individual school districts, and then to schools. Formula grants often target a specific population for services, for example the disabled or the poor. These grants seek to focus on education-related needs, with the hope that the grant funds will assist or promote locally funded efforts. Money is usually allocated based on the number of eligible students in the population. So, in the case of Title I, states with a higher concentration of children from economically disadvantaged households will receive proportionately more money than states and school districts that are less affected by poverty. Most government grants for pre-college education can be classified as formula grants.
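Because formula grants distribute a fixed appropriation to predetermined recipients in proportion to counts such as the number of eligible students, the allocation step itself is simple arithmetic. The following sketch is purely illustrative: the district names, student counts and appropriation amount are invented, and real formulas such as the Title I formula add weighting factors and hold-harmless provisions that are not modeled here.

```python
# Illustrative sketch: allocate a fixed grant appropriation across districts
# in proportion to their number of eligible students. The figures are invented;
# real formula grants (e.g., Title I) apply additional weights and rules.

def allocate_formula_grant(appropriation, eligible_students):
    """Return a dict of district -> allocation, proportional to eligible counts."""
    total_eligible = sum(eligible_students.values())
    if total_eligible == 0:
        return {district: 0.0 for district in eligible_students}
    return {
        district: appropriation * count / total_eligible
        for district, count in eligible_students.items()
    }

if __name__ == "__main__":
    # Hypothetical eligible-student counts per district
    counts = {"District A": 1200, "District B": 300, "District C": 4500}
    allocations = allocate_formula_grant(600_000.0, counts)
    for district, amount in allocations.items():
        print(f"{district}: {amount:,.2f}")
```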
Competitive Grants A competitive grant is a funding program that distributes grant funds to target recipients based on how well qualified applicants demonstrate the ability to meet predefined funding criteria and to satisfy the requirements of the grant, relative to the other applications. As the name implies, competitive grants presuppose that the number of applicants will exceed the number of grants awarded. Many sector-based grants and foundation grants are competitive grants. Competitive grants often have the objective of stimulating new educational practices or services for a new population by providing financial incentives to states, school districts, or schools. Therefore, the criteria for selecting grantees for competitive grants often include items such as the probability of success of the proposed program; the willingness of grantees to share program evaluation results or demonstrate program operations; geographic location; staff quality; and the availability of local matching funds. Within these two broad classifications, a formula grant provides financial resources for a specific target group or for specific purposes. ESEA Title I and the Individuals with Disabilities Education Act (IDEA), apart from being formula grants, are categorical programs because their funding is tied to children from poor families or children with disabilities. In addition, because the grant money is restricted to a limited number of uses (teaching in mathematics, reading or writing, staff development, or related services associated with the individual education plans of students with disabilities), such grants usually have narrow educational objectives. A grant program to assist school libraries with the acquisition of their collections would be an example of a categorical grant. The grant will explicitly prohibit spending for other purposes. The following are some terms that are often used in connection with grant funds: Block Grants Yet another grant approach, block grants can take many forms. What distinguishes a block grant from, say, a categorical grant is that the money can be used for a variety of purposes. Block grants are characterized by grantees receiving small amounts of funds with broad parameters for how the money can be spent. Sometimes, the legislature will combine several categorical programs and permit grantees to spend the money as they see fit under any of the goals of the previous categorical programs. While not general aid, block grant recipients typically value the ability to spend money on the broad purposes of block grants. Direct Grants These grants are made by the granting institution to the recipient without regard to other similar potential recipients. Such grants can be made because the grantor simply decided to choose that grantee or because of special circumstances. An example of a direct grant is when a state legislature allocates money to school districts to build or repair schools as part of disaster relief related to natural disasters. Discretionary Grants This term is used when the granting agency exercises freedom of choice about who should receive the funds and how the funds should be used. Typically, discretionary grants have a broader purpose, and funding from discretionary grants gives recipients greater choice in how to spend it. Competitive grants and direct grants can sometimes be classified as discretionary grants when the eligibility criteria and the uses of the money are very broad.
Discretionary grants are used by grantors to target new or innovative programs, often on a pilot basis. A recent example is Race to the Top funding, which is distributed to states on a competitive basis by the U.S. Secretary of Education. Research Grants As the name implies, these grants aim to discover new knowledge. Common examples of research grants are found at universities; the medical school receives a research grant from the National Institutes of Health to investigate the smoking habits of fifty populations as part of a broad national effort in this area. Research grants are often also competitive grants and are made available to highly specialized technical institutions. Thus, the eligibility criteria are very strict. Legal Sources of Grant Funds in Indonesia In the implementation of grants, a firm legal basis is needed so that development financed by grant funds rests on a clear legal footing. Under the law, the grant funds themselves come from several sources. Grants to local governments can be sourced from: a. the Government; b. other local governments; c. domestic private agencies/institutions/organizations; and d. domestic community groups/individuals. Grants from the Government can be sourced from: a. APBN revenue; b. foreign loans; and/or c. foreign grants. Grants from foreign loans and foreign grants can be sourced from foreign governments, foreign agencies/institutions, international agencies/institutions, and/or other donors (Government Regulation Number 2 of 2012 concerning regional grants, Chapter II Article 4). Managing Grant Funds When a grant proposal is accepted for funding, the applicant is notified in one of several formal ways. This notice can be in the form of a letter or, as in the case of the federal government and some states, a grant award document. Foundations and private businesses will provide grant award letters. The grant award document contains important information about the amount of money given, the length or duration of the award, the contact person, and the account number information needed to withdraw funds from the bank (Prastama, 2019). The grant award document is a financial commitment on the part of the granting agency. At this stage in the proposal and grant process, the applicant commits to implementing the program as proposed and the agency commits to funding the project. In essence, both parties have entered into a contract, based on the two essential components of any contract: the making and the acceptance of an offer. Once a school or district is successful in securing a grant, a new set of issues arises regarding proper grant management. The grantee has obligations beyond those specified in the activities section of the proposal. It must also agree to comply with other conditions in order to receive the grant money. This obligation is referred to as a guarantee in many government grants. The granting agency usually has the organization that receives the grant "sign off" on the guarantee. This signing is usually done as part of the documentation included in the proposal. In most cases, the chief school or district operational officer is required to sign. In some cases, the awarding agency may require formal school board action as a condition of submitting a proposal. Sometimes, in grant-writer jargon, this part of the proposal is called the "boilerplate."
For example, a request for proposals from the federal government will require applicants to sign a series of guarantees related to federal civil rights laws and to fiscal and auditing requirements. But experienced grant managers understand that the boilerplate is serious business, given the contractual relationship that exists between grantors and grantees. Other terms and conditions may be included or referenced in the notification letter or provided in the award document itself, and should be read carefully. Program Evaluation This is a systematic investigation of program benefits (Fink). Most RFPs require an evaluation, but grantees often fail to carry out meaningful evaluations once they receive a grant. Often, they see the evaluation section of a grant proposal as just another piece to be completed, with no idea how it could help their program or school. In other cases the evaluation of the grant program is carried out separately and thus has little or nothing to do with the effectiveness of the school as a whole. When a school functions with a coherent management style, it uses all available resources to improve the school. Thus, the evaluation requirements of a grant program can be used as an opportunity to incorporate the program into overall school evaluation and accountability efforts. If a school does not have the expertise to design and carry out a sound program evaluation, it should seek outside help from a competent expert. Many grant programs allow this as an acceptable expense. Proposal Writing Strategies Administrators, teachers, and sometimes parent groups accept the challenge of seeking additional grant funding from multiple sources to meet critical resource needs in their school district, school, or classroom. Below is an outline of techniques proven by experienced grant proposal writers to help them win extra funding. Grant proposal writing is often a competition, and to win the competition it is important to learn basic information about the competition's sponsors and the rules of the game. Make sure you know who the grant giver or sponsor is and what their organization's mission is. Your proposed program may or may not be compatible with that mission; if it is not, don't bother applying. Beginner grant proposal writers often make some common mistakes that waste time for themselves, their staff, and the funding organization. First, determine if you are an eligible recipient. Make sure you are eligible to receive a grant from the organization you are applying to before you write your proposal. Grant givers are very specific about who they intend to receive their grants. You must understand the funding criteria. Even if your organization is eligible to be a grantee, your proposal idea may not be eligible for funding. Some problems to avoid in this area might be asking for too much or too little money, or asking for funds for something that the donor does not fund, for example requesting construction funds when the grantor has determined that the grant will go to schools for curriculum development. The adage that there is "no free lunch" applies to grants, so be sure to understand what the deliverables are before you write a proposal. The deliverables are the results the grantor expects from you. Some grants can be more trouble than they are worth. Grantees must assess whether the grant will be a relief or a burden to their organization before they apply. Grants are not "found money" and always have a price for the recipient.
One way to become proficient at writing grant proposals is to analyze what happens to rejected proposals. In a way, one can learn from failure. Grantors are often passionate about this and interested in helping potential grantees become better at preparing proposals. They often share comments and assessment forms from proposal evaluators and will often provide suggestions on how to improve your proposal for a future competition. Get to know the granting organization and help them get to know your organization or program. If possible, meet with the granting organization well in advance of the grant competition to learn about their priorities and to share information about what your mission and needs are. Audit and Reporting Grant funds are subject to audit. Financial and program audits are conducted regularly in all government grants received by school districts. The school district's annual audit will review the financial integrity of grant management, and program implementation will be reviewed in terms of the agreed activities and legal parameters of the program. The audit will look at expenditures relative to the activities approved for the grant program and "permitted" expenditures. Therefore, a grant program that limits personnel costs to classroom teachers will have an "audit exception" if the money is used to hire other professionals. In such a case, the school district will become obligated to repay any improperly used money and may be subject to other actions by the grantor. In the case of a government grant, this could include criminal prosecution (Heni Rohaeni and Arenawati, 2020). Basic data reporting is another distinctive aspect of grant management. Funders are eager to obtain information on the number and types of participants and other program-related information about the programs they support. This data is often used as an indicator of program impact across states or nations. In many cases, data is used to justify requests for additional allocations from the legislature. Program evaluation done properly is often complex and expensive. Grantees are warned to keep this in mind when preparing their budget requests. A final consideration in the area of grant management is the requirement to disseminate information about the program. Some grant-making agencies insist that grantees actively share information about the program with audiences such as other school districts, the media, or potential donors identified by the grant-making agency. The dissemination obligation may even include the establishment of a demonstration site specifically designed to receive visitors who wish to view the educational program. Schools and school districts should be selective about which grants to pursue and how many grant programs to commit to at one time. School leaders who view grant programs primarily as "found money" are more likely to have difficulty managing their grants and the school program as a whole. School administrators need to remember that grant-making institutions have their own agendas, which they fulfill through the distribution of grant money. The awarding agency always wants something from the recipient. Grants are generally designed as an incentive or motivator for the recipient institution. Grantors want to get schools to do something on their behalf, for example, serve a certain type of student or offer a specific curriculum. School leaders must be able to discern the value of a grant program for their schools and assess whether or not to pursue it. 
Contrary to popular wisdom, when it comes to grants, one should always "look a gift horse in the mouth." Grant funds can be a useful additional resource for educational purposes. District/city school systems and schools are advised to develop skills in seeking and obtaining grants. However, care is needed so that grant submission is done wisely and strategically. Grants also impose obligations on their recipients, and these should be considered before applying. The best approach to establishing a grant-seeking process is to ensure that the grant program being considered aligns with and supports the school's strategy or the school district's plan. CONCLUSION A grant is a provision of money, goods, or services from the regional government to other governments, regional companies, the community, or community organizations, whose designation has been specifically determined, which is not mandatory, not binding, and not continuous, and which is aimed at supporting the implementation of government affairs or the achievement of program targets and local government activities in mandatory and optional affairs. The grant funds themselves can come from several sources. Grants from the Government can be sourced from APBN revenues, foreign loans, and/or foreign grants. Grants from foreign loans and foreign grants can be sourced from foreign governments, foreign agencies/institutions, international agencies/institutions, and/or other donors. Grant funds can be a useful additional resource for educational purposes. District/city school systems and schools are advised to develop skills in seeking and obtaining grants. But caution is needed so that grant applications are made wisely and strategically. Grants also impose obligations on their recipients, and these should be considered before applying. The best approach to establishing the grant-seeking process is to ensure that the grant program being considered aligns with and supports the school's strategy or the school district's plan.
2022-10-06T15:16:01.158Z
2022-07-03T00:00:00.000
{ "year": 2022, "sha1": "c3fd83e958dbe83ea0f0adf654623d59065937ea", "oa_license": "CCBYSA", "oa_url": "https://e-journal.ikhac.ac.id/index.php/nidhomulhaq/article/download/2133/953", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "51f5a55cd063d33a250c933dcd1cf4926e50f11c", "s2fieldsofstudy": [ "Education", "Business" ], "extfieldsofstudy": [] }
226595848
pes2o/s2orc
v3-fos-license
Could Mergers Become More Sustainable? A Study of the Stock Exchange Mergers of NASDAQ and OMX This study investigates whether the merger of NASDAQ and OMX could reduce the portfolio diversification possibilities for stock market investors and whether it is necessary to implement national policies and international treaties for the sustainable development of financial markets. Our study is very important because some players in the stock markets have not yet realized that stock exchanges, during the last decades, have moved from government-owned or mutually-owned organizations to private companies, and, with several mergers having occurred, the market is tending gradually to behave like a monopoly. From our analysis, we conclude that increased volatility and reduced diversification opportunities are the results of an increase in the long-run comovement between each pair of indices in Nordic and Baltic stock markets (Denmark, Sweden, Finland, Estonia, Latvia, and Lithuania) and NASDAQ after the merger. We also find that the merger tends to improve the error-correction mechanism for NASDAQ so that it Granger-causes OMX, but OMX loses predictive power on NASDAQ after the merger. We conclude that the merger of NASDAQ and OMX reduces the diversification possibilities for stock market investors and our findings provide evidence to support the argument that it is important to implement national policies and international treaties for the sustainable development of financial markets. Introduction The ongoing globalization process and the rapid technological advancements in telecommunications and the internet have increased competition in many, if not most, sectors around the world. To grow or even survive, some companies have used alliances and mergers to expand their activity to other countries. The same is also happening to stock exchanges. Over the last decade, the largest stock exchanges began to merge with other stock exchanges around the world. Some examples include the Euronext (2005), the NYSE acquisition of Euronext (2006), the OMX merger (2003)(2004)(2005)(2006), the merger between the London Stock Exchange and Borsa Italian (2007), and the NASDAQ acquisition of the OMX Nordic stock exchange (2007). These improvements, in terms of new technologies and the possibility of remote access, create a favorable environment to invest in foreign markets, diversify portfolios, attract new investors, and increase trading volumes [1][2][3]. It is important to say that stock exchanges were created like mutual organizations (owned by its member stockbrokers), but some players in the markets still do not realize that major stock exchanges have demutualized; their members sell their shares in an initial public offering, and, actually, they run the business like a normal private company, trying to increase as much as possible the wealth of the shareholders. Examples of some of these movements to private companies are the Australian Securities Exchange (1998) (2007), and so on. The principal question is, after a stock exchange merger, working like a private company and not like a mutual organization, do the stock exchanges care about the sustainable development of investments, allowing investors to diversify their investments and reduce the risk of their investments? Is it necessary to develop national policies and international treaties for sustainable development and implement and monitor policies for the sustainable development of stock markets? 
A recent study, Otchere and Abukari [4], investigated whether the stock exchange mergers are the way for some powerful stock exchanges to become even more powerful in order to get a higher market share of stock exchanges around the world, and they concluded that the industry's concentration does not change the exchanges' profitability during the postmerger period. Unfortunately, Otchere and Abukari [4] do not analyze who the shareholders of the stock exchanges were, and, most importantly, what dividends they received after the merger. Profits cannot increase, but dividend payouts increase. They only analyzed the Herfindahl-Hirschman Index (HHI), "one of the most widely used measures of market concentration". Prior studies have described that stock exchange mergers increase competition between stock exchanges around the world [5] and decrease the trading costs based on economies of scale [6][7][8][9]. Amihud and Mendelson [10], Brennan and Subrahmanyam [11], and Datar et al. [12] also argued that stock exchange consolidations help the listed firms to reduce their cost of equity financing by improving their stock liquidity, informational environment, and governance on the secondary market. Hasan, Schmiedel, and Song [13] provided evidence to show that global exchange merger activities may promote the efficiency of cross-border capital flows and increased governance standards, and thus, it has the potential to benefit both the markets and investors around the world. Although stock exchange mergers benefit the shareholders of the stock exchanges, they do not generally help individual investors who prefer to diversify their portfolios to reduce risk. In this regard, the motivation of this study is to investigate whether stock exchange mergers can reduce the possibility of investors diversifying their portfolios and reducing risk and whether it is necessary to implement national policies and international treaties for the sustainable development of financial markets. In addition, authors like Rua and Nunes [14] argued that the evaluation of the comovements between stock markets is extremely important for investors to assess the risk of portfolios. Thus, the findings in our analysis are also useful to policymakers because both shocks and crises can be quickly transmitted across closely linked markets [15]. Like in all types of business, having only a very small number of stock exchanges around the world controlling all investments could be dangerous. For example, the EU refused to allow, in 2017, the merger of the German and British stock exchanges, arguing that this would lead to a monopoly. The first contribution of our study to the literature is that we find that the effect of stock exchange mergers affect the comovement between market indices. In addition, employing cointegration analysis, we find that the comovement between each pair of indices in the Nordic and Baltic stock markets and NASDAQ increases due to the merger. We recognize that the period of the merger concurs with some huge events, e.g., the subprime crises of 2007 and the sovereign crisis. Moreover, it might be the crisis that accelerated the process of the merger. Moreover, using Granger causality tests, we show that the merger tends to improve the error-correction mechanism for NASDAQ so that it Granger-causes OMX, but OMX loses predictive power on NASDAQ after the merger. Despite stock exchange mergers being an "old-fashion story", the strangest situation is that nobody investigated the impact of these mergers for investors. 
Thus, our paper bridges the gap in the literature to investigate the impact of the mergers for investors. In addition, stock exchanges are now (and not in the past!) normal private companies with several shareholders who want benefits and dividends. Thus, another contribution of our paper is that it bridges the gap in the literature to investigate the impact of these mergers by treating stock exchanges as normal private companies with several shareholders that want benefits and dividends. Our findings by using mean-variance (MV) and Omega ratios show that the merger does not reduce returns, yet it increases volatility by reducing diversification. Another important problem is that we move forward (without any investigation from academia before) to a monopoly in terms of stock exchanges around the world. This is our third contribution to make an urgent academic start to analyze stock exchange mergers around the world. The empirical\theoretical contribution of this investigation is to provide evidence to show that because stock exchanges are now running like private companies and the biggest stock exchanges are merging around the world, the diversification possibilities of stock market investors are reducing, and it will be important to implement national policies and international treaties for the sustainable development of financial markets. Our investigation wants to inform the academics and practitioners about the necessity to further explore, in several areas of finance, the impact of stock exchange mergers. The academics from finance have already made an amazing investigation on boards of directors, governance, and ethics, in several aspects, but it is very strange why the number of researchers that investigate stock exchange mergers is so small. What we know is extremely incipient. It is good to have more investigations into stock exchange mergers from different angles. Section 2 will describe the literature review and research hypotheses. Section 3 discusses data and all the methodologies being used in our study. Section 4 describes the empirical analysis, and Section 5 concludes. Literature Review Essentially, during the last 20 years, stock exchanges have moved from being governmentowned or mutually-owned organizations to being private companies, and it seems that academics are forgetting to analyze the impact of the changes from several aspects of finance and sustainability. Stock exchanges are now performing like normal companies, and they are owned by private shareholders. Despite being private companies, these private stock exchanges decide the listing and compliance standards for companies that want to go public. If we examine the ownership structure of several other major exchanges, we understand that NYSE Euronext is the largest stock exchange in terms of both market capitalization and traded value; it went public in 2006 and acquired Euronext in 2007. The Nasdaq OMX Group is the second-largest public stock exchange in the world in terms of traded value, and, in 2008, it acquired seven Nordic and Baltic exchanges. Tokyo Stock Exchange is the third-largest private stock exchange in the world. London Stock Exchange, which is owned by the London Stock Exchange Group, is also actually a publicly traded company. Based on this information, it is possible to conclude that running a stock exchange can be a good business for entrepreneurs. 
They can then manage the stock exchanges and demand that the companies and investors pay listing and transaction fees, respectively, and traders pay to have access to the markets. Hence, it is not surprising that big stock exchanges try to buy other small stock exchanges in order to control all the fees around the world. Authors like Otchere and Abukari [4] recently examined whether stock exchange mergers could increase efficiency or if it is a question of market power and found that the industry's concentration levels have not significantly increased and the concentration levels do not influence the exchanges' profitability in the postmerger period. Although the merger of stock exchanges could affect the shareholders of the stock exchanges, it does not generally help individual investors who usually want to diversify their portfolios to reduce risk. International portfolio diversification was established in the 1960s and 1970s when the USA and other investors became very active in foreign securities markets [16]. Grubel [17] found that investors gain from internationally diversified portfolios. Since then, this topic has received considerable attention in international finance. International diversification can be beneficial if it reduces the total portfolio risk by adding securities based in different countries, with lower correlations. Due to the introduction of new technologies and financial market liberalization in recent years, it is becoming easier to invest internationally [16]. The literature, however, has not yet shed much light on whether stock exchange mergers have had any impact on this process. Up to now, economic agents and policymakers have only explored whether national markets have become more integrated and what the impact on international portfolio diversification is. This paper, which considers the merger of NASDAQ with OMX, represents the first step to investigating the effect of mergers on international portfolio diversification. According to Choudhry et al. [18], Kearney and Lucey [19], and Chen et al. [20], cointegrated stock markets weaken the benefits of international portfolio diversification in the long run. Cointegrated assets exhibit significant long-term comovements, thereby lessening their diversification potential. Authors like Brooks and Del Negro [21,22], King et al. [23], Longin and Solnik [24,25], Lin et al. [26], Karolyi and Stulz [27], and Forbes and Rigobon [28] documented that the comovement of stock returns is not constant across the time. Candelon et al. [29] complemented this information, arguing that comovement analysis should also take into account the distinction between the short-and long-term investors because investors who invest for the short term are naturally more interested in the comovement of stock returns at higher frequencies (short-term fluctuations) whereas long-term investors focus essentially on the relationship at lower frequencies (long-term fluctuations). A'Hearn and Woitek [30] and Pakko [31] also show that the frequency level is important when analyzing comovement. However, besides Smith [32], few investigations make this distinction. Hassan and Naka [33] argue that portfolio diversification benefits would continue to accrue in the short run but not in the long run if markets are cointegrated and that the benefits of international diversification might be overstated for investors with long-term investment horizons. Charles et al. 
[34]] analyzed the impact of stock exchange mergers on the degree of informational efficiency and found that higher levels of efficiency are less frequent than lower levels of efficiency after a stock exchange merger and that the impact on the levels of efficiency is correlated with the levels of development, size, and both geographical and industrial diversification of the stock exchange. Research Hypotheses Our study contributes to the literature on international stock market cointegration by examining the impact of the merger of OMX (Denmark, Sweden, Finland, Estonia, Latvia, and Lithuania; we do not report the result for Norway because we cannot find data for the Norwegian stock market) with NASDAQ. The main hypotheses tested in this paper are Hypothesis 2 (H2). Mergers reduce diversification opportunities. Based on the information that we have already described in Section 2.1-stock exchanges are merging and turning slowly to a monopoly-we conjecture that comovements will increase between the indices and diversification opportunities will be reduced, as stated in Hypotheses 1 and 2 above. Data and Methodology In this section, we discuss the data and methodology being used in our paper. First, we collected data from DataStream. Second, cointegration tests were used to test the long-term relationships between OMX indices and the NASDAQ index. Third, causality tests were utilized to test the linear causal relationship between OMX indices and the NASDAQ index. Fourth, we tested whether nonlinear causalities exist between OMX indices and the NASDAQ index. Last, we compared the mean and variance of the returns of the OMX indices and the NASDAQ index before the merger to the ones after the merger. Data The data used in this study are the daily NASDAQ index and the six Euronext OMX indices, including Copenhagen 20 Index (Cop), Helsinki 25 Index (Hel), Riga All-Share Gross Index (Riga), Stockholm 30 Index (Sto), Vilnius All-Share Gross Index (Vil), and Tallinn All-Share Gross Index (Tal). Data were extracted from DataStream, and the total return index (capital gains and dividends) is used after the conversion of all currencies to USD (code "X(RI)~U$"). NASDAQ announced the purchase of OMX, the Swedish-Finnish financial company that controls seven Nordic and Baltic stock exchanges, on 25 May 2007. As of 27 February 2008, the deal was completed. In order to study the effect of the merger in the short, medium, and long run, we used the data from around five years before the merger (1 March 2002) until around five years after the merger (28 February 2013) of the NASDAQ Stock Exchange with OMX on 27 February 2008 and studied the short period (1 year), medium-range period (3 years), and the longer period (5 years) before and after the merger. Among the seven Nordic and Baltic stock exchanges that OMX controls, we do not extend our analysis to the Iceland Stock Exchange since OMX 15 was canceled in 2008 and was replaced by the OMX Iceland 6 index in 2009 due to severe financial problems. In addition, the Armenian Stock Exchange, the eighth stock exchange operated by OMX, is excluded from our sample because it was purchased by OMX after the announcement of the merger studied in this paper. Engle and Granger [34] proposed a two-step cointegration test that connects the moving average, autoregressive, and error correction representations for cointegrated systems. Before applying the two-step procedure, we first identify the integrated order of the variables. 
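As an illustration of this screening step, each price series can be tested for a unit root in levels and in first differences, as is done later with the Phillips-Perron test reported in Table 2. The following is a minimal sketch, assuming the Phillips-Perron implementation of the Python arch package and a dictionary of daily price series keyed by index name; the function and variable names are illustrative and not the code actually used in the study.

```python
# Sketch of the integration-order screening behind Table 2: apply the
# Phillips-Perron test to each (log) price series in levels and in first
# differences. Assumes the `arch` package; series names are illustrative.
import numpy as np
import pandas as pd
from arch.unitroot import PhillipsPerron

def unit_root_table(prices: dict) -> pd.DataFrame:
    rows = []
    for name, series in prices.items():
        level = PhillipsPerron(np.log(series).dropna())
        diff = PhillipsPerron(np.log(series).diff().dropna())
        rows.append({
            "index": name,
            "level p-value": level.pvalue,      # expect: fail to reject the unit root
            "1st-diff p-value": diff.pvalue,    # expect: strong rejection, hence I(1)
        })
    return pd.DataFrame(rows)

# unit_root_table({"Cop": cop, "NASDAQ": nasdaq})  # hypothetical series
```

A series is then treated as I(1) when the unit root cannot be rejected in levels but is clearly rejected after first differencing, which is the criterion applied to Table 2 below.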
After confirming that the variables being analyzed are I(1), we applied the following cointegration equation to test whether there is any comovement relationship between any of the OMX indices and the NASDAQ index and whether there is any effect from the merger.

Y_t = δ_0 + δ_1 X_t + δ_2 D_t + δ_3 X_t·D_t + ε_t (1)

where Y_t is one of the OMX indices, X_t is the NASDAQ index, D_t is the merger dummy variable (equal to 0 before the merger and 1 after it), and ε_t is the error term. Considering the potential effect of EU accession by the Baltic countries in 2004 and the change in reporting regime by listed companies in 2005 (the switch to mandatory IFRS reporting by EU-listed firms), we include dummy variables, year, to control for the compound effects. Not every panel includes the variable year since the sample of some panels does not cover 2004 and 2005. In addition, we apply the following cointegration equation without the merger dummy variable to test whether there is any comovement relationship between any of the OMX indices and the NASDAQ index in the subperiods separated by the date of the merger.

Y_t = δ′_0 + δ′_1 X_t + u_t (2)

where Y_t and X_t are defined as in (1). If the standardized residual is not rejected as I(0), then the stock indices X and Y are cointegrated in the subperiods separated by the date of the merger. Linear Causality After establishing the long-run relationship between any of the OMX indices (Y_t) and NASDAQ (X_t), as shown in Equation (2), we proceed to examine the short-run dynamics and test whether there is any causality between any of the OMX indices and the NASDAQ index by using the following short-run dynamic models:

ΔY_t = δ_Y + Σ_{i=1}^{m} α_i ΔX_{t−i} + Σ_{j=1}^{n} β_j ΔY_{t−j} + γ·ECM_{t−1} + u_{Y,t} (3)

ΔX_t = δ_X + Σ_{i=1}^{m} α′_i ΔX_{t−i} + Σ_{j=1}^{n} β′_j ΔY_{t−j} + γ′·ECM_{t−1} + u_{X,t} (4)

where Y_t and X_t are defined as in (1), the error correction term ECM_{t−1} is the standardized residual at time t − 1 obtained by running Equation (2), and the speeds of adjustment γ and γ′ are the coefficients of ECM_{t−1}. Engle and Granger [35] proved that when Y_t and X_t are cointegrated, there always exists a corresponding error-correction representation, as shown in Equations (3) and (4), implying that the change in the dependent variable is a function of the level of disequilibrium in the cointegration relationship, captured by the error correction term, as well as changes in the other explanatory variable(s). The error correction term refers to the level of disequilibrium in the long-run relation, while the speeds of adjustment represent the proportion by which the long-run disequilibrium (or imbalance) in the dependent variable is corrected in each time period. If we do not reject the hypothesis that all α_i = 0 and γ = 0, then X does not Granger-cause Y. Similarly, the failure to reject that all β′_j = 0 and γ′ = 0 suggests that Y does not Granger-cause X. We note that if any of the OMX indices (Y_t) and NASDAQ (X_t) are not cointegrated (that is, there is no long-run relationship between the OMX indices and NASDAQ) but both Y_t and X_t are still I(1), then we still apply Equations (3) and (4) to examine whether there is any linear causality between Y and X, but the error correction term ECM_{t−1} has to be removed from the equations. We note that the causality tests developed by Engle and Granger [35] and Granger [36] are powerful. That is why many recent studies, for example, Billio et al. [37] and Jin & Kim [38], still apply the tests in their analyses. Nonlinear Causality Besides classical linear causality, we test nonlinear causality as well. Granger [36] originally proposed a novel idea to test the causal relationship between two time-series variables. Using two strictly stationary and weakly dependent residual series, u_{Y,t} and u_{X,t}, which are obtained from Equations (3) and (4) and are denoted by x_t and y_t, we can detect the nonlinear causal relation. 
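Before completing the description of the nonlinear test, the linear machinery above — the residual-based cointegration test of Equation (2) and the error-correction causality regressions of Equations (3) and (4) — can be made concrete with a short sketch. This is a minimal illustration assuming Python with statsmodels; the lag length, variable names, and layout are placeholders and not the exact estimation code of the paper.

```python
# Minimal sketch of the Engle-Granger two-step procedure and the
# error-correction Granger-causality regression (Equations (2)-(3)).
# Variable names (omx, nasdaq) and the lag length are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger(y: pd.Series, x: pd.Series):
    """Step 1: regress Y on X and test the residual for stationarity."""
    step1 = sm.OLS(y, sm.add_constant(x)).fit()
    resid = step1.resid
    # no deterministic terms for the residual test; statsmodels.tsa.stattools.coint
    # offers the same test with MacKinnon p-values
    adf_stat, pvalue, *_ = adfuller(resid, regression="n")
    return step1, resid, pvalue

def ecm_granger(y: pd.Series, x: pd.Series, ecm: pd.Series, lags: int = 10):
    """Step 2: regress Delta Y_t on lagged Delta X, lagged Delta Y and ECM_{t-1};
    X Granger-causes Y if the Delta X lags and the ECM term are jointly significant."""
    dy, dx = y.diff(), x.diff()
    data = pd.DataFrame({"dy": dy})
    for i in range(1, lags + 1):
        data[f"dx_l{i}"] = dx.shift(i)
        data[f"dy_l{i}"] = dy.shift(i)
    data["ecm_l1"] = ecm.shift(1)
    data = data.dropna()
    model = sm.OLS(data["dy"], sm.add_constant(data.drop(columns="dy"))).fit()
    restrictions = ", ".join([f"dx_l{i} = 0" for i in range(1, lags + 1)] + ["ecm_l1 = 0"])
    return model, model.f_test(restrictions)

# Usage (hypothetical data):
# step1, resid, p = engle_granger(np.log(omx), np.log(nasdaq))
# if p < 0.05:  # residual is I(0), so the two indices are cointegrated
#     model, ftest = ecm_granger(np.log(omx), np.log(nasdaq), resid)
#     print(ftest)
```

In this sketch, X Granger-causes Y when the joint F-test on the lagged ΔX terms and the error-correction term rejects the null, mirroring the hypotheses stated for Equations (3) and (4); the regression for Equation (4) is obtained by swapping the roles of the two series.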
Following Baek and Brock [39], the series Y does not strictly Granger-cause another series X if and only if:

Pr(‖X_t^m − X_s^m‖ < e | ‖X_{t−Lx}^{Lx} − X_{s−Lx}^{Lx}‖ < e, ‖Y_{t−Ly}^{Ly} − Y_{s−Ly}^{Ly}‖ < e) = Pr(‖X_t^m − X_s^m‖ < e | ‖X_{t−Lx}^{Lx} − X_{s−Lx}^{Lx}‖ < e)

where Pr(·) denotes a probability, ‖·‖ denotes the maximum norm, X_t^m is the m-length lead vector of X_t, X_{t−Lx}^{Lx} and Y_{t−Ly}^{Ly} are the Lx- and Ly-length lag vectors of X_t and Y_t, m ≥ 1 and Lx, Ly ≥ 1 are the given lead and lag lengths, and e > 0 is the distance parameter. Mean-Variance Analysis and Mean-Omega Analysis Traditionally, mean-variance (MV) criteria could be used as tools for decision making. The Omega ratio [52], in contrast, does not require the normality assumption for the distribution of returns. It measures the likelihood of achieving a given return, such as a minimum acceptable return or a target return. A higher Omega value implies a greater probability that a threshold return will be achieved. It is calculated by creating a ratio between the cumulative return probability of being above and being below the threshold return, representing the probability-weighted ratio of gains versus losses for some targeted return. The Omega ratio is defined as follows:

Ω(r) = ∫_r^∞ [1 − F(x)] dx / ∫_{−∞}^r F(x) dx

where r is the threshold return and F is the cumulative density function of returns. Since the Omega ratio is the ratio between the expected return in excess of the threshold and the first-order lower partial moment, it is also a risk measure using the first-order lower partial moment. Compared to MV, the Omega ratio considers all moments and is consistent with stochastic dominance [46,53]. We employ it as a measure to compare portfolios before and after the merger. Readers may refer to the following authors for more information: Chow et al. [54] on the Omega ratio and stochastic dominance; Chan et al. [55] for the relationship between stochastic dominance and the extension of the mean-variance rule; and Ma and Wong [56], Niu, Wong, and Xu [57], Guo, Niu, and Wong [58], and others for the relationship between stochastic dominance and other risk measures. Empirical Analysis Before analyzing the relationship between any of the six OMX indices and the NASDAQ index, we first examined the nature of the indices and exhibit in Table 1 (Panels A/B/C) some basic statistics of the daily stock prices and returns of the indices. The indices include the NASDAQ and the six OMX indices for the periods of one/three/five years (reported in Panels A/B/C) before and after the merger of OMX with NASDAQ on 27 February 2008. For easy comparison, we also report the statistics for the combined periods (combining the periods before and after the merger). From the table, we find that except for Tal and NASDAQ in Panel B, the means of all the stock returns studied in this paper are higher before the merger than after the merger in all panels. We also find that the standard deviations of the stock returns of all the indices studied in this paper are smaller before the merger than after the merger in all three panels. The highest values of all indices appear before the merger except for Cop and NASDAQ in Panel C. However, the greatest returns of each index appear after the merger. On the other hand, the minimum prices of each index appear after the merger for Panels A and B. For Panel C, the minimum prices of the Cop, Hel, Sto, Vil, and Tal indices appear before the merger and those of the Riga and NASDAQ indices appear after the merger. Cointegration Before applying the cointegration tests, we first employed the Phillips-Perron (PP) unit-root test to examine the stationarity property of the variables for the periods of 1/3/5 years before and after the merger of OMX with NASDAQ, exclusively and inclusively. We report in Table 2 the stationarity status for each series in both levels and first differences. 
The table shows that all the price series involved do not reject the null hypothesis that the series has a unit root at the 10% level but reject the null hypothesis at the 1% significant level after the first difference, implying that all the indices are I(1) in the subperiods before and after the merger and in the entire combined period. This meets the nonstationarity requirement for the establishment of the cointegration relationship. Table 2. Unit-root tests for the levels and differences of stock price series. We turn to examine whether there is any cointegration relationship in the first, third, and fifth years, before and after the merger and in the combined periods. We report the results of the cointegration model stated in Equation (1) for the combined periods in Table 3. The compound effects have been controlled as well. From Table 3, we find that the p-values of PP tests of the residuals after fitting the cointegration equation stated in (1) are all smaller than 1%. The results imply that there is a cointegrated relationship between all of the six OMX indices and the NASDAQ index. In other words, we can conclude that there is a common stochastic long-term trend between each of the OMX indices and the NASDAQ index over the entire period after the dummy variable of the merger is included. author's own calculation. Y = δ + δ X + δ D + δ X * D + year + ε . In addition, we find that except for the intercept for Vil in the short-run (one year before and after the merger), which is insignificant, all other estimates of both the intercepts and the slopes are significant for the short, medium, and long runs. Moreover, except for the short-run Vil, which is negative, all other intercepts are positive in the short run. They become negative in the medium run and more negative in the long run. On the other hand, the slope coefficients are all positive, implying that each of the OMX indices and the NASDAQ index are moving in the same direction. In addition, we find that except for the slope for Hel, which is larger in the medium run than in the long run, for all other slopes, the longer the time period being tested, the larger the values become. These findings imply that, in general, the positive relationship between each of the OMX indices and NASDAQ index is stronger in the long run than in the short run. We then looked into the effects of the control merger dummy D on the cointegration relationship in Equation (1). To do so, we examined the estimates of both δ and δ . From Table 3, we find that all estimates of δ except Sto are statistically significant. Briefly, 1-year Riga is significant at the 5% significant level and all the others at the 1% level. All estimates of δ are statistically significant at the 1% level except for 3-year Tal. The implication is that the control merger dummy D strongly affects both the intercept and the slope of the cointegration model in (1). The results also imply that the long-run linear relationships between each OMX index and the NASDAQ Index change after the merger, irrespective of the sample time period in question. When we check the signs of the estimates of δ and δ for different periods, we find the following two interesting results: (1) The estimates of δ are all significantly positive in the period one year before the merger to one year after the merger. All become significantly negative in the periods of three and five years before and after the merger. 
The absolute values of the coefficients are larger for the five-year periods than for the three-year periods. (2) On the other hand, the estimates of δ are all significantly negative in the period one year before the merger to one year after the merger. Except for Riga, the estimates of δ become significantly positive in the period of three years before the merger to three years after the merger. For the period of five years before and after the merger, the estimates of δ are all significantly positive and larger than the three-year before-and-after periods. The first finding implies that the merger has a positive effect on the comovement of the OMX indices and NASDAQ in the short run (one year before to one year after the merger). In the medium run (3 years before and after), the effect is negative and becomes more pronounced in the longer run (5 years before and after). The second finding implies that the merger has a negative effect on the OMX indices in the short run. In the medium run, the effect is positive and becomes more pronounced in the longer run. Taken together, the two findings suggest strong short-run diversification effects that are reversed and exacerbated as the sample period increases. To further investigate the impact of the merger on the integration between the OMX indices and NASDAQ, we estimated the cointegration model stated in Equation (2) on separate samples and report the results in Table 4. Since the conclusion drawn from the results of Table 4 should be similar to those from Table 3, we only report the results that Table 3 cannot reveal. The most striking results from Equation (2) that Equation (1) cannot reveal is that except for Vil, none of the OMX indices are cointegrated with NASDAQ in the one-year short-run period before the merger, but all become cointegrated after the merger, implying that the merger of NASDAQ and OMX becomes more sustainable. In the medium/long run, all OMX indices are cointergrated with the NASDAQ index before and after the merger at or above the 5% significant level. We turn to examine the impact of the merger on the intercept and slope. We first examined the intercept coefficients (δ′). We find that with the exception of Vil, the intercepts are all positive, and, with the exception of Sto, all become negative in the short run after the merger. On the other hand, in the median run, the intercepts are all negative before the merger. They remain negative after the merger but with considerably smaller absolute values. All are strongly negative in the long run before the merger and, except for Tal, become strongly positive. Comparing the coefficients of slopes before the merger in Table 3 with the coefficients of slopes (δ ′) in Table 4 and the coefficients of slopes (δ ′) after the merger, all become larger in the short run but become smaller in the median run and become further smaller in the long run. This finding is also consistent with the results that the merger had a positive effect on the comovement of the OMX indices and NASDAQ in the short run but a negative effect on the comovement of the OMX indices and NASDAQ in a median period and a more negative effect in the long run. In all, we conclude that OMX indices and the NASDAQ index have a positive common trend, and the comovement between them enlarges after the merger in the short run but diminishes in the long run. The OMX Exchange operates eight stock exchanges, mainly in the Nordic and Baltic countries, while NASDAQ is mainly in the USA. 
Before the merger, OMX and NASDAQ were mainly influenced by their local financial issues in the short run. This could be the reason why these two exchanges were not cointegrated in the short run. Meanwhile, financial markets are linked with each other nowadays, and, then, two long-distance markets may be cointegrated with each other in the long run if affected by a similar global financial environment. However, after the merger, OMX and NASDAQ became one company and were cointegrated even in the short run. Our finding implies that the merger of NASDAQ and OMX becomes more sustainable. Linear Causality Since all variables are I(1), and there is cointegration between all OMX indices and the NASDAQ index except at one year before the merger, we next employ an error-correction model (ECM; Engle and Granger, [34]) to test whether there is any unidirectional or bidirectional relationship between the NASDAQ and OMX indices. The main results of the Granger causality test are reported in Tables 5 and 6, including the estimated speeds of adjustment. The null hypothesis of Table 5 is that NASDAQ does not Granger-cause OMX indices, while the null hypothesis of Table 6 is that OMX indices do not Granger-cause the NASDAQ index. According to Table 5, all the statistics of the F-test are significant at the 1% level, implying that the NASDAQ index Granger causes the OMX indices both before and after the merger, no matter how long the time period is. When we further assess the effect of the merger and estimate the speeds of adjustment, we find that the estimates of the error correction mechanism (γ) for NASDAQ causing OMX (except Vil) are not significant one year before the merger. Five out of seven become significant one year after the merger. Only two of γ are significant in the medium run before the merger, but all except Sto become significant after the merger. None of the γ are significant in the long run before the merger, but 5 out of 7 become significant at or above 5% after the merger. Over all periods, we find that the speed of adjustment increases for more than half of these estimates (one in short, five in medium, and six in the long run of the estimates become absolutely larger after the merger). These results imply that single-directional causality exists before and after the merger, but after the merger, there is a modestly more significant and rapid return to equilibrium. Looking at the results of Table 6, which tests the model in Equation (4), we are unable to reject the null hypothesis that OMX indices do not Granger-cause the NASDAQ index one year before/after the merger at the 5% significant level (see f−value in the Table 6). Thus, we are unable to conclude whether there is a positive or negative effect on the predictive power of the OMX indices on the NASDAQ index in the short run. In the medium run, Cop/Hel only Granger-causes the NASDAQ index before the merger and Sto Granger-causes the NASDAQ both before and after the merger. The implication is that after the merger, some OMX indices lose their predictive power on NASDAQ. In the long run, 5 out of 7 of the statistics of the f−test for OMX indices that Granger-cause the NASDAQ before the merger become insignificant after the merger. When we check the estimates of the errorcorrection coefficient, except for Vil, it is not significant in the short run before the merger. After the merger, two coefficients become significant. In the medium term, two estimates go from insignificant before the merger to significant after the merger. 
In the long run, six out of seven estimates go from significant to insignificant. Furthermore, four estimates of the error-correction coefficient in the medium run and six in the long run become absolutely smaller after the merger. These findings show that (1) the relationship of OMX causing NASDAQ only exists in the relatively longer time period before the merger, (2) the error−correction mechanism only exists in the short run after the merger and in the long run before the merger, and (3) NASDAQ index seems to return to the long-run equilibrium more slowly in the long run. Using subsamples, namely, one, three, and five years before and after the merger of NASDAQ with OMX, we test causality between NASDAQ and OMX stock indices. This table shows the f-value with the null hypothesis of no causality and the coefficients of the speed of adjustment between OMX indices and the NASDAQ index when the NASDAQ index is regarded as an independent variable. m = n = 10. The coefficients of the first two lag terms of X and Y are reported as well. In addition, ECM is not included in the model for the one year before the sample as a result of no cointegration. *, **, and *** denote the significance at 10%, 5%, and 1% respectively. Source: author's own calculation. ∆Y = δ + ∑ α ∆X + ∑ β ∆Y + γ • ECM + u . Using subsamples, namely, one, three, and five years before and after the merger of NASDAQ with OMX, we test causality between NASDAQ and OMX stock indices. This table shows the f−value with the null hypothesis of no causality and the coefficients of the speed of adjustment between OMX indices and the NASDAQ index when the NASDAQ index is regarded as a dependent variable. m = n = 10. The coefficients of the first two lag terms of and are reported as well. In addition, is not included in the model for the one year before the sample as a result of no cointegration. *, **, and *** denote the significance at 10%, 5%, and 1%, respectively. Source: author's own calculation. ∆X = δ + ∑ α′ ∆X + ∑ β′ ∆Y + γ′ • ECM + u . Nonlinear Causality To test the existence of strictly nonlinear causal relationships between the NASDAQ and OMX indices, we employed the nonlinear nonparametric causality test with m = 1, L = L = 10, e = 1.5. Table 7 shows the results of the nonlinear causality before and after the merger. We first look into the results of Panel A, which is based on the null hypothesis that NASDAQ does not nonlinearly cause OMX. We find only one rejection of noncausality at the 5% level before the merger in the short, medium, and long runs, while after the merger, NASDAQ nonlinear noncausality is never rejected at the 5% level in the short run. In the medium and long run, it is rejected three times each. In Panel B, we consider the opposite directional nonlinear causality between NASDAQ and OMX. As in Panel A, the number of significant rejections at or above the 5% level after the merger diminish in the short run but increase in the medium and long runs. These findings imply that the causality between NASDAQ and OMX becomes more complex after the merger in the medium and long runs. and an independent variable separately. m = 1, L = L = 10, e = 1.5 . *, **, and *** denote the significance at 10%, 5%, and 1% levels, respectively. Source: author's own calculation, using C programs software. 
Combining the results of linear and nonlinear causality, we find that (1) before the merger, NASDAQ and OMX have bidirectional causal relations in the long run and unidirectional relations in the short run, and these relations are primarily linear. (2) After the merger, the error-correction mechanism pushing OMX back to long−run equilibrium works better and more significantly; it does not work for NASDAQ. (3) After the merger in the medium and long runs, the causal relation of OMX causing NASDAQ becomes nonlinear. These results hint that NASDAQ and OMX operated independently before the merger. However, after the merger, NASDAQ and OMX operate as a group or a team. NASDAQ performs like a leader, and OMX performs like a follower. The predictive power of OMX on NASDAQ becomes weaker and nonlinear after the merger. Additionally, OMX, instead of NASDAQ, becomes the one who is responsible for adjusting and returning to the long equilibrium. Mean−Variance and Mean Omega Analysis We now turn to the question of whether and how the performance of the indices changes after the merger. Table 8 presents the basic statistics and Omega ratios for the daily stock excess returns of the stock indices from Euronext OMX in the short, medium, and long runs. Except for Tal and NASDAQ in the medium term, we find that the mean returns before the merger are higher than those after the merger. However, except for Vil in the short/long run, none of the coefficients are significant at the 5% level or better. Thus, we conclude (1) that there is no premerger or postmerger outperformance. On the other hand, the standard deviations of each index are larger after the merger. Among them, the F−statistics of the return between pre− and postmerger are all significant at the 1% significance level. This result infers that investors suffer more volatility after the merger when they invest in the OMX markets. All the Omega ratios with the threshold return of 0.00% are larger in the premerger period, showing a lower probability of earning positive profits after the merger. These results are consistent across the different time periods included in the sample. When we set the threshold return, −0.50%, the Omega ratios of the OMX and NASDAQ indices after the merger are much smaller than they are before the merger. However, when we set the threshold return relatively higher, at 0.50%, all Omega ratios are higher in the postmerger period except Vil in the long run. These findings imply that it is easier for investors to earn positive profits or control losses before the merger, but investors enjoy a higher probability of achieving a relatively high return after the merger. According to the three points above, we conclude that there is no existence of significant change in the mean of performance and that risk-averters prefer to invest before the merger to control risk while risk-seekers prefer to invest after the merger to earn a higher return. Using subsamples, namely, one/three/five years before and after the merger of NASDAQ with OMX, we report the mean-variance of the daily return of the stock index. This table shows the results of the t-test and the f-test with the null hypothesis that the mean and volatility of the stock index are different pre-and postmerger. Omega ratios with different returns, i.e., 0.00%, −0.50%, 0.50%, are shown as well. * and *** denote the significance at 10% and 1% levels, respectively. Source: author's own calculation. 
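The Omega ratios compared in Table 8 can be reproduced from a daily return series in a few lines: the two integrals in the definition above are replaced by sample averages of gains and losses relative to the threshold. The sketch below is illustrative only; the simulated return series and the function name are hypothetical, and the thresholds are chosen to match the −0.50%, 0.00%, and 0.50% levels used in the table.

```python
# Empirical Omega ratio: probability-weighted gains over losses relative
# to a threshold return r, i.e. a sample analogue of
# Omega(r) = E[(X - r)^+] / E[(r - X)^+]. Illustrative sketch only.
import numpy as np

def omega_ratio(returns, threshold: float = 0.0) -> float:
    excess = np.asarray(returns, dtype=float) - threshold
    gains = excess[excess > 0].sum()       # total gain above the threshold
    losses = -excess[excess < 0].sum()     # first-order lower partial moment (unnormalized)
    return np.inf if losses == 0 else gains / losses

# Example with hypothetical daily returns and the thresholds of Table 8
rng = np.random.default_rng(0)
daily = rng.normal(0.0002, 0.012, size=750)
for r in (-0.005, 0.0, 0.005):
    print(f"Omega({r:+.3%}) = {omega_ratio(daily, r):.3f}")
```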
Conclusions This paper investigates how the stock exchange merger of NASDAQ with OMX affects the comovement between the stock markets of OMX and NASDAQ and briefly examines whether the merger reduces investor utility by reducing diversification opportunities. Some players in the market may not realize that stock exchanges were created like mutual organizations and owned by its member stockbrokers, but some players in the markets have demutualized and their members sell their shares in an initial public offering. Actually, stock exchanges are run like normal private companies and try to increase the wealth of the shareholders as much as possible. Thus, the principal question is, do they care about the sustainable development of investments, allowing investors to diversify their investments and reduce the risk of their investments? In this regard, we are interested in whether it is necessary to set up some national policies and international treaties for sustainable development and to implement and monitor policies for the sustainable development of stock markets. We find that the comovement between indices in the OMX and NASDAQ indices adjusts due to the merger. The cointegration test shows that the long-run common trend exists one year after the merger but not one year before the merger, implying that the merger improves the integration of the two stock exchanges, which, in turn, implies that the merger of NASDAQ and OMX becomes more sustainable. The results are congruent with Choudhry et al. [17], Kearney and Lucey [18], and Chen et al. [19], in that cointegrated stock markets weaken the benefits of international portfolio diversification in the long run. Using Granger causality with ECM, we find that the error-correction mechanism for NASDAQ causing OMX indices becomes significant after the merger, providing further evidence of the improvement of integration after the merger. However, the causal relation from OMX to NASDAQ becomes insignificant and/or nonlinear after the merger. These findings show that the relationship between the two exchanges changes after the merger. Finally, our study shows that the volatility of stock returns seems to be higher, with no clear rise of mean after the merger. In addition, the probability that a low threshold return will be achieved becomes lower after the merger, implying that it is difficult for investors to control risk as a result of the decreased diversification opportunities after the merger; however, the probability of achieving a relatively high target return becomes higher. Our finding confirms that the merger increases in the long−run comovement between each pair of indices in Nordic and Baltic stock markets, implying that the merger of NASDAQ and OMX reduced the diversification possibilities for investors in stock markets and inferring that it is important to implement national policies and international treaties for the sustainable development of financial markets. As already mentioned, Otchere and Abukari [4] examined whether stock exchange mergers could increase efficiency or if these stock exchanges mergers are only a question of market power, finding that the industry's concentration levels have not significantly increased and the concentration levels do not influence the exchanges' profitability in the postmerger period. Our investigation complements the Otchere and Abukari [4] findings, describing that stock exchange mergers do not benefit stock market investors in terms of portfolio diversion. 
One limitation of our study is that we have not compared other mergers of stock exchanges that have occurred in history. An extension of our study could compare other mergers of stock exchanges that have occurred in history to check whether the effects of other mergers are the same as those in our study and whether the effects have changed from time to time. Another limitation of our study is that we have not explored, at the same time, whether the wealth of Euronext shareholders increased after the merger with OMX. An extension of our study could also study the change in the wealth of the shareholders after the mergers.
2020-10-28T19:21:38.978Z
2020-10-16T00:00:00.000
{ "year": 2020, "sha1": "d8449dea864de7ef3bfb47f5f762f277c9f48e68", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/12/20/8581/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "14b9d559a7bde03773cfbfa426b178bdb98e8216", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
245702109
pes2o/s2orc
v3-fos-license
Genetic identification of bat species for pathogen surveillance across France With more than 1400 chiropteran species identified to date, bats comprise one-fifth of all mammalian species worldwide. Many studies have associated viral zoonoses with 45 different species of bats in the EU, which cluster within 5 families of bats. For example, the Serotine bats are infected by European Bat 1 Lyssavirus throughout Europe while Myotis bats are shown infected by coronavirus, herpesvirus and paramyxovirus. Correct host species identification is important to increase our knowledge of the ecology and evolutionary pattern of bat viruses in the EU. Bat species identification is commonly determined using morphological keys. Morphological determination of bat species from bat carcasses can be limited in some cases, due to the state of decomposition or nearly indistinguishable morphological features in juvenile bats and can lead to misidentifications. The overall objective of our study was to identify insectivorous bat species using molecular biology tools with the amplification of the partial cytochrome b gene of mitochondrial DNA. Two types of samples were tested in this study, bat wing punches and bat faeces. A total of 163 bat wing punches representing 22 species, and 31 faecal pellets representing 7 species were included in the study. From the 163 bat wing punches tested, a total of 159 were genetically identified from amplification of the partial cyt b gene. All 31 faecal pellets were genetically identified based on the cyt b gene. A comparison between morphological and genetic determination showed 21 misidentifications from the 163 wing punches, representing ~12.5% of misidentifications of morphological determination compared with the genetic method, across 11 species. In addition, genetic determination allowed the identification of 24 out of 25 morphologically non-determined bat samples. Our findings demonstrate the importance of a genetic approach as an efficient and reliable method to identify bat species precisely. Introduction Actinopterygii and Malacostraca) from putrefied samples [27]. Primer design is based on an alignment of referenced cyt b gene sequences (-1140 nt) from 751 Mammalia species, including bats. Primers have been used for the identification of different animal species belonging to 38 families, except bats. Many types of sample have been tested, including muscle, brain, lung or spleen tissue, blood, oral swabs, and others [20,27]. However, the drawbacks of collecting these types of samples involve the need to capture and restrain the animals combined with the difficulty of handling them. To avoid sampling live animals, using a non-invasive sampling technique such as faeces sampling can be an alternative solution to the capture of bats. Faecal samples represent a simple and easy method to collect samples from living bats without disturbing them using capture/release methods [28,29]. One study has demonstrated the possibility of genetically identifying bat species from guano samples and other non-invasive samples based on the amplification of a segment of the mitochondrial gene cox1 [21]. Despite the fact that some studies have shown disadvantages of studying faeces samples, due to the presence of PCR inhibitors, fragmented DNA and the poor quality of extracted nucleic acids [30], other studies have demonstrated the efficacy and success of studying bat guano [9,21]. The aim of this study was 1) to optimize the rapid PCR method previously described in Lopez-Oceja et al. 
(2016) with the new universal cyt b primers to identify autochthonous bat species from different types of bat sample, namely guano and wing punches tested for the first time; 2) to genetically determine bats in France and 3) to compare the morphological and genetic species identification of bat carcasses submitted for rabies diagnosis in 2018 and 2019. Bat specimens The specimens used in this study were selected from a frozen and archived collection of bat carcasses submitted to the ANSES-Nancy Laboratory for Rabies and Wildlife for rabies diagnosis between 2018 and 2019. Wing punches (each~8 mm,~0.02 mg) were sampled from bat carcasses diagnosed negative for rabies and stored at -20˚C. All bats were previously identified using a morphological identification key by bat specialists [15]. The choice of bat samples was based on the following essential criteria: bat species and the geographic zone of collection. A total of 200 bat carcasses belonging to one of three families, Rhinolophidae, Vespertilionidae and Miniopteridae, representing 22 species were included in the study. Of the 200 bat wing punches tested, 37 were included in the development of the PCR and 163 were used in the PCR amplification of the partial cyt b gene followed by sequencing of amplified products and sequence analysis. Tables 1 and 2 gives the characteristics of the 200 bat specimens used in this study. In addition, bat guano (one faecal pellet~50 mm 2 ;~0.02 mg) was also collected by bat specialists from the French Bird Protection League (LPO) Alsace as part of authorized bat studies. Faecal pellets were collected directly on the ground under the bat colony in three different sites in the Grand Est region in France. Bat species were determined in each selected area by inspected hanging individuals in the colony. A total of 31 bat faecal samples representing 7 species belonging to the families Rhinolophidae and Vespertilionidae were included in the genetic identification study (Table 3). Samples were collected in individual bags, stored at -20˚C and then at -80˚C before analysis. Ethics statement Bats are protected species in Europe and in France. All biological samples employed in this study had been submitted for rabies diagnosis by ANSES-Nancy Laboratory for Rabies and Wildlife in accordance with the formal authorization by the French Ministry of the Environment [31]. In France and within the European Union, the legal frame-work for using under experimentation purposes is governed by Regulation 2010/63/EU of the European parliament and of the council of 22 September 2010 (applicable and translated in French in 2013) and handling of wildlife animal in the field does not require any prior specific ethical approval. DNA extraction DNA extraction was performed using 1 punch per animal or 1 faecal pellet per site or per bat. Wing punches were directly used for DNA extraction, whereas a pre-extraction step was carried out to prepare bat faeces. Each faecal pellet was ground with 120 μL of 1X PBS buffer (phosphate buffered Saline, Sigma-Aldrich, Saint Quentin-Fallavier, France) then centrifuged for 5 min at 30,000 x g. For DNA extraction, 20 μL of supernatant was used and the extraction was performed using the Nucleospin Tissue Kit (Macherey Nagel, Hoerdt, France), following the manufacturer's recommendations. DNA samples were quantified using a Qubit fluorometer (Invitrogen, Marseille, France) and stored at -20˚C before use. 
Sequencing and phylogenetic analysis Amplicons were analysed using 2% agarose gels stained with the intercalating dye SYBR Safe (Thermo Fisher Scientific, Illkirch, France) and then visualized using a Bioimager (Bio-Rad, Roanne, France). Sanger sequencing of PCR products was carried out by a service provider (Eurofins, Ebersberg, Germany) with the reverse and forward primers used in the PCR. All nucleotide sequences were assembled using Vector NTI software (version 11.5.3) (Invitrogen, France). Sequence alignments and determination of the percentages of identities and similarities were carried out with BioEdit software (version 7.2.5) and MEGA X. Genetic identification was determined using BLAST (Basic Local Alignment Search Tool) and by constructing a phylogenetic tree with MEGA X using the maximum likelihood algorithm and the Tamura-Nei model between the 25 sequences from this study (representing 2 families and 15 species) and 52 representatives of bat species (3 families, 29 species) (Table 4). The bootstrap probabilities of each node were calculated using 500 replicates to assess the robustness of the maximum likelihood method. Bootstrap values over 70% were regarded as significant for phylogenetic analysis. The nucleotide sequences were identified using BLASTN with the following parameters: standard nucleotide database and standard algorithm parameters by default (threshold of 0.05 and mismatch scores of 1, -2). In each case, the top BLAST hit was retained if the BLAST alignment covered more than 95% of the query length and the BLAST high-scoring segment pair identity was greater than ~90%. Genetic identification of bat carcasses and bat faeces Bat carcasses. Of 163 bat wing punches tested using cyt b PCR, 152 were genetically identified by BLAST analysis and/or phylogeny. The 152 genetically identified samples represented the 3 families currently distributed throughout France, with bat species belonging to the families Miniopteridae (n = 1), Rhinolophidae (n = 2) and Vespertilionidae (n = 19), respectively (Table 5). Twenty species out of the 35 bat species reported to date in France were genetically determined, with an over-representation of Pipistrelle bats in the sampling (37% = 61/163 × 100). BLAST analysis allowed the identification of 2 bat species belonging to the Rhinolophidae family with ~96% nucleotide similarity with the GenBank sequences KU531352 (R. hipposideros) and MH029812 (R. ferrumequinum) and the identification of M. schreibersii from the Miniopteridae family with 93% nucleotide similarity with the MK737740 sequence. Within the Vespertilionidae family, 16 bat species were genetically identified by BLAST, with nucleotide identity ranging from 87% to 100% (S1 Table). Twenty out of the 156 samples belonging to the Vespertilionidae family could not be identified by BLAST sequence analysis of the cyt b amplicons. These samples had previously been morphologically determined as E. serotinus (n = 6), V. murinus (n = 2), E. nilssonii (n = 1), and Plecotus sp. (n = 11). Interestingly, the phylogeny allowed the genetic determination of two species, Plecotus austriacus and Plecotus auritus, for 9 samples analysed, with a bootstrap of 99 (Fig 1). The partial D-loop amplification (424-bp) of five bats morphologically identified as E. serotinus showed 100% nucleotide similarity with E. serotinus (GenBank accession no. MF187797.1). Bat faeces. 
The analyses of cyt b sequences led to a specific identification of the 31 samples of bat species from one faecal pellet for the seven bat species tested (Table 6). The 31 genetically identified samples represented 2 out of the 3 families currently distributed throughout France with bat species belonging to the families Rhinolophidae (n = 1) and Vespertilionidae (n = 3), respectively (Table 6). BLAST analysis allowed the identification of the bat species, R. hipposideros with~96% of nucleotide similarity with the GenBank KU531352 and KC978344 sequences.~94% of similarity were shown between bats morphologically identified as M. emarginatus and the AF376849 GenBank sequence representative of M. emarginatus. Within the two species P. Interestingly, and as for bat carcasses, the samples that had previously been morphologically determined as Plecotus sp (n = 11) could not be identified by BLAST sequence analysis of the cyt b amplicons but was identified by phylogeny with a bootstrap of 99 (Fig 1). Genetic identification allowed clarifications for 26 bats tested (18 bats morphologically identified as not determined and 8 bats morphologically identified as Pipistrellus sp.) ( Table 5). Bat faeces. The genetic identification of bat species from the guano samples showed 2 morphological misidentifications out of the 31 guano samples tested. Misidentifications were reported in two sites: the site 22 among Plecotus sp. and R. hipposideros and the site 31 among E. serotinus and R. hipposideros (S1 Table). Our results corroborate the Nadin-Davis (2012) study, which also showed non-negligible percentages of morphological bat species misidentification of between 10 and 15%. It is rare and very complicated to collect samples for research or rabies diagnosis from autochthonous bats. The fact that all bat carcasses included in this study came from a sample collection compiled for rabies diagnosis at ANSES Laboratory led to an over representation of P. pipistrellus in our sampling. In France, P. pipistrellus is a very common bat species compared with other bat species. On average, there is one P. pipistrellus colony in each town in France (Laurent Arthur, personal communication). P. pipistrellus represents on average between 45 and 50% of the total number of carcasses in the rabies diagnosis sample collection. In our study, P. pipistrellus represented 16% of the total number of samples. The species could not be identified for 11 of the 163 samples tested. These samples were morphologically identified as E. serotinus (n = 6), E. nilssonii (n = 1), V. murinus (n = 2) and Plecotus sp (n = 2). One hypothesis of species non-identification is that the cyt b PCR was not able to identify these 8 samples due to DNA degradation. Two published studies investigated the genetic structure of E. serotinus bats by amplifying the partial D-loop region [25,26]. Thus, the amplification of the partial D-loop region on the five E. serotinus was successful and our results on Sanger sequencing confirmed the morphological species determination as E. serotinus. Regarding bat faecal specimens, results and analyses of the 31 amplicons showed that the cyt b PCR allowed specific identification of bat species from just one faecal pellet of bat guano. Bat species have previously been genetically identified from guano samples by amplification of a segment of the cox1 mitochondrial gene using real-time PCR [21]. 
Some studies have demonstrated the advantages of using real-time PCR compared with conventional PCR: real-time PCR is more sensitive, specific and rapid as a diagnostic method for detecting Vibrio vulnificus and Samonella spp. compared with conventional PCR [55,56]. Both PCR techniques are equally effective for detection of the genome of visceral leishmaniasis [57]. The discrepancy between the results obtained in our study and those of the Walker et al. study likely arises from using a traditional PCR with the cyt b gene universal primers [21,27]. In our study, the genetic determination of bats was based on universal primers of the cytb gene, described by Lopez-Oceja et al., as highly specific, especially for highly degraded DNA samples (Lopez-Oceja et al., 2016). Species identification from bat faecal samples can also be undertaken by DNA minibarcode assay based on the amplification of a segment of the mitochondrial gene cytochrome c oxidase I (COI) [21]. New primers targeting a 580 bp fragment of the COI gene were described for the identification of bat species [21]. Interestingly, the comparison between the cytb and COI genes was studied by Tobe et al. for reconstructing mammalian phylogenies [58]. Their results tend to support the use of Cytb over that of COI. Conventional PCR allowed us to obtain nucleotide sequences from amplicons and to genetically determine bat species using BLAST and/or phylogeny. In addition, the cost of real-time PCR is higher than conventional PCR. In our study, we demonstrated the efficacy of using universal cyt b primers to genetically identify autochthonous bats from faecal samples, a non-invasive method. The cyt b PCR made it possible to determine 18 bat samples that could not initially be identified based on morphological criteria. Non-determination of bats can be attributed to the state of decomposition of bat carcasses, the age of the bat, especially for juveniles or pups, or inexperienced bat naturalists. Morphological identification of bat species is usually carried out on living bats. Some morphological features disappear if the carcasses are not fresh, and identification becomes more complicated, creating a source of errors [59,60]. It is important to identify bat species to preserve bats, which play a key role in the environment. Bats play an important biological and ecological role and many studies have suggested that they are reservoirs in the transmission of many zoonoses and infectious diseases from animals to humans [3,9,61]. To better understand bats and their role in the circulation of pathogens, specific and precise identification of bat species is required. Our results here showed that genetic identification is an efficient way to identify bat species in France and is a rapid and reliable tool to use compared with morphological identification. Supporting information S1
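The BLAST acceptance rules described in the Methods (the top hit retained only when the alignment covers more than 95% of the query length and the high-scoring segment pair identity exceeds roughly 90%) can be expressed as a short post-processing step over tabular BLASTN output. The following is a minimal sketch only, not the pipeline actually used in this study; the column layout assumes the standard tabular format, and the file and sequence names are illustrative placeholders.

```python
# Minimal sketch of the acceptance rules applied to tabular BLASTN output
# (assumed "-outfmt 6" column order); thresholds follow the Methods above.
import csv

MIN_COVERAGE = 0.95   # fraction of the query length that must be aligned
MIN_IDENTITY = 90.0   # percent identity threshold

def accept_top_hits(blast_tsv, query_lengths):
    """Return {query_id: subject_id} for queries whose best hit passes both rules."""
    accepted = {}
    with open(blast_tsv) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            # outfmt 6: qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore
            qseqid, sseqid = row[0], row[1]
            pident, aln_len = float(row[2]), int(row[3])
            if qseqid in accepted:            # hits are reported best-first per query
                continue
            coverage = aln_len / query_lengths[qseqid]
            if coverage > MIN_COVERAGE and pident >= MIN_IDENTITY:
                accepted[qseqid] = sseqid
    return accepted

# Example (hypothetical file and amplicon): a 700 bp cyt b amplicon aligned over
# 680 bp at 96.4% identity would be accepted.
# accept_top_hits("cytb_vs_nt.tsv", {"bat_017": 700})
```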
2022-01-06T05:20:02.823Z
2022-01-04T00:00:00.000
{ "year": 2022, "sha1": "c32dee7ffb16faa4f0163bce39ee762fb6d50422", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0261344&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c32dee7ffb16faa4f0163bce39ee762fb6d50422", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
244501299
pes2o/s2orc
v3-fos-license
Bovine serum albumin promotes reactivation of viable but non-culturable Mycobacterium tuberculosis via activation of protein kinase-dependent cell division processes Abstract in shade. Excessive dye was removed using 3% hydrochloric acid ethanol for 15 minutes. Next, the smear was covered with 10 µg/mL Nile Red in ethanol and incubated at room temperature for 15 minutes. Finally, the smear was counterstained with 0.1% (w/v) potassium permanganate solution for 1 minute. Stained slides were air-dried and mounted as described previously. Fluorescence microscopy was performed at 100× magnification for green (acid-fastness The VBNC reactivation assay was performed as described elsewhere (18) into 24-well plates in 1.5 mL increments and sealed with gas-permeable film. The plate was incubated for 20 days at 37˚C under 5% CO2. At the end of incubation, the number of colonies grown was measured as described previously. Evaluation of the reactivation-promoting effect of fatty acid and globulin-free BSA To determine the effect of BSA Cohn fraction V contaminants, we measured the reactivation-promoting activity of fatty acid and globulin-free BSA toward DPI-treated VBNC Mtb cells. for the untreated control population and 4.64 ± 6.10 × 10³ CFU/mL for the DPI-treated population. culturability. The addition of albumin alone into the medium could also induce reactivation. The addition of FBS in BSA-free Dubos medium slightly reduced the regrowth rate; however, cells were successfully reactivated at the end of incubation with or without FBS. These phenomena were also observed in H37Ra (Suppl. Fig. 2). Thus, we used the H37Rv strain for the further analyses in this study. Incubation with sodium pyruvate, which was reported to have a reactivation-promoting effect on VBNC cells elsewhere, led to transient regrowth by day 15 and a slight reduction at day 20 (Suppl. Fig. 3[A] and [B]). We also confirmed that the presence of a small population of intact Mtb cells (10³ CFU/mL) could grow normally in BSA-free Dubos medium, suggesting that the reactivation might not be due to the presence of a small number of culturable cells after DPI treatment (data not shown). IV. The antioxidative property or the fatty acid from BSA did not promote reactivation. As shown in Fig. 4(A) and (B), the reactivation-promoting effect of BSA was specific to bovine and human serum albumin. Ovalbumin did not show a reactivation-promoting effect but maintained the number of culturable cells in this system. NAC (an antioxidative agent) and D-mannitol (a free radical scavenger) did not show any reactivation capacity. We also checked whether the purity of albumin affects the promotion of reactivation using fatty acid and globulin-free BSA and confirmed that there was no significant difference in reactivation-promoting effects (Suppl. Fig. 4). These results suggest that commercially available albumin including fatty acids does not affect the reactivation of DPI-treated Mtb. incubation with 10 µM or 30 µM H89 resulted in CFU/mL values of 3.42 × 10⁶ ± 7.45 × 10⁵ CFU/mL and 3.63 ± 1.70 × 10² CFU/mL, while incubation without H89 resulted in 3.00 × 10⁸ ± 5.07 × 10⁷ CFU/mL. We also confirmed that staurosporine, which is known as a mycobacterial protein kinase PknB inhibitor (35), suppressed reactivation at 10 µM and resulted in a CFU/mL value of 2.09 × 10³ ± 7.62 × 10² CFU/mL, while incubation without staurosporine resulted in 3.58 × 10⁸ ± 5.78 × 10⁷ CFU/mL (Fig 5[A] and [B]). None of the inhibitors used in this study caused a reduction of the growth of intact Mtb cells (Suppl. Fig. 5A and 5B). We also performed molecular docking simulation of these inhibitors toward their presumed targets on Mtb. As shown in Suppl. In this study, we confirmed that DPI could induce a VBNC state in H37Rv as well as H37Ra. The mechanism underlying the effect of DPI is considered to involve the inhibitory effect of NADH oxidase, which results in the inhibition of the electron transport system of Mtb. This with Nile Red (Fig. 2[F]). This could also reveal the distribution of the lipid body in the cell as some foci of relatively strong signals of Nile Red, suggesting that the transformation from the growing state to VBNC occurs with drastic alteration of the lipid metabolism. Secondly, we found that DPI-induced VBNC could facilitate reactivation not only by incubation with FBS but also with OADC supplementation and BSA alone, suggesting albumin might act as a reactivation-promoting agent in both H37Rv and H37Ra (Fig. 3 and Suppl. Fig. 2). These findings were contrary to the findings of the previous study, which showed that only FBS could facilitate reactivation (18). We considered the reason for the difference may be due further analyses were performed using H37Rv. We also tested the reactivation-promoting effect of BSA in Wayne's hypoxic culture, which is widely used for inducing a VBNC state of Mtb, and found that there was no significant difference with or without BSA (data not shown). Pyruvate, which is known to act as a reactivation-promoting agent for both gram-negative In this study, we should note that impurities of albumin did not affect reactivation. Although the underlying mechanism is still unclear, both fatty acid and globulin-free BSA and BSA Cohn fraction V showed similar reactivation-promoting effects toward DPI-treated Mtb (Suppl. Fig. 4). The reactivation inhibition assay by SQ22536, H89 and staurosporine gave us an important clue for understanding the effect of BSA. In the present study, we could suppress the which was shown in PknB (64) and staurosporine, which acts as a PknB inhibitor, could also inhibit reactivation of DPI-induced VBNC Mtb (Fig. 5). Our study suggested that the inhibition of mycobacterial protein kinase by H89 and staurosporine seems to critically affect several important cellular processes, followed by reactivation (Fig. 6).
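The CFU/mL values quoted above follow from standard plate-count arithmetic: colonies counted, multiplied by the dilution factor and divided by the plated volume, then summarized as mean ± SD over replicate plates, with a log10 reduction calculated against the untreated control. The sketch below illustrates only that calculation; the colony counts, dilution factors and plated volume are invented examples and are not data from this study, whose exact plating protocol is referenced rather than described here.

```python
# Minimal sketch of plate-count arithmetic behind values such as
# "3.58 x 10^8 ± 5.78 x 10^7 CFU/mL"; all inputs below are invented examples.
from statistics import mean, stdev
from math import log10

def cfu_per_ml(colony_counts, dilution_factor, plated_volume_ml=0.1):
    """Mean and SD of CFU/mL from replicate colony counts at one dilution."""
    values = [n * dilution_factor / plated_volume_ml for n in colony_counts]
    return mean(values), stdev(values)

untreated = cfu_per_ml([31, 38, 29], dilution_factor=1e6)   # roughly 3 x 10^8 CFU/mL
treated = cfu_per_ml([24, 18, 21], dilution_factor=1e2)     # roughly 2 x 10^4 CFU/mL

print(f"untreated: {untreated[0]:.2e} ± {untreated[1]:.2e} CFU/mL")
print(f"treated:   {treated[0]:.2e} ± {treated[1]:.2e} CFU/mL")
print(f"log10 reduction: {log10(untreated[0] / treated[0]):.2f}")
```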
2021-11-24T16:24:57.769Z
2021-11-22T00:00:00.000
{ "year": 2023, "sha1": "62ddd6bae7d148daf53fe57c32bfbda40cb8f787", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/11/22/2021.11.22.468319.full.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "6b3b39109d782f7de4b966554c39716d2bff5757", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
229214549
pes2o/s2orc
v3-fos-license
Distortions, deviations and alternative facts: reliability in crystallography In this text based on the 2018 Lonsdale lecture, beginning with early work by Kathleen Lonsdale, instructive examples are given of unusual and unexpected structures derived from X-ray crystallography, not all of which are genuine results. Introduction This article is based on the Lonsdale lecture given by invitation at the British Crystallographic Association Spring Meeting at Warwick University in March 2018; by tradition, the Lonsdale Lecturer, nominated by the BCA Young Crystallographers' Group, is expected to combine aspects of original research with an educational approach. The lecture in 2018, with a title partly inspired by a current political catchphrase, took its starting point from work carried out by Professor Dame Kathleen Lonsdale (Fig. 1), in whose honour the annual lecture was created. Lonsdale, born Kathleen Yardley in 1903, was the first woman President of the IUCr (1966) and one of the first two women to be elected Fellow of the Royal Society (1945); she was appointed DBE (Dame Commander of the Most Excellent Order of the British Empire) in 1956. She died of cancer in 1971. Probably her best-known published work was as joint editor of Volume I (Symmetry Groups) of International Tables for X-Ray Crystallography in 1952. In addition to scientific works, she wrote books expressing her Christian faith and pacifism (as a Quaker), including an account of her time in prison as a conscientious objector, and based on her own balance of scientific research and family life; she was a fulltime mother of three children in the early 1930s. She saw no conflicts in these diverse aspects of her life. Angular distortions in and around benzene rings Such out-of-plane distortions can be considerable with six bulky substituents. The most extreme case in which the substituents are chemically identical is C 6 (SiMe 3 ) 6 (KELVOM; Sakurai et al., 1990) with internal torsion angles up to 12 and external up to 62 . A particularly distorted ring, with maximum internal and external torsions of 45 and 73 , respectively, is found in 1,3,5-tris(diethylamino)-2,4,6-trinitrobenzene (JARLOD; Chance et al., 1989); this is most certainly not a planar arrangement! Polyaromatics, in which benzene rings are fused together, extend the scope for out-of-plane twists generated by steric hindrance of substituents on adjacent rings. This is well illustrated by a series of so-called 'twistacenes' (Fig. 2), in which the overall molecular twist is measured by the dihedral angle between the end groups; with just one diphenyl-substituted ring this is already 66 , and increases with the addition of each further ring to 105, 144 and 184 (Pascal et al., 1986;Rodríguez-Lojo et al., 2013;Lu et al., 2004;Clevenger et al., 2018). Angular distortions may also occur in the plane of the ring; although ring internal angles generally deviate only slightly from the ideal value of 120 , the two external angles X-C-C for a substituent X can vary much more, one of them expanding while the other shrinks. Having found some marked distortions of this kind in our own research (Fig. 3, top and middle), I needed to find suitable structures for comparison and assessment of this effect. A search of the CSD with threshold values for the relevant geometric parameters (e.g. an external angle <100 ) is straightforward but needs to be carried out with care for two reasons. 
First, significant angular distortions are a necessary consequence of small rings fused to the benzene ring (a four-membered ring has internal angles around 90°), so structures in which X is part of a fused ring should be excluded from the search. Second, a surprising number of severely distorted structures (several hundred) are identified, even with few and simple substituents. Closer investigation shows that most of these are structures with probable disorder that has not been handled satisfactorily or some other artefact of a poor refinement model; a particularly common case (almost 30%) is an undoubtedly disordered and poorly modelled toluene solvent molecule, for which a significant distortion is not likely to be real. Compared with genuine cases of in-plane angular distortions in this way, our own results (Maddock et al., 2018) are indeed extreme, the smallest C-C-Fe angle in these ferrated benzene derivatives being 97° with others not much larger. Clearly the cause here is a significant secondary Fe⋯N interaction that may be regarded as incipient covalent bonding leading to angular distortions also at the Fe and N atoms. A similar distortion has been found for a P⋯B interaction in a compound with adjacent phosphane and borane substituents (Cowie & Emslie, 2014). An even more extreme case occurs with an angle of 85° when one of the trimethylsilyl groups is removed from a ferrated benzene to give an anionic species (Clegg & Hevia, 2020; Fig. 3, bottom). The Fe⋯N secondary bonding interaction here is obviously strengthened and this raises the related question of what distinguishes a secondary interaction from a covalent bond. To find appropriate answers in scientific research we must make sure we ask the right questions! Another class of compounds that are expected on simple arguments to have planar molecules are porphyrins. Large folding distortions to give bowl and saddle shapes can be produced by a combination of electronic and steric effects of substituents (Smith et al., 2005, 2018; Blake et al., 1998). Reliable characterization of these structures is challenging in the face of high-Z′ values and extensive disorder leading to overall low precision, but the observation of consistent bond length patterns permits an assignment of NH versus N in the porphyrins, even though the H atoms cannot be located in difference electron density maps. 'Added value' from consistencies and trends in a series of structures and comparison with theoretical models Another good example of geometric pattern recognition yielding useful information beyond the statistical significance of a single structure determination is provided by a series of hexameric imidolithium clusters [Li(N=CRR′)]₆ (Clegg et al., 1983; Barr et al., 1986; Armstrong et al., 1987); though some of these have crystallographic inversion symmetry, others crystallize with more than one molecule in the asymmetric unit, so the entire series gives many instances of a motif of a triply bridging N atom over an Li₃ triangle (Fig. 4). The motif is unsymmetrical and in principle has three different, inequivalent Li-N bond lengths; their mean values taken over a total of 24 symmetry-independent units in this series of structures are 1.99, 2.01 and 2.05 Å, differences that are statistically 
insignificant for individual motifs but are a consistent pattern without exception across all the cases. A theoretical calculation for the archetypal amidolithium [LiNH 2 ] 6 published at about the same time (Raghavachari et al., 1987) suggested bond lengths of 1.99, 1.99, and 2.06 Å ; the small mismatch with the experimental structures was ascribed by those authors to 'crystal packing forces', despite the consistent pattern observed in molecules with different crystalline environments, and led to a discussion in print (Clegg et al., 1988;Raghavachari et al., 1988). The theoretical study did not recognize the subtle but important distinction between amido and imido ligand systems; the geometrical distortions away from equal bond lengths are small but significant. An evolving conflict between theoretical and experimental structures was also found in the geometrically simpler case of five-coordinate complex anions [MCl 5 ] 3À , where M is a divalent metal, in crystalline salts with an [M 0 (NH 3 ) 6 ] 3+ cation (M 0 = Cr or Co) [Fig. 5(a)]. Previous results with trigonal-bipyramidal geometry were known for M = Cu, where shorter axial bonds are an expected consequence of the d 9 metal ion electron configuration, and for M = Cd, where the axial and equatorial bonds are almost the same length; for d 10 metal ions theoretical models suggested equal bond lengths or an axial elongation (Raymond et al., 1968;Long et al., 1970;Burdett, 1975Burdett, , 1976Rossi & Hoffmann, 1975). The structure for M = Hg, in the same cubic space group as these two, was found to have a marked axial compression in contradiction to this expectation (2.519 versus 2.640 Å ) (Clegg et al., 1975). Subsequent modified theoretical treatments were able to reflect this experimental result (Shustorovich, 1978). However, a second polymorph with lower symmetry (and minor disorder, easily modelled), discovered later, has the opposite trend, with 3.034 Å axial and 2.417 Å equatorial bonds (Clegg, 1982), so the situation is not so simple [Fig. 5(b)]. A comparable axial elongation was subsequently found for two different polymorphs of the salt with M = Hg and M 0 = Co (Clegg, 1982;Herlinger et al., 1981) [Fig. 5(c)]. The structure of the complex with M = Zn and M 0 = Cr is different again. It is isomorphous with the second (rhombohedral) polymorph of the corresponding Hg complex, but with disorder for the Zn atom as well as the 'equatorial' Cl atoms, such that the observed structure represents an intermediate stage of a ligand-exchange reaction between tetrahedral [ZnCl 4 ] 2À and a further chloride anion (Clegg, 1976): three (disordered) pseudo-equatorial Zn-Cl bonds are 2.215 or 2.270 Å in length, while the breaking and forming 'axial bonds' have lengths of 2.513 and 3.533 Å [ Fig. 5(d)]. As well as studying a series of related compounds, valuable information beyond that available from a single-crystal structure can also be derived from measurements on the same sample under different conditions of temperature, pressure or other environmental variables. With modern equipment including diamond anvil cells and highly reliable controlledtemperature devices this is a relatively straightforward undertaking; we used it, for example, in an investigation of the phase transition of barbituric acid dihydrate observed on cooling (Nichol & Clegg, 2005). 
It was a much more challenging experiment when Kathleen Lonsdale used variabletemperature data collection with photographic methods (Lonsdale, 1956), for example, to study atomic and molecular vibrations and thermal expansion for anthraquinone (Lonsdale et al., 1966) and for the [2.2 0 ]cyclophane molecule di-paraxylylene (Lonsdale et al., 1960), the latter being another example of benzene ring distortion out of planarity. Structural disorder: artefacts, misinterpretation and avoidance The incidence of disorder in a crystal structure, as well as complicating the process of structure determination from diffraction data, can lead to problems and ambiguities in interpreting the resulting refinement model. These issues may arise from questions of how the various disorder components should be considered as belonging to the same or different combinations, and also from the possibility that the disorder modelling is inappropriate or incomplete. In some cases, of course, where disorder is likely to be present but has not been recognized, the structure may be seriously misinterpreted. One of the classic examples is the saga of the so-called 'bondstretch isomers' of molybdenum complexes, elegantly summarized by Parkin (1993). What appeared to be markedly different Mo-O bond lengths in what were otherwise essentially identical molecules were actually artefacts of unrecognized and unresolved disorder of oxo (O) and chloro (Cl) ligands in a solid-solution mixture of two different compounds. Disorder can be a particular nuisance for molecules with a degree of pseudo-symmetry and may thwart the whole purpose of a structure determination experiment. An especially good example in my research experience is the investigation of carbaboranes with the intention of finding the structural consequences, particularly the influence on bond lengths, of introducing substituents with different electron- donating characteristics (Fig. 6). It was expected that the use of a range of substituents (X) on one of the two C atoms would have a particularly marked effect on the length of the C-C bond, which is similar to that of the other cage C-B and B-B bonds in many compounds of this family. Unfortunately, initial attempts in which the second C atom remained unsubstituted and retained its terminal H atom led to structures in which this C atom and the four B atoms bonded to the substituted C atom were disordered, there being no crystallographic evidence from geometry or electron densities to distinguish among these five atoms. The disorder, a consequence of five possible orientations of the molecule, is avoided by replacing the carbon-bound H atom by a substituent that is 'innocent' in the sense of having no significant electronic influence, thus clearly marking the C atom and, at the same time, providing a steric factor discouraging disorder. The use of a phenyl substituent has the added bonus of improving crystallization by offering the prospect of intermolecular aromatic ringstacking interactions. The results for a series of compounds with strongly electron-donating substituents (X) are unambiguous and very marked, with a considerable elongation of the C-C bond, to as much as 2.001 Å (from around 1.7 Å ) for a deprotonated OH substituent that behaves essentially as a pentuply bridging carbonyl group in the cage (Brown et al., 1987;Coult et al., 1992;Boyd et al., 2004;Fox, MacBride et al., 2009;); the 'proton sponge' salt of this anion is shown in Fig. 6. 
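Whether a difference such as the elongation of this cage C-C bond is meaningful is conventionally judged against the combined standard uncertainty of the two values, with a difference of more than about three combined standard uncertainties usually taken as significant. A minimal sketch of that conventional check follows; the standard uncertainties used here are placeholders rather than values from the structures cited above.

```python
# Minimal sketch of the delta/sigma criterion for comparing two refined bond lengths.
from math import sqrt

def delta_over_sigma(d1, su1, d2, su2):
    """Return (difference, combined s.u., |delta|/sigma) for two lengths with s.u.'s in angstroms."""
    delta = d1 - d2
    sigma = sqrt(su1**2 + su2**2)
    return delta, sigma, abs(delta) / sigma

# Placeholder s.u.'s: an elongated cage C-C bond compared with a typical value.
delta, sigma, ratio = delta_over_sigma(2.001, 0.004, 1.700, 0.003)
print(f"delta = {delta:.3f} A, sigma(delta) = {sigma:.3f} A, delta/sigma = {ratio:.1f}")
# A ratio of about 60, as here, would be highly significant; values below ~3
# would not support a claim of a real structural difference.
```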
A similar approach has met with less success in the case of some isatogens (Fig. 7), bioactive isomers of isatins. The main structural interest here is the five-membered ring with its two attached O atoms, and the effect of different substituents (R) on its electronic and hence geometrical structure. A total of 21 isatogen structures are found in the CSD; 8 of them have been published in journal articles (Adams et al., 1986(Adams et al., , 1990Błaszczyk et al., 2006;Sö derberg et al., 2009;Kirk et al., 2017), with 13 CSD communications (refcodes: GOGBUB HODXUV HODYAC SAWVAO SAWVES SAWVIW SAWVOC SAWXEU SAWXIY SAWXOE SAWXUK SAWYAR SAZQUI), most of which report disorder. The problem here is that at least one substituent R 0 is required on the fused benzene ring in order to be completely sure which is the carbonyl group and which the nitroxide in an X-ray crystal structure determination, and that the fused ring system is ordered; otherwise a 180 rotation about the C-R bond generates the potential for disorder that is not easily resolved because of the similarity of the electron density of carbon and formally positive nitrogen atoms and the small differences in expected bond lengths -there are two possible orientations in which the CO and NO groups are exchanged along with the double and single bond connecting them in the five-membered ring. Only 4 of the 21 structures in the CSD have such a substituent R 0 to ensure structural ordering; for these the difference in the N-O and C O bond lengths ranges from 0.047 to 0.054 Å , while a larger difference, 0.154-0.170 Å is found for the intervening N C and C-C bonds. A scatterplot of the C-C versus N C bond lengths is shown in Fig. 8. The four R 0 -substituted and thus ordered molecules are represented by green points; they clearly have very similar geometry in this respect and display the largest bond length differences. The two red points are symmetry-independent Substituted isatogens with potential disorder of two orientations. molecules of one crystal structure that has been solved and refined as non-centrosymmetric but strongly pseudo-centrosymmetric with relatively high R factors and imposed restraints (Kirk et al., 2017); this model must be regarded with some suspicion. All the other structures (black and blue points) have smaller bond length differences that, along with the green points, follow an obvious general trend, which could be interpreted as an electronic effect of substituents. However, the exact same effect would be produced by the type of disorder described above, which leads to a partial averaging of the lengths of these chemically inequivalent bonds; such disorder is explicitly described as partially modelled for 11 of the 16 structures (these 11 are represented by blue points; two points, one blue and one black, are almost completely coincident) and must be regarded as probable for the others, negating any attempt to draw conclusions about the detailed geometry and bonding of the isatogen system and the influence of substituent R. Crystal structure validation and some selected errors Examples were cited of earlier structures found in the CSD which have significant deviations from the expected geometry likely to be artefacts of unresolved disorder or some other defect of the structural refinement model. 
Although such suspect results might be tolerated from historical studies using what are now obsolete and superseded equipment and methods, there is really no excuse for them in modern X-ray crystallography; nor would they occur if all practitioners of the subject had the thorough approach of crystallographic champions such as Kathleen Lonsdale. The technique inherently has a number of characteristics making it very reliable when appropriately used, to which a range of available tools for checking and validation are added. It has generally always been the case that, given significant diffraction intensity to an appropriate resolution (a generally accepted desirable minimum resolution for chemical crystallography is approximately 0.84 Å , corresponding to measuring diffraction patterns up to a Bragg angle of 25 with Mo K radiation and 67 with Cu K radiation), the number of symmetry-independent reflections in the unique set of data is many times the number of refined parameters in a typical refinement model; the ratio of data to parameters in this so-called overdetermined problem is usually at least 6-8, even if Friedel pairs are averaged for a non-centrosymmetric structure having negligible resonant scattering so that there is no significant difference between the intensities of reflections hkl and h h k k l l, and may be as high as 20 or more with modern equipment. With an appropriate refinement model this high data/parameter ratio leads to low standard uncertainties on the refined parameters, i.e. high precision. The measurement of a high 'multiplicity of observations' (also known as redundancy, the collection of symmetry-equivalent data and of the same reflections in different geometrical diffractometer settings) also provides a consistency check on the data as well as information that can be used to detect and correct for systematic effects such as absorption. Structure validation involves checking a refined crystal structure for internal consistency and also comparison with expected results (we have huge accumulated experience of what might be called 'chemical sense' in looking at a molecular structure) and with related known structures. The topic of validation has been addressed recently in an educational conference session (Spek, 2020). Comprehensive and reliable software tools are available for this purpose. These include PLATON (Spek, 2003), which performs internal consistency checks and some comparisons with expected behaviour, raising 'alerts' with different levels of severity if potential issues are identified; CheckCIF, an online implementation of PLATON with additional functionality provided by the IUCr with particular use as a pre-publication check (Spek, 2009); Mogul (Bruno et al., 2004) for comparison of molecular geometry features with those found in similar structural environments in the CSD to identify unexpected deviations; and specific user-generated searches of the CSD for particular features of interest, which can then be visualized and examined in detail by the graphics and analysis program Mercury from the CCDC (Macrae et al., 2006(Macrae et al., , 2020. Some of these and related validation tools are used for all new entries included in the CSD, with correction of obvious errors, consultation of authors and contributors to deal with others and flags for those that cannot be resolved. 
Many of the corrections and flagged errors for earlier entries arose from mistakes made manually in transcribing information between computer programs and in publication manuscripts, but these are now rare, particularly since the virtually universal adoption of the CIF standard for archiving and transferring crystal structure results. Other previous potential pitfalls that are now much less likely with integrated software packages and better interfaces between computer programs include the transformation of a unit cell from an initial setting to a different one for reasons of convention or convenience without the corresponding transformation of reflection indices, or with a nonmatching transformation. In this context it should be noted that refined fractional atomic coordinates are derived essentially from the reflection intensities, but that the molecular geometry then involves calculations combining these coordinates with the unit-cell parameters, so if these do not match feature articles Figure 8 Scatterplot of reported isatogen C-C (vertical axis) and N C (horizontal axis) bond lengths (Å ) for 21 structures in the CSD correctly the resulting geometry is distorted, even if the refinement statistics based on measured and calculated intensities are excellent. It is worth remarking that the small number of entries in the CSD from Kathleen Lonsdale's work, for which atomic coordinates are recorded, generate no significant Mogul or PLATON alerts beyond those that would be expected for results derived from photographic data collection methods. Perhaps one of the easiest mistakes to make in a refinement model, particularly when the compound being studied proves to be different from the one expected, is that of wrongly assigned atom types. This means the wrong atomic scattering factor is used for one or more of the atoms in the structure, corresponding to an incorrect electron density. The refinement attempts to compensate for this, mainly by adjusting the displacement parameters, though there may also be an impact on the atomic position and hence the molecular geometry. Such a mistake may be revealed in a number of ways in structure validation. These include unusual bond lengths and/ or angles for the atom concerned and its neighbours; unexpectedly large differences in displacement parameters of bonded atoms, including the so-called Hirshfeld 'rigid bond' test (Hirshfeld, 1976); and residual electron density peaks and holes around the misidentified atom. An example from a manuscript submitted for publication and rejected because of these errors (correction of which demonstrated that the structure was already known) was described recently in the IUCr Newsletter (Clegg, 2020): a putative carboxylic acid was in fact a nitro group, and the chemically highly unlikely trihydroxymethyl substituent should have been trifluoromethyl. The validation alerts for this incorrect structure included impossible hydrogen bonding interactions as well as Hirshfeld test infringements. Another case I encountered as an Editor of Acta Crystallographica Section E in the early years of the journal was the claim of an unprecedented one-coordinate copper atom attached to only a single ligand; closer inspection demonstrated that the 'copper' atom was almost certainly bromine. 
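The geometry-based checks mentioned above amount, in essence, to comparing an observed value with the distribution of corresponding values found in many related, well-determined structures. The sketch below illustrates that idea with a simple z-score flag; it is not the actual Mogul algorithm, and the reference bond lengths and threshold are invented placeholders rather than CSD-derived statistics.

```python
# Minimal sketch of a reference-distribution outlier check for molecular geometry.
from statistics import mean, stdev

def flag_outliers(observed, reference_values, z_limit=4.0):
    """Yield (label, value, z) for observed bonds far from the reference distribution."""
    ref_mean, ref_sd = mean(reference_values), stdev(reference_values)
    for label, value in observed.items():
        z = (value - ref_mean) / ref_sd
        if abs(z) > z_limit:
            yield label, value, z

# Hypothetical aromatic C-C reference sample and one suspicious observed bond:
reference_cc = [1.380, 1.385, 1.391, 1.388, 1.377, 1.395, 1.383, 1.390]
observed_bonds = {"C1-C2": 1.389, "C3-C4": 1.462}
for label, value, z in flag_outliers(observed_bonds, reference_cc):
    print(f"{label}: {value:.3f} A deviates by {z:.1f} sigma from the reference mean")
```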
A recent thorough analysis of one probable error of this kind, with several misidentified atoms including the rather extreme case of cadmium instead of rhenium (Amemiya et al., 2020), also gives an extensive list, in its references 66-68, of other known examples. Other inappropriate structural models and refinement techniques, leading to results that may constitute incorrect structures and raise validation alerts, include unsuitably applied constraints or restraints, particularly in the placement and treatment of hydrogen atoms. 'Alternative facts': scientific fraud Although a misassigned atom type (the wrong element) may be a genuine mistake resulting from ignorance, incompetence or wishful thinking, there have unfortunately been a number of cases where it has been part of deliberate fraud in which falsified results have been submitted for publication, sometimes successfully until the abuse was uncovered by careful validation processes. The first large-scale scandal of this type involved an extensive series of essentially invented crystal structures in which different metal atoms were substituted into the refinement models of previous, genuinely determined structures. In the most blatant cases, exactly the same set of diffraction data was used for the refinement of more than one complex, the differences among electron densities of neighbouring lanthanides, for example, being very small; a little more subtlety was employed in making minor changes to the data and/or the unit-cell parameters at the same time as exchanging the metal. A similar approach was used to 'substitute' atoms or chemical groups in organic structures, such as CH 2 for NH or nitro for carboxylate. This extensive fraud operation, its discovery and consequences, were reported in an editorial by Harrison et al. (2010). It led to a considerable number of retractions of published articles, and corrective action was also required for the corresponding entries in the CSD (Groom, 2010). Alerted by this shocking development, the editors of other journals carried out investigations of their own publications, and several fabricated macromolecular structures, some of them of considerable importance and published in internationally leading journals such as Nature and Cell, were identified and had to be retracted from the Protein Data Bank (PDB) (Dauter & Baker, 2010); while mistakes may occur in the interpretation of such complex structural problems, in this case it appears that no experimental data actually existed and the false structures were pure inventions. We are fortunate as crystallographers to have tools available for detecting such nefarious behaviour; X-ray crystallography is, to some extent, a self-checking technique because of the nature and the volume of the diffraction data required for a crystal structure determination. Fraud in many other scientific disciplines must be much harder to detect. This distasteful and, almost certainly extremely rare, fraudulent behaviour brings us back in conclusion to the qualities of honesty, openness and humility valued and encouraged in scientists by Kathleen Lonsdale, herself a suitable role model for each new generation of crystallographers. 
These are particularly highlighted in her Nature article entitled Science and Ethics (Lonsdale, 1962) -an account that should be read and absorbed by all scientists and also by politicians who currently claim to be 'following science' in making their decisions -and would surely be part of the suitable training she proposed should be given to young crystallographers (Lonsdale, 1953) -a call that is as relevant now as it was almost 70 years ago and is being addressed in part by courses and schools run by the IUCr and its adhering regional and national associations. samples of crystals for X-ray diffraction; fellow crystallographers (chiefly Drs Mark Elsegood, Mark Fox, Alan Kennedy, Mike Probert, Eric Raper and Paul Waddell) for their contributions to the structural studies described here; and IUCr editorial staff for support during my time as a journal editor.
2020-11-26T09:05:51.366Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "f88ad29004efa4ecb981dcc3361ea53b47dc2d0f", "oa_license": "CCBY", "oa_url": "https://journals.iucr.org/m/issues/2021/01/00/cx5004/cx5004.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30537f46a8dbb34499c69b1bf2b3fa461164a344", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
250534227
pes2o/s2orc
v3-fos-license
Preliminary Application of Magnetization Transfer Imaging in the Study of Normal Uterus and Uterine Lesions Purpose The aim of this study is to evaluate the utility of magnetization transfer (MT) imaging in the study of normal uterus and common uterine lesions. Methods This prospective study enrolled 160 consecutive patients with suspected uterine lesions. MT ratio (MTR) map was obtained by pelvic MT imaging on a 3.0T MRI scanner. Patients confirmed by pathology were divided into microscopic lesion group and lesion group, according to whether the maximum diameter of the lesion was less than 5 mm. After evaluating and eliminating patients with poor image quality by a three-point Likert scale, MTR values of lesions and normal endometrium, myometrium, and cervix were independently measured on the MTR map by two radiologists. Inter-reader agreement was evaluated. MTR values were compared among different uterine lesions and normal uterine structures using the Mann–Whitney U test with Bonferroni correction. Receiver operating characteristic curve was performed. The correlations between age and MTR values were explored by Pearson correlation analyses. Results A total of 96 patients with 121 uterine lesions in the lesion group and 41 patients in the microscopic lesion group were measured. The MTR values among normal endometrium, myometrium, and cervix were statistical significant differences (P < 0.05). There were significant differences between endometrial cancer and normal endometrium and between cervical cancer and normal cervix (both P ≤ 0.001). Area under the curve (AUC) for diagnosing endometrial and cervical cancer were 0.73 and 0.86. Myometrial lesions had significantly higher MTR values than endometrial lesions and cervical cancer (both P < 0.001), and the AUC for differentiating myometrial lesions from them were 0.89 and 0.94. MTR values of endometrial cancer were significantly higher than those of cervical cancer (P = 0.02). There was a critical correlation between age and MTR values in endometrial cancer (r = 0.81, P = 0.04). Conclusions MTR values showed significant differences among normal uterine structures. It was valuable for diagnosing and differentiating uterine cancer. MTR values could differentiate myometrial lesions from endometrial or cervical lesions. INTRODUCTION Common uterine lesions include endometrial cancer, cervical cancer, and leiomyoma. According to Globocan 2018 estimates, endometrial cancer and cervical cancer are the most common malignant uterine tumors in developed and developing countries, respectively, and rank sixth and fourth in the world for their incidence rates, respectively (1). Surgery is the most important way to treat endometrial cancer (2). Management of cervical cancer is stage-specific and involves chemoradiotherapy (3). Uterine leiomyoma is the most common benign uterine tumor and can be treated with nonsurgical options (4). Therefore, it is essential to determine the origins of the uterine lesions prior to treatment as management strategies differ. MRI is currently a common imaging method for non-invasive detection and evaluation of uterine lesions (5,6). In particular, it is valuable for the differentiation of benign and malignant uterine diseases and preoperative staging of malignant tumors (7,8). Conventional T2-weighted imaging (T2WI) and some functional MRI sequences such as diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI) have been widely explored for diagnosing uterine diseases (9). 
However, because of coexisting multiple lesions, extensive lesions, metratrophia, and other factors, the accuracy of MRI in identifying different primary uterine lesions needs further improvement, especially for cancers involving both cervix and the lower uterine segment, leading to ambiguous diagnosis of endometrial and cervical cancer (10). Because both of them showed high signal on T2WI, obvious high signal on DWI, and mild enhancement on contrast-enhanced MRI (CE-MRI) (11). Novel imaging techniques that could reveal histological origins of uterine lesions are needed in clinical practice. Magnetization transfer (MT) imaging can indirectly reflect the content of structural macromolecular substances (such as protein, lipid, and nucleic acid) in biological tissues by quantitatively measuring MT ratio (MTR) values (12). This parameter represents the efficiency of the magnetization exchange between the protons bound to macromolecules and the relatively free water protons inside tissue (13). Any pathological change in cell macromolecules will cause a change of MTR value. This technique has already been well applied in the study of glioma histological grade (14), assessment and identification of brain tumors (12,15,16), and evaluation of intestinal fibrosis in Crohn's disease (17,18). However, the value of MT imaging in the uterus was uncertain. The tissue compositions of different structures of normal uterus and uterine lesions of different histological origin are various. We speculate that their contents of macromolecular substances may be different; hence, the MTR values may be different. As a consequence, the purpose of this study was to preliminarily evaluate the value of MT imaging in the study of normal uterine structures and common uterine lesions and to explore the correlations between age and MTR values of the different uterine structures or different uterine lesions. Study Population This prospective study has been approved by our hospital ethics committee and the informed consent of all patients. A total of 160 consecutive patients with suspected uterine lesions were recruited from January 2021 to November 2021. All patients underwent routine MRI and MT imaging scanning. Five patients who did not have a pathological diagnosis were excluded. The remaining 155 patients received operation and pathological examination after MR scanning within 2 weeks. According to whether the maximum diameter of the lesion was less than 5 mm, the patients were divided into microscopic lesion group and lesion group. The lesions of the microscopic lesion group were virtually detected only by microscopy. Because we need to measure MTR values of normal endometrium in the microscopic lesion group, 10 patients with endometrial thickness less than 5 mm were excluded. Finally, only 43 patients were included in the microscopic lesion group. The study population flowchart was presented in Figure 1. MRI Protocol Pelvic MRI scanning was performed on a 3.0T MRI scanner (Magnetom Prisma, Siemens Healthineers, Erlangen, Germany) with an eight-channel phased-array abdominal coil. All patients were told to abstain from food and drink for at least 4 h before MRI examination. To reduce the air in the rectum and sigmoid, patients were prepared with 10 ml of glycerin enema administration into the rectum 30 min before MR scanning. All patients were scanned in a supine, feet-first position with a properly inflated bladder. The routine MR protocols included T1-weighted imaging (T1WI), T2WI, DWI, and DCE-MRI. 
Uterus-axial DWI was performed using ZOOMit techniques based on echo planar imaging combined with reduced volume excitation by setting standard b value of 50 and 1,000 s/mm 2 . Sagittal DCE-MRI was performed using three-dimensional volumetric interpolated breath-hold examination sequence by continuous scanning at 10 stages immediately after intravenous injection of contrast agent. The late CE-MRI included axial, sagittal, coronal, and uterus-axial scanning. The contrast agent that we used was gadolinium meglumine (0.2 ml/kg), intravenously injected at a rate of 1.5 ml/s, and then washed with 10 ml of saline at a rate of 2 ml/s. A two-dimensional fast low-angle shot sequence was used to acquire MT imaging data before enhanced scanning, including two scan with (MT on ) and without (MT off ) MT pulse, respectively. The total imaging time of MT imaging was 2 min 42 s. For MT quantification, the MTR map was calculated on the MR scanner workstation using the following formula: MTR = (MT off − MT on ) × 100/MT off . The routine details of scanning parameters were shown in Table 1. More parameters of MT imaging were as follows: saturation pulse, Gaussian radio frequency (RF) pulse; amplitude, 375 Hz; length, 9.984 ms; and off-resonance frequency, 1.2 kHz. Image Quality Evaluation and Measurement All MTR maps were transferred to a workstation (Syngo.via Client 4.2) for measurements. One radiologist with 25 years of experience in diagnosing gynecological MR images reviewed and evaluated all the MTR maps' quality by a three-point Likertscale: score 1, poor image quality with obvious artifacts, the lesions cannot be detected or distinguished from surrounding structure; score 2, good image quality with few artifacts, the lesions can be identified by reference to other MR images; and score 3, excellent image quality without artifacts, the lesions can be easily detected on MTR maps. Two readers with 6 years of experience in pelvic MRI independently measured MTR values on MTR maps in patients with good and excellent image quality (score 2 and score 3). Referring to other routine MR images, a rounded sizeable region of interest (ROI) was drawn on the maximum area of the lesion (lesion group) or of the normal uterine structures including myometrium, endometrium, and cervix (microscopic lesion group). The mean MTR values were recorded. For myometrium, ROIs were drawn covering the junctional zone and outer myometrium. For cervix, ROIs were drawn covering the cervical stroma and muscularis. Inter-reader agreement was evaluated. The placements of ROIs showed in Figure 2. Statistical Analysis Statistical analyses were performed with SPSS software, version 26.0 (SPSS, Inc., Chicago, IL) for Windows. As continuous variable, MTR value was expressed as arithmetic means and standard deviation. Inter-reader agreement was evaluated using the intraclass correlation coefficient. The Shapiro-Wilk test or Kolmogorov-Smirnov test was used to test the normality of the data distribution. The data in each group were not normally distributed, and non-parametric test was performed. The Kruskal-Wallis H test was used to compare MTR values among the three groups with a value of P < 0.05. The Mann-Whitney U test with Bonferroni correction was further used for pairwise comparisons, and the adjusted significant level was 0.017 (0.05/3). Receiver operating characteristic curves were performed to diagnose or distinguish the uterine diseases and to determine the optimal threshold values. 
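For readers wishing to reproduce the MTR quantification offline, the voxel-wise formula given above, MTR = (MToff − MTon) × 100/MToff, can be applied directly to the two co-registered MT-weighted volumes. The following is a minimal sketch under that assumption; in this study the map was generated on the scanner workstation, and the array names and background-masking threshold below are illustrative choices rather than part of the acquisition protocol.

```python
# Minimal sketch of voxel-wise MTR computation from MT-off and MT-on volumes.
import numpy as np

def mtr_map(mt_off, mt_on, signal_threshold=1e-6):
    """Return the MTR map in percent, with near-zero MT-off voxels set to NaN."""
    mt_off = mt_off.astype(float)
    mt_on = mt_on.astype(float)
    mtr = np.full_like(mt_off, np.nan)
    valid = mt_off > signal_threshold      # avoid dividing by (near-)zero background
    mtr[valid] = (mt_off[valid] - mt_on[valid]) * 100.0 / mt_off[valid]
    return mtr

# Toy example: a uniform region where the MT pulse suppresses 10% of the signal gives MTR = 10.
off = np.full((4, 4), 1000.0)
on = np.full((4, 4), 900.0)
print(mtr_map(off, on).mean())   # 10.0
```

A mean ROI value of the kind reported in the Results would then be obtained by averaging this map over the voxels of a drawn ROI mask.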
Pearson correlation analyses were performed to evaluate the correlations between age and the MTR values. A P-value less than 0.05 was considered to be correlated. A value of r > 0 indicates a positive correlation between the two variables; otherwise, a negative correlation exists. RESULTS MTR image quality of 102 patients with 127 uterine lesions (37 lesions of endometrial cancer, 10 lesions of benign endometrial Table 2. Table 3. The data of all lesions and structures measured by the two observers had a good consistency. We randomly selected MTR values measured by one of the observers as the final evaluation indices. MTR values in different lesions and normal uterine structures were shown in Table 4 and Figures 2-4. MTR values among normal endometrium (7.14 ± 0.21), myometrium (10.18 ± 0.22), and cervix (9.51 ± 0.23) were statistically significant differences (P < 0.05). MTR values of normal endometrium were significantly lower than those of normal myometrium and normal cervix (both P < 0.001). In addition, MTR values of normal myometrium were significantly higher than those of normal cervix (P = 0.008). There was no significant difference among proliferative phase (7.31 ± 0.35), secretory phase (7.16 ± 0.54), and senile endometrium (7.04 ± 0.26) (P = 0.89) or among normal myometrium, leiomyoma (10.54 ± 0.23), and adenomyosis (10.27 ± 0.47) (P = 0.48). There were significant differences between endometrial cancer (8.29 ± 0.26) and normal endometrium (P = 0.001) and between cervical cancer (7.71 ± 0.25) and normal cervix (P ≤ 0.001). Myometrial lesions (10.47 ± 1.18) had significantly higher MTR values than endometrial lesions (8.22 ± 1.46) and cervical cancer (both P < 0.001). MTR values of endometrial cancer were significantly higher than those of cervical cancer (P = 0.02). Receiver operating characteristic curves and their related parameters were displayed in Figure 5 and Table 5. Area under the curve (AUC), optimal threshold, sensitivity, and specificity for As shown in Table 6, there was a critical positive correlation between age and MTR values in endometrial cancer (r = 0.81, P = 0.04). The correlations between age and the MTR values of other uterine lesions or normal uterine structures were not discovered (all P > 0.05). DISCUSSION In this study, we explored the value of MT imaging to characterize normal uterine structures and common uterine lesions by measuring MTR values. The results showed that the MTR values were significantly different among normal uterine structures, among uterine lesions of different origin, or between some uterine lesions and corresponding normal structures. MTR values were found to be effective in the diagnosis and differential diagnosis of certain uterine diseases. It might provide a preoperative basis for neoplastic histologic origin in the uterus. Tissue contrast mechanism of conventional MRI is relying on density, T1 and T2 relaxation properties of free water protons, and diffusion properties of water molecules (19). It has a high sensitivity in detecting pathological tissue, but pathological specificity is poor (10). Except for leiomyoma and adenomyosis, almost all common uterine lesions show low signal intensity on T1WI and high signal intensity on T2WI (9). Malignant uterine tumors present high signal intensity on DWI due to high cell density and limited diffusion of water molecules (5,9), whereas benign uterine tumors almost appear low signal (6,9). DCE-MRI is associated with tumor vessel permeability and microvessel density (20,21). 
Therefore, it is difficult to distinguish uterine cancers with poor blood supply by using conventional MR imaging alone. MT imaging can probe the protons bound to macromolecules and reflect the amount and complexity of immobile macromolecules in tissue and thus 7.04 ± 0.26 MTR, magnetization transfer ratio; P, comparison among three groups or between two groups with a value of P < 0.05; P1, comparison between the first disease or structure and the second that in each group; P2, comparison between the first disease or structure and the third that in each group; P3, comparison between the second disease or structure and the third that in each group; P1-P3, all using an adjusted significant level, a' = 0.017. *, statistically significant difference. (24). The smooth muscle and fibration will increase the MTR values (17). The normal cervix consists of muscularis, stroma, and mucosa but contains only 10%-15% smooth muscle cells in cervical tissue (25). Therefore, the MTR values of normal cervix were lower than those of normal myometrium. The normal endometrium is made up of epithelial cells and lamina propria, lacking smooth muscle and fiber (26), which leads to the lowest MTR values. MT imaging parameter might be an indicator of reflecting tissue integrity (16). This study found the significant differences between endometrial cancer and normal endometrium and between cervical cancer and normal cervix, which was consistent with the previous study (23). The invasive growth of cervical cancer would inevitably lead to destruction of normal cervical tissue, lead to decreased cervical fibrostroma and smooth muscle content, and then reduce the macromolecular substance content, potentially leading to lower MTR values of cervical cancer than those of normal cervix. Moreover, the MTR values of cervical cancer after radiotherapy would decrease, owing to tissue edema (23). However, we found that the MTR values of endometrial cancer were significantly higher than that of normal endometrium. One possible reason is that the proliferative growth of endometrial cancer would result in increased cellular density. An increase in the amount of tumor cells would lead to an increase in the cell membrane, and the content of macromolecules in the cell membrane would increase, thus potentially leading to increased MTR values of endometrial cancer. On the other hand, the aggressive growth of tumors would lead to changes of metabolic substances (27). Those metabolites included immobile macromolecular substances and mobile proteins and peptides (14). Endometrial cancer cells were more metabolically active than normal endometrial cells, potentially resulting in higher MTR values. The MTR values of endometrial cancer were significantly higher than those of cervical cancer in this study. The possible cause is the differences in histological types. Endometrioid adenocarcinoma is the most common subtype of endometrial cancer, and cervical cancer is mainly squamous cell carcinoma. Adenocarcinoma originates from endometrial cells with abundant glandular structures and has the ability to secrete mucins (28), potentially leading to higher MTR values. A systematic review and meta-analysis (10) confirmed that the pooled sensitivity and specificity for MRI in predicting origin of indeterminate uterocervical cancers were 0.884 and 0.395, respectively. Of which, T2WI and DCE-MRI were the most popular sequences, and DWI sequence and apparent diffusion coefficient values were also valuable. 
This study discovered the sensitivity and specificity were 0.68 and 0.71, respectively, by using MTR values to distinguish endometrial cancer from cervical cancer. Although sensitivity was reduced, specificity was significantly improved. In consequence, MT imaging with the non-invasive molecular level may potentially provide supplementary information in detecting and distinguishing uterine cancers. Different from the study of Kobayashi et al. (23), no significant differences were found between the MTR values of endometrial cancer and those of the benign endometrial lesions in this study. The possible reason was that the benign endometrial lesions included four cases of endometrial atypical hyperplasia considered as precancerosis of endometrial cancer. Garcia et al. (16) demonstrated the differences in MTR values between glioblastoma multiforme and meningioma, which depicted that MTR values had the potential for differentiating different tumor types. Our study also found the MTR values could differentiate myometrial lesions from endometrial or cervical lesions. Adenomyosis and leiomyoma are common benign uterine lesions originating from myometrium, which is rich in smooth muscle cells. Hence, myometrial lesions had significantly higher MTR values than endometrial or cervical lesions. Boss et al. (29) found that a leiomyoma exhibited high MTR values during whole-body MRI, and the incidental finding was in conformance with our results. In addition to smooth muscle cells, myometrial lesions such as uterine leiomyoma are also composed of a large amount of extracellular matrix with proteoglycan (24). The macromolecular proteoglycan composition can increase the MTR values. Although myometrial lesions are not often mistaken for endometrial or cervical lesions on conventional imaging (e.g., T2WI), challenges still exist. For instance, adenomyosis may appear hypointense on contrast-enhanced MRI similar to endometrial cancer, uterine leiomyoma may distort the normal uterine anatomy, and some endometrial cancer is isointense to the myometrium on T2WI (30). Our study suggested that MT imaging could help to overcome some pitfalls of conventional MRI by the molecular level. Our consequences also support the idea put forward by another researcher that imaging signatures may predict pathology (31). Munro and DCE-MRI. They revealed that DCE-MRI was sensitive to the vascular changes thought to accompany successful GnRH analog treatment of leiomyoma. However, there was no apparent treatment effect by MT imaging, although baseline MTR was negatively associated with initial uterine and fibroid volume. Therefore, compared with other functional MRI imaging, MT imaging has some shortcomings and needed to be further explored. A previous study suggested that, compared with MT imaging, amide proton transfer (APT) imaging could better reflect tumor biological behavior by detecting mobile proteins and peptides (14). Recently, Zhang et al. (33) found that the content of mobile protein of different structures of normal uterus was different by utilization of APT imaging. Another study found that APT MRI could provide molecular-scale information for distinguishing endometrial cancer from leiomyoma, adenomyosis, and normal uterine myometrium (34). They found that the AUC, sensitivity, and specificity for differentiating endometrial cancer from leiomyoma and adenomyosis were 0.87 and 0.85, 83.3% and 76.7%, and 83.3% and 81.6%, respectively. 
The AUC, sensitivity, and specificity were 0.89, 0.97, and 0.71, respectively, for MTR values to distinguish endometrial lesions from myometrial lesions in our study. Both imaging methods showed high identification performance, whereas the total imaging time of APT imaging was as long as 7 min 33 s. The total imaging time of MT imaging was 2 min 42 s in this study. Perhaps MT imaging will serve as a more applicable clinical approach in evaluating normal uterus and uterine lesions. However, to achieve this potential value, multicenter studies with a large sample size are required in the future. This study had several limitations. First, as a preliminary study, the sample size was relatively small. In addition, other rare uterine tumors, such as uterine sarcoma, were not included in our study. Future large prospective studies with more uterine lesions are needed. In addition, the insufficient sample size makes it impossible for this study to further study cancer lesions, such as invasiveness and lymph node metastasis. We will continue to collect cases to prepare for the study of the histopathological characteristics of cancer lesions. Second, because of the limitation of anatomical details on MT imaging, this study only included normal myometrium, endometrium, and cervix and did not measure MTR value of fine uterine anatomy like junctional zone. The improvement of MT imaging quality needs to be further investigated. Third, to obtain pathology as a standard reference, the normal myometrium, endometrium, and cervix that we measured were not from normal volunteers but from patients with carcinoma in situ. We will include normal volunteers to verify our results in future studies. Fourth, B1 correction was not performed due to lack of B1 correction setting in the MT sequence of MRI scanner that we used. Uneven B1 field might lead to uneven image signal, though the images with poor quality such as motion artifacts were excluded in this study. Finally, single-slice evaluation might introduce sampling bias and not reflect the intralesion heterogeneity. On the basis of improving MT imaging quality, volumes of interest will be delineated in our future research. In conclusion, MTR values could distinguish normal uterine anatomies including myometrium, endometrium, and cervix; diagnose and differentiate uterine cancer; and differentiate myometrial lesions from endometrial or cervical lesions. MT imaging may be a promising imaging technique for the assessment of normal uterine structure and uterine lesions by providing molecular-scale information. A next step improvement in MT imaging technology and validation at molecular level may help address current challenges. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Medical Ethics Committee of the First People's Hospital of Yunnan Province. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS QB designed the study, performed the statistical analysis, and wrote the manuscript. QL and SW modified and optimized MT imaging scanning parameters. QL, JY, JYY, JD, and FD scanned MT imaging. QL collected patient data. QB and YW revised the manuscript. YZ guaranteed the integrity of the entire study. All authors approved the submitted version of the manuscript.
2022-07-15T13:16:48.709Z
2022-07-14T00:00:00.000
{ "year": 2022, "sha1": "d520660bb9809c3923354ea0aa02ce7047118635", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "d520660bb9809c3923354ea0aa02ce7047118635", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233347207
pes2o/s2orc
v3-fos-license
Particle acceleration and multimessenger emission from starburst-driven galactic winds The enhanced star forming activity, typical of starburst galaxies, powers strong galactic winds expanding on kiloparsec (kpc) scales and characterized by bubble structures. Here we discuss the possibility that particle acceleration may take place at the termination shock of such winds. We calculate the spectrum of accelerated particles and their maximum energy, that turns out to range up to a few hundred petaelectronvolt (PeV) for typical values of the parameters. Cosmic rays accelerated at the termination shock are advected towards the edge of the bubble excavated by the wind and eventually escape into extragalactic space. We also calculate the flux of gamma rays and neutrinos produced by hadronic interactions in the bubble as well as the diffuse flux resulting from the superposition of the contribution of starburst galaxies on cosmological scales. Finally, we compute the diffuse flux of cosmic rays from starburst bubbles and compare it with existing data. INTRODUCTION Starburst galaxies (SBGs) are unique astrophysical objects characterized by an intense star formation rate (SFR), and a correspondingly higher rate of supernova (SN) explosions. Since SNe and winds of young stars are believed to be acceleration sites of cosmic rays (CRs), SBGs are likely to be powerful cosmic-ray factories. The star forming activity is often located in sub-kpc sized regions, known as starburst nuclei (SBNi) (Kennicutt 1998), with rather extreme conditions: high gas density ( 10 2 cm −3 ), intense infrared-optical luminosity ( RAD 10 3 eV cm −3 ) and strong magnetic fields ( 10 2 G) are inferred in SBNi (Gao & Solomon 2004;Mannucci et al. 2003;Förster Schreiber et al. 2001; Thompson et al. 2006;Papadopoulos et al. 2011). The level of turbulence is also expected to be very high because of the repeated SN explosions and stellar winds. This turbulence is likely to slow down the spatial transport of charged high-energy (HE) particles, which therefore lose most of their energy inside SBNi. We refer to this mode of transport as calorimetric, and its implications have been discussed in detail by Yoast-Hull et al. (2013); Peretti et al. (2019); Krumholz et al. (2020). Multiwavelength observational campaigns from radio to hard X-rays (see e.g. Williams & Bower 2010;Carilli 1996;Wik et al. 2014), and especially the spectra inferred from observations in the gamma-ray range, indicate that the transport of HE particles is strongly regulated by energy losses (see e.g. Ackermann et al. 2012;Peng et al. 2016; ★ E-mail: peretti@nbi.ku.dk Abdalla et al. 2018;Ajello et al. 2020;Kornecki et al. 2020Kornecki et al. , 2021Werhahn et al. 2021). A peculiar aspect of SBGs is represented by the amount of target material for nuclear interactions, potentially leading to copious production of neutrinos and gamma rays. The contribution of SBGs to the neutrino flux measured by the IceCube Observatory (IceCube Collaboration: Aartsen et al. 2013;Abbasi et al. 2020) has been extensively discussed by many authors (Loeb & Waxman 2006;Tamborra et al. 2014;Bechtol et al. 2017;Sudoh et al. 2018;Palladino et al. 2018;Peretti et al. 2020;Ajello et al. 2020;Ambrosone et al. 2020Ambrosone et al. , 2021, together with the compatibility of the predictions with existing constraints imposed by gamma-ray observations (Ackermann et al. 2012;Lisanti et al. 2016). 
The seriousness of these constraints stimulated the search for powerful hidden CR accelerators in environments highly opaque to gamma rays and yet transparent to neutrinos (Capanema et al. 2021) like the inner core of Active Galactic Nuclei (AGNi; see e.g. Murase et al. 2016Murase et al. , 2020 or to reconsider the contribution from an extended region around the Galaxy (see e.g., Taylor et al. 2014;Blasi & Amato 2019;Recchia et al. 2021). Starburst winds have indeed been suggested to accelerate particles above PeV energies (Dorfi & Breitschwerdt 2012;Bustard et al. 2017) and subsequently produce photons through non-thermal processes (Romero et al. 2018;Buckman et al. 2020;Müller et al. 2020). These phenomena, together with the calorimetric transport of CRs and the intense photon backgrounds in SBNi, led to the careful investigation of the emission and absorption of gamma rays in the central regions of SBGs, and the correlated neutrino emission (Peretti et al. 2019(Peretti et al. , 2020. Despite the potential importance of these astrophysical objects for a variety of phenomena, the modeling of the processes of acceleration and interaction of CRs in SBGs remains rather poor and yet it is crucial if to assess their role as sources of high energy radiation and CRs in a reliable way. As stated above, particles are not only accelerated in the nuclei of SBGs, but also in the (kpc-sized) wind structures expanding from the SBN region to the circumgalactic medium (CGM). While in our previous works on SBGs (Peretti et al. 2019(Peretti et al. , 2020 we focused our attention on phenomena occurring inside the SBN, here we discuss the starburst winds as potential additional sites for particle acceleration and interactions. Starburst winds are inferred to be powered by the mechanical energy and heat produced by SNe and young stars possibly combined with some contribution due to the radiation pressure (see e.g. Zhang 2018). The intense activity heats and pressurizes the ISM (see Westmoquette et al. 2009a,b, for detailed observation of M82) creating a hot cavity and eventually inflating a powerful thermally-driven wind bubble (see Veilleux et al. 2005). Starburst winds are characterized by high mass-loss rate ranging from a few M yr −1 for moderate starbursts up to 10 2 M yr −1 in Ultra Luminous Infrared Galaxies (ULIRGs) (see Cicone et al. 2014, for details) or starburst coexisting with (or replaced by) active galactic nuclei (AGNi) (see e.g. Lamastra et al. 2016;Wang & Loeb 2017;Liu et al. 2018;Lamastra et al. 2019). Measurements of the wind speed are often based on detection of spectral lines associated to the warm and cold phases of the ISM embedded in the wind bubble and indicate velocities of the order of hundreds of km s −1 . On the other hand, theoretical models and X-ray observations show that the hot phase of the wind has a much higher velocity of the order of 10 3 km s −1 (see e.g. Strickland & Heckman 2009). These fast outflows easily break out of their galactic disks and expand into the surrounding galactic halos (see Chevalier & Clegg 1985, hereafter CC85). Wind bubbles are characterized by an innermost region of fast and cool wind powered by a central engine. The fast wind region extends up to the wind termination shock (also referred to as the wind shock), where the wind plasma is slowed down and heated up. A forward shock expands into the circumgalactic medium, typically with transonic velocity. 
Between the two shocks the contact discontinuity separates the shocked wind from the shocked swept-up halo medium (see e.g. Koo & McKee 1992a). The starburst activity can last for hundreds of millions of years (Myr) thus potentially producing an approximately steady injection of particles during this time (see Di Matteo et al. 2008;McQuinn et al. 2009;Bustard et al. 2017). Here we investigate the process of diffusive shock acceleration (DSA) of particles at the wind termination shock of starburst-driven winds, and estimate the associated production of gamma rays and neutrinos produced in the entire bubble excavated by the wind, and the flux of protons escaping such bubble. We adopt the semi-analytic approach to CR transport at the termination shock, as developed by Morlino et al. (2021) (hereafter MBPC21) for the case of winds associated to star clusters. This theoretical approach allows us to establish a direct connection between the environmental conditions in the wind and the particle acceleration process, with special attention for the maximum energy of accelerated particles. Moreover the transport of the non-thermal particles in the entire wind bubble is described rigorously, taking into account diffusion, advection, adiabatic losses and gains, as well as catastrophic energy losses. This enables us to calculate the cumulative contribution of starburst winds to the diffuse gamma-ray and neutrino fluxes exploring the associated proton flux that we could observe at Earth as CRs above the knee. Our investigation shows that: 1) protons can be accelerated up to hundreds of PeV at the starburst wind termination shock; 2) gamma rays and neutrinos are produced as secondary products of and interactions in these systems, possibly leading to detectable spectral features; 3) the contribution of starbursts to the diffuse neutrino flux can be dominant without exceeding the diffuse gamma-ray flux observed by Fermi-LAT; 4) accelerated particles escaping starburst systems can provide a sizeable contribution to the light CR component observed above the knee. The structure of the article is as follows: in § 2 we provide a description of the wind bubble. In § 3 we describe the modelling of acceleration and transport in the system, and provide the main details of our semi-analytical approach to CR transport. In § 4 we discuss the solution of the transport equation and the corresponding maximum energy as a function of the relevant parameters. We also show the associated gamma-ray and neutrino fluxes and the flux of CR protons escaping the bubble, for some benchmark cases. In § 5 we explore the multimessenger potential of the combined contribution of wind bubbles in the context of the diffuse fluxes observed at Earth. In § 6 we summarize our results and draw our conclusions. EVOLUTION AND PROPERTIES OF THE WIND BUBBLE The typical lifetime of a starburst event is of order ∼ 200 − 300 Myr (Di Matteo et al. 2008): at formation the structure is fueled by energy and mass released by young OB and Wolf-Rayet stars for about 6 Myr. After this initial stage, the first core collapse SN explosions are expected to take place. The energy and mass that they release dominates over the ones due to the young stars activity. In the minimal assumption of an instantaneous starburst trigger, the activity would run out in about 40 Myr when 8 stars end their life (see also Veilleux et al. 2005). 
In practice, the actual duration of a starburst is determined by the star forming activity, which can last up to few hundred million years, as mentioned earlier. Such time scale is much longer than the typical duration of the processes of particle acceleration and transport in the bubble produced by the starburst activity, so that from this point of view, SBGs and their wind superbubbles can be considered as steady state systems for HE particles (see also Zirakashvili & Völk 2006;Bustard et al. 2017, for related discussions). The engine of a starburst-driven galactic wind is the activity of SNe and massive stars which heat and pressurize the interstellar medium (ISM) excavating a hot bubble where temperature and pressure are ∼ 10 8 K and / ∼ 10 7 K cm −3 (as also discussed in CC85). Once the starburst event has started, the bubble expands above and below the galactic disk due to the pressure unbalance between its interior and the unshocked host galaxy ISM and eventually reaches the scale height of the disk, breaking out into the galactic halo. Inside the disk, instead, the bubble remains confined by the ISM pressure (see Tenorio-Tagle & Muñoz-Tuñón 1997;Cooper et al. 2007). As shown in recent numerical simulations (Fielding et al. 2018;Schneider et al. 2020), the clustered activity of SNe typical of SBNi is strong enough to drive and sustain a powerful galactic outflow. In this framework, CRs could also contribute as a supplementary ingredient powering an outflow in very active star forming galaxies as discussed by Hanasz et al. (2013). However, their importance in contributing to the wind launching is highly uncertain due to the possible impact of the dense and turbulent environment on their transport (see e.g. Krumholz et al. 2020) and their severe energy losses in the core of SBGs (see e.g. Peretti et al. 2019;Kornecki et al. 2021;Werhahn et al. 2021). On the other hand, in the case of a less intense and spatially extended star formation, typical of the spiral arms of mild star forming galaxies, where energy losses are usually negligible, the additional contribution of cosmic rays (see e.g. Breitschwerdt et al. 1991;Everett et al. 2008;Recchia et al. 2016;Pfrommer et al. 2017;Girichidis et al. 2021) and radiation pressure may be necessary to launch a galactic outflow (see e.g. Zhang 2018). The dynamics of starburst winds (Strickland & Stevens 2000;Strickland et al. 2002) is qualitatively similar to that of stellar winds and winds of star clusters (Castor et al. 1975;Weaver et al. 1977;Koo & McKee 1992a,b) when the galactic ISM is roughly homogeneous (Strickland et al. 2002). However, when the medium is inhomogeneous, as expected in realistic cases (Westmoquette et al. 2009a,b), the hot gas follows the path of least resistance out of the disk, resulting into a non homogeneous outflow. Once in the halo, the hot gas expands freely and the geometry can be reasonably assumed to be spherical (see Cooper et al. 2007). For our purposes, the assumption of a spherical geometry is well motivated by the fact that accelerated particles probe large distances, averaging out any spatial inhomogeneities. Radiative losses can affect the wind dynamics and several theoretical and numerical works investigated the possible role of such losses, leading to a wide range of possible scenarios (see Bustard et al. 2016;Zhang 2018, and references therein). If the starburst wind is approximately adiabatic (as shown in numerical simulations, see e.g. Fielding et al. 2017;Schneider et al. 
2020), its behavior is in good agreement with the analytic model developed in CC85, which is adopted in this work. The first stage of the evolution of the wind bubble is a free expansion, which ends when the mass of the swept-up ambient medium becomes comparable to the mass injected in the form of a wind (t_free ≲ 1 Myr for an average halo density n_h ≈ 10^{-3} cm^{-3}). The wind is supersonic, so that it is preceded by a forward shock, while a reverse shock, the so-called termination shock, is launched towards the interior. During the free expansion phase the two shocks move outwards while staying very close to each other. The shocked wind and the shocked ISM are separated by a contact discontinuity. When the accumulated mass eventually becomes larger than the mass added in the form of a wind, the outflow decelerates appreciably. If the CGM is assumed to be spatially homogeneous, the radius of the forward shock grows in time as R_FS ∝ t^{3/5}, while the termination shock follows the trend R_sh ∝ t^{2/5} (see Weaver et al. 1977; Koo & McKee 1992a). The bubble eventually reaches a pressure-confined state, typically after a few tens of Myr. This late stage of the evolution is characterized by a pressure balance between the cool wind ram pressure and the pressure of the undisturbed halo medium P_h (which, in turn, is in equilibrium with the pressure of the shocked wind). At this point the wind shock is stalled, while the contact discontinuity and the forward shock keep slowly expanding in the CGM. As detailed in Lochhaas et al. (2018) (see also Strickland & Stevens 2000), the dynamics of the wind bubble depends on the density profile of the CGM gas.

The structure of the starburst-driven wind bubble can be pictured as onion-like (see top panel of Figure 1). The SBN, responsible for launching and powering the outflow, is located at the center of the system. The wind speed increases approaching the boundary of the SBN, where it becomes supersonic and quickly reaches its terminal velocity (v_∞). Beyond this point the wind velocity remains basically constant (see CC85 and the lower panel of Figure 1), up to the termination shock (located at R_sh), where the wind is slowed down and heated up.

[Figure 1. Top panel: sketch of the wind bubble structure. The SBN, from which the wind is launched, is located in the center of the galactic disk. The blue (red) arrow corresponds to the cool (shocked) wind region; the wind shock (R_sh) separates the two regions, and the forward shock (at R_FS) bounds the system from the undisturbed halo region (credit: I. Peretti). Bottom panel: wind velocity profile (thick red) and particle density profile (dot-dot-dashed blue), in arbitrary units normalized to their values at the SBN boundary; the wind shock is placed at 10 R_SBN for illustrative purposes.]

As we discuss below, this configuration is very interesting from the point of view of particle acceleration, in that the upstream region is in the direction of the SBN, hence particle escape from the upstream region is inhibited and becomes possible only through the external boundary of the wind bubble. The medium in which a galactic wind bubble expands affects the spatial structure of the bubble. Galactic halos are inferred to host a hot diffuse gas component with typically n_h ≲ 10^{-2} cm^{-3} and T_h ∼ 10^{6}-10^{7} K (Anderson et al. 2015; Tumlinson et al. 2017). Hence, in a starburst CGM the thermal pressure is expected to be P_h/k_B ≲ 10^{5} K cm^{-3} (where k_B is the Boltzmann constant).
In evolved wind bubbles, the balance between the thermal pressure in the halo and the wind ram pressure, 2 , sets the position of the termination shock: where ( 0 ) is the wind mass loss rate (in units of 1 M yr −1 ), ∞,8 is the terminal wind speed in units of 10 8 cm s −1 and ℎ,4 is the halo pressure in units of 10 4 cm −3 K. These three parameters characterize the global properties of the system (see also Veilleux et al. 2005;Strickland & Heckman 2009, for aditional details and connection to the core activity). While the termination shock is approximately stalled, the forward shock continues to expand as: where 43 = [ 2 ∞ /2]/10 43 erg s −1 is the wind power, h,−3 is the halo density in units of 10 −3 cm −3 and 7 is the time in units of 10 Myr (see also Koo & McKee 1992a). It follows that the typical Mach number of the forward shock is of order unity, starting at times of order ∼ 10 Myr. This is the main reason why efficient particle acceleration is not expected to take place at the forward shock. At the termination shock the conditions are more favorable. The temperature of the plasma at the wind shock sets the local sound speed. The adiabatic expansion cools the gas as ∝ −4/3 , so that assuming a SBN size SBN ∼ 200 pc and sh as given by Equation (1), one can expect a temperature ≈ 10 6 K at the wind shock when the SBN is as hot as SBN ≈ 10 8 K. Therefore, the sound speed of the free expanding wind at the shock is ≈ 10 2 T 1/2 6 km s −1 . As a consequence the Mach number of the plasma at the wind shock is of order ∼ 10 making it the only plausible site for particle acceleration in the wind bubble system. For the innermost regions of the system (the SBN and the cool wind) we adopt a smooth parametrization of the model of CC85 for the velocity profile (see bottom panel Figure 1). The model describes a wind where the velocity increases toward the edge of the SBN. At the SBN boundary the wind becomes supersonic and quickly reaches ∞ , while beyond the termination shock, the gas gets heated and slowed down. At the termination shock we adopt the jump condition appropriate for a strong shock so that 1 = ∞ and 2 = 1 /4. Moreover, for adiabatic expansion, the shocked wind moves with a velocity that drops with distance as ∼ −2 , namely 2 =constant. The wind plasma is assumed to be fully ionized, while the density in the SBN is assumed to be dominated by dense molecular gas. Hence the particle density in the system (blue dot-dot-dashed curve in Figure 1) can be approximated as: For the purpose of estimating the diffusion coefficient for high energy particles in the bubble, we assume that a fraction B (in MBPC21 we have used = /2) of the kinetic energy density of the free expanding wind is converted at any given radius into turbulent magnetic field energy density. We also assume that at the termination shock the perpendicular components of the magnetic field are compressed by a factor 4, which implies that the strength of the magnetic field downstream is enhanced by a factor √ 11 and remains spatially constant in the downstream region. Overall, the strength of the magnetic field can be written as: where, the radial dependence of the upstream wind profile ( ), has a negligible impact on the magnetic field in the corresponding region. 
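As an order-of-magnitude illustration of the relations behind Equations (1) and (2), the short sketch below balances the wind ram pressure, Ṁ v_∞/(4π R²), against the halo pressure P_h to obtain the stalled termination-shock radius, and uses the adiabatic-bubble scaling R_FS ∝ (L_w t³/ρ_h)^{1/5} for the forward shock. The order-unity prefactor ξ, the neglect of the outflow geometry, and the benchmark numbers are illustrative assumptions rather than the exact normalizations of the published equations.

import numpy as np

MSUN = 1.989e33          # g
YR   = 3.156e7           # s
PC   = 3.086e18          # cm
KB   = 1.381e-16         # erg/K
MP   = 1.673e-24         # g

def r_termination_shock(mdot_msun_yr, v_inf_kms, ph_over_k):
    """Stalled termination-shock radius (pc) from Mdot*v_inf/(4*pi*R^2) = P_h."""
    mdot = mdot_msun_yr * MSUN / YR
    v    = v_inf_kms * 1e5
    ph   = ph_over_k * KB
    return np.sqrt(mdot * v / (4.0 * np.pi * ph)) / PC

def r_forward_shock(mdot_msun_yr, v_inf_kms, n_h, t_myr, xi=0.9):
    """Forward-shock radius (pc) from the adiabatic-bubble scaling (L_w t^3 / rho_h)^(1/5)."""
    mdot = mdot_msun_yr * MSUN / YR
    v    = v_inf_kms * 1e5
    lw   = 0.5 * mdot * v**2                 # wind kinetic power
    rho  = n_h * MP
    t    = t_myr * 1e6 * YR
    return xi * (lw * t**3 / rho) ** 0.2 / PC

# B0-like numbers: Mdot ~ 3 Msun/yr, v_inf ~ 2000 km/s, P_h/k ~ 5e4 K cm^-3
print("R_sh ~ %.1f kpc" % (r_termination_shock(3.0, 2000.0, 5e4) / 1e3))
print("R_FS ~ %.0f kpc" % (r_forward_shock(3.0, 2000.0, 1e-3, 250.0) / 1e3))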
Assuming that the turbulent field gets organized according to a power spectrum ( ) ∝ − , the corresponding diffusion coefficient due to resonant particle scattering can be estimated as: where L is the Larmor radius, the particle velocity and = 5/3 (3/2) for Kolmogorov (Kraichnan) turbulence. Bohm diffusion would correspond to = 1. The quantity denotes the energy containing scale of the turbulence. For momenta > * , where * is defined such that ( * ) = , the diffusion coefficient changes its energy dependence due to lesser power on larger scales and can be written as (Subedi et al. 2017;Dundovic et al. 2020): In this work we adopt a Kraichnan spectrum of the turbulence, = 3/2 as the reference scenario and we assume to be comparable with the size of the SBN, namely ∼ 10 2 pc. MODEL In this section we provide a detailed description of the theoretical model. In § 3.1, we present the solution of the CR transport equation of particles accelerated at the wind shock of the starburst-driven wind bubble. Together with the solution we additionally describe the flux of escaping particles. In § 3.2 we describe the calculation of gamma rays and neutrinos from pp and p interactions. Particle acceleration at the termination shock Particle acceleration is assumed to take place at the termination shock. For the sake of simplicity we adopt a spherical symmetry neglecting the deformation induced by the surrounding medium. The bubble is assumed to be already evolved through the deceleration phase, so that the shock location is given by Equation (1). The transport of non-thermal particles in the bubble is determined by diffusion, adiabatic energy losses and gains, advection with the wind and catastrophic energy losses, that are dominated by pp inelastic collisions in the SBN, for those particles that have high enough energy to diffuse against the wind and reach the central region. The transport equation that we solve can be written as follows: where = ( , ) is the particle distribution function, ( , ) is the diffusion coefficient (in general space dependent), ( ) is the wind profile, ( , ) is the injection term and Λ( , ) is the rate of energy losses. Assuming that particle injection only takes place at the location of the termination shock and is limited to a single momentum inj , we can write: where 1 and 1 are the density and wind speed immediately upstream of the shock, and inj is the fraction of particles involved in the acceleration process. We take inj such that the pressure of accelerated particles is limited to a fraction, ∼ 10% of the wind ram pressure at the shock. Notice that, as long as the shock compression factor is larger than 2.5 (meaning that the spectrum is harder than −5 ), the value of inj does not play any relevant role in the normalization of Equation (8). The loss term takes into account energy losses for proton-proton collisions: where is the particle speed, ( ) = ( )/ is the target density in the wind and pp is the cross section (Kelner et al. 2006). We neglect losses due to interactions since, as we show below, the maximum energy that particles reach is barely enough to exceed the kinematic threshold for this process, using optical (OPT) and ultraviolet (UV) photons as targets. Equation (7) is solved by following the technical procedure put forward in MBPC21 for the case of winds from star clusters. We refer the reader to that paper for details, while here we only summarize the main equations that allow us to obtain the solution of the problem by iterations. 
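The field and diffusion coefficient introduced above can be made concrete with the following sketch: the turbulent field carries a fraction ε_B of the wind kinetic energy density, B(r) ≈ (ε_B Ṁ v_∞)^{1/2}/r (compressed by √11 downstream), and resonant scattering gives D(E) ≈ (v r_L/3)(r_L/L_c)^{1-δ} for r_L < L_c, steepening to D ∝ E² beyond the coherence length. The 1/3 prefactor, ε_B = 0.1 and L_c = 100 pc are illustrative assumptions, so only the energy and radial scalings should be read off.

import numpy as np

C = 3.0e10                       # cm/s
MSUN_YR = 1.989e33 / 3.156e7     # g/s per Msun/yr
PC = 3.086e18                    # cm

def b_upstream(eps_b, mdot_msun_yr, v_inf_kms, r_pc):
    """Turbulent magnetic field (Gauss) in the free wind at radius r."""
    mdot, v, r = mdot_msun_yr * MSUN_YR, v_inf_kms * 1e5, r_pc * PC
    return np.sqrt(eps_b * mdot * v) / r

def larmor_radius(e_ev, b_gauss):
    """Ultra-relativistic proton Larmor radius in cm."""
    return 3.34e-3 * e_ev / b_gauss

def diffusion_coefficient(e_ev, b_gauss, lc_pc=100.0, delta=1.5):
    """D(E) in cm^2/s; delta = 3/2 Kraichnan, 5/3 Kolmogorov, 1 Bohm."""
    r_l, lc = larmor_radius(e_ev, b_gauss), lc_pc * PC
    d_res = (C * r_l / 3.0) * (r_l / lc) ** (1.0 - delta)   # resonant regime, r_L < L_c
    d_hi  = (C * lc / 3.0) * (r_l / lc) ** 2                # small-angle regime, r_L > L_c
    return np.where(r_l < lc, d_res, d_hi)

# Example: field and D at a ~2 kpc termination shock for a B0-like wind
b1 = b_upstream(0.1, 3.0, 2000.0, 2000.0)
print("B(R_sh) ~ %.1f microGauss" % (b1 * 1e6))
print("D(10 PeV) ~ %.2e cm^2/s" % diffusion_coefficient(1e16, b1))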
We also discuss the differences with respect to MBPC21, mainly due to the presence of energy losses. The method starts from determining the solution of the transport equation upstream and downstream separately and then impose the continuity of the solution at the shock location. The solution in the upstream region reads: where sh is the particle distribution function at the shock and 1 is an effective velocity felt by particles upstream, due to the combination of spherical symmetry and energy losses: The functions 1 and 1 describe adiabatic energy losses-gains and catastrophic energy losses, respectively, and are reported in Appendix B In the downstream region, the solution is made easier by the fact that the flow is divergence-free (namely 2 is constant) and energy losses due to pp scatterings are negligible. This simplification allows us to write where and esc ≈ FS is the location where particles escape from the system and assumed to be equal to the forward shock radius. Integrating the transport equation in a narrow region around the termination shock we find an equation for sh ( ), after using the solution upstream and downstream to evaluate the spatial derivatives on the two sides of the shock. Here Γ 1 and Γ 2 describe the departure from the standard solution − that would have been obtained at a plane infinite shock, due to a variety of factors: The function Γ 1 reflects the effects of spherical symmetry and losses upstream, and is appreciably different from unity at energies close to the maximum energy, namely at energies where 1 / 1 becomes comparable to sh (see also Berezhko & Völk 1997). For the particles that are energetic enough to reach the SBN, energy losses in the dense gas become important both for CR transport (if is large enough) and production of secondary radiation (see Bustard et al. 2017;Merten et al. 2018, for related discussions). However, in all cases that we have studied this phenomenon never leads to observable consequences. Notice that Equation (14) expresses the solution in a recursive form, because both 1 and 1 are function of . The actual solution is obtained using an iterative technique as described in MBPC21. The spectral modification due to the transport in the downstream region is contained in the function Γ 2 , which becomes important when the diffusion length of particles ( ∼ 2 / 2 ) becomes comparable to the size of the shocked wind region ( esc − sh ). The escape flux at the bubble boundary, defined as esc = − ( esc ), can be easily derived from Equation (12): . and the total flux of escaping particles is esc = 4 2 esc esc . The escape flux modifies the solution at the shock only very mildly and only for very high particle energies. On the other hand the spatial extent of the downstream region (shocked wind), which in turn depends on the age of the bubble, reflects rather strongly on the gamma-ray and neutrino signal from a SBG. The assumption of stationarity adopted in the equation requires that the acceleration process is much faster than the time for dynamical evolution of the system. This is typically the case, but as a consistency check, we always verify that the acceleration time defined as: be shorter than the lifetime of the system (see e.g., Blasi 2013). Production of secondaries As discussed in § 3.1 (see also Appendix A for additional details), and interactions in the downstream region take place with typical timescales larger than Gyr, so that their dynamical impact on the CR transport can be neglected. 
However, the luminosity of the wind bubble can be a sizable fraction of the SBN's luminosity due to the large spatial extent of the system (see also Romero et al. 2018;Müller et al. 2020, for related discussions). We thus compute the gamma-ray and neutrino emission resulting from the interaction of CRs with i) particles in the plasma through pp interactions and ii) thermal photons, as produced by stars and dust in the galaxy and illuminating the wind bubble itself ( interactions). The calculation of gamma-rays produced through pp interactions has been performed using the NAIMA package (Kafexhiu et al. 2014) which implements the procedure described in (Kelner et al. 2006) while the gamma rays produced through interactions are computed following (Kelner & Aharonian 2008). The gamma-ray absorption inside the SBN is taken into account as in (Peretti et al. 2019), where the background photon field is assumed to be constant in the SBN volume. On the other hand, the size of the system and the −2 dependence of the photon field imply negligible absorption effects for gamma rays produced in the wind bubble. Finally, the gamma-ray absorption on the EBL on cosmological distances is computed adopting the EBL model of Franceschini & Rodighiero (2017). The single flavor neutrino flux is computed assuming equipartition among flavors, ( , , ) = (1 : 1 : 1), due to flavor oscillations during propagation to the Earth. The production of neutrinos in pp interactions is estimated by rescaling the gamma-ray luminosity as: ( ) ≈ ( )/2, where ≈ 2 . The neutrinos produced in the interactions are computed following Kelner & Aharonian (2008). EMISSION FROM INDIVIDUAL STARBURSTS In this section we discuss the results of the calculation of the spectra of accelerated particles and high energy gamma rays and neutrinos ( §4.1) for an individual SBG and how the properties of the bubble and of the accelerated particles change when changing parameters ( §4.2). Particles and spectra We discuss two stereotypical models of SBGs so as to illustrate how the results change by changing the properties of the SBN. The two benchmark cases are labelled as B0 and B1 and correspond to the parameters' values reported in Tab. 1. The B0 prototype is reminiscent of local mild SBGs such as M82 and NGC253. We assume the photon field of NGC253 (Galliano et al. 2008) as representative of the prototype B0. Observations and numerical simulations of M82 suggest a terminal (wind) velocity ∼ 2000 km s −1 (Strickland & Heckman 2009;Melioli et al. 2013), with a mass loss rate up to 3 M yr −1 . Similar terminal wind speed but higher mass loss rate are inferred for NGC253 (Strickland et al. 2002;Bolatto et al. 2013). The B1 configuration represents a somewhat more powerful wind that can be expected in objects for which the nuclear activity and temperature is higher (such as LIRGs) than what is inferred for M82 and NGC253 (see e.g., Bustard et al. 2017). For the B1 prototype we assume that the photon background is somewhat larger than B0 and for reference we assume the SED of NGC1068 (Galliano et al. 2008). In Table 1 we also show the maximum energy of accelerated particles, max , and the single flavor neutrino energy flux produced in the wind bubble at 25 TeV, defined as˜, as observed from a distance of 3.9 Mpc. Both these quantities are outputs of our calculations. 
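The pp rescaling quoted in § 3.2, L_ν(E_ν) ≈ L_γ(E_γ)/2 with E_γ ≈ 2 E_ν and flavour equipartition, can be written compactly as in the sketch below, where the input gamma-ray spectrum is a toy power law with an exponential cutoff rather than the NAIMA output used for the actual calculation; the normalization and cutoff are placeholders.

import numpy as np

def sed_gamma(e_gev, norm=1e40, index=2.0, e_cut=1e7):
    """Toy pionic gamma-ray SED, E^2 dN/dE in erg/s; placeholder, not the NAIMA output."""
    return norm * (e_gev / 1e3) ** (2.0 - index) * np.exp(-e_gev / e_cut)

def sed_nu_single_flavour(e_nu_gev):
    """Single-flavour neutrino SED from the pp rescaling L_nu(E_nu) ~ L_gamma(2 E_nu)/2."""
    return 0.5 * sed_gamma(2.0 * e_nu_gev)

for e_nu in np.logspace(3, 7, 5):            # 1 TeV to 10 PeV
    print("E_nu = %.0e GeV   E^2 dN/dE ~ %.2e erg/s"
          % (e_nu, sed_nu_single_flavour(e_nu)))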
The positions of the termination shock, Equation (1), and of the edge of the bubble, Equation (2), for the two prototypes are calculated fixing the age of the system to * age = 250 Myr and by assuming a value of the pressure in the external medium, ℎ / . Results for the cases B0 and B1 are reported in Figures 2 and 3, respectively. The top panels show the particle spectrum at the shock, the escaping flux and the particle spectrum in the cold wind region as computed at different radii (0.75 sh and 0.50 sh ). The vertical purple line identifies the position of the maximum momentum max of accelerated particles, defined as the value at which the spectrum ( ) is reduced by . The bottom panels of the same Figures show the corresponding spectra of gamma rays and neutrinos resulting from pp and p interactions computed for the cases of a strong shock ( = 4) and assuming that the source is located at a distance of ∼ 3.9 Mpc (appropriate for M82). The red thick (thin) solid line shows the gamma emission from the wind region after (before) correcting for absorption on the EBL during transport from the source to Earth. Notice that the same plots report also the contribution of gamma rays and neutrinos produced by the interaction of CRs accelerated by SNRs inside the SBN and interacting inside the nucleus, assuming a source spectrum ∼ −4.2 , as inferred for M82 by Peretti et al. (2019), with a maximum energy 1 PeV. The thick (thin) line refers to the flux after (before) correction for absorption en route. The flux of muon neutrinos from the wind region is shown as a blue thick dash-dotted line. Such flux is dominated by the contribution of pion production in pp interactions downstream of the termination shock. In Figure 3, due to the larger luminosity of the SBG, the contribution to the neutrino flux due to photomeson production (dash dot-dotted orange line) becomes visible in the plot. Such flux is present only in the highest energy region because of the kinematic threshold of the process of photopion production. A few comments on the spectrum of accelerated particles (top panels in Figure 2 and 3) are in order: as it would be the case for standard DSA, the spectrum of accelerated particles is a power law when the momentum is much smaller than the maximum one. On the other hand, as discussed in MBPC21, spherical symmetry induces a dependence of the spectrum on the diffusion coefficient that is most marked around max . This is because particles can feel an effective plasma velocity which is smaller than when their diffusion length becomes comparable with sh . Particles with high energies can travel farther away from the shock and feel its curvature in a more prominent way. The deviation from the standard powerlaw is more visible for weak energy dependence of the diffusion coefficient. In other words, the deviation from a power-law would start at lower energies for Kolmogorov diffusion, while it would occur closer to max for Bohm diffusion (see discussion in MBPC21). These subtle effects also reflect in the spectrum of secondary gamma rays and neutrinos. The maximum energy reached by accelerated particles varies between tens of PeV for the prototype B0 to 100 PeV for B1. As previously mentioned, here we assumed = 4, but in § 6 we discuss the case of softer slopes as might arise due to the motion of scattering centers in the downstream plasma (see Caprioli et al. 2020, for details). 
By looking at the particle spectrum in the inner region, one can conclude that only particles close to the maximum energy can diffuse efficiently against the wind and populate the inner region of the system. Nevertheless, it appears clear that, unless an additional acceleration mechanism is present in the system, the number of particles that can successfully diffuse back to the SBN is strongly suppressed by the geometry of the system. Indeed, in order to successfully diffuse upwind towards the SBN, particles need to have a momentum p_b such that the diffusion length becomes larger than the upstream region, namely D(p_b)/u_1 ≳ R_sh. Finally, we notice that the spectrum of the escaping flux, as also discussed in MBPC21, does not differ strongly from the solution at the shock in terms of spectral slope and maximum energy.

[Figure 2. Particle spectrum and HE multimessenger spectra at Earth assuming d = 3.9 Mpc for the benchmark prototype B0. Top panel: proton spectrum at the shock (thick black line) compared to the solution at 0.75 R_sh (red dashed line) and 0.5 R_sh (blue dot-dashed line); the escape flux is also shown (green dotted line). Bottom panel: gamma-ray and neutrino flux from the wind (thick red and dot-dashed blue lines) compared to the emission from the SBN core (green dashed and pink dotted lines). The effect of EBL absorption is taken into account assuming a distance of 3.9 Mpc; for comparison, and for a qualitative view of the absorption in the source, the gamma-ray components are also shown with the EBL absorption neglected (thin lines).]

The gamma-ray emission from the SBG is dominated by the emission of the SBN for energies ≲ 1 TeV. However, depending on the total power of the system and on the conditions of the external medium into which the bubble is expanding, the emission from the wind region may become dominant at high enough energy and be identifiable as an extension of the spectrum up to the energy at which absorption on the EBL becomes substantial. In the scenario where accelerators in the SBN cannot exceed ∼ PeV energies, all neutrinos with energy ≳ 10^2 TeV are produced in the wind, and the luminosity increases with Ṁ since this parameter directly affects the target density for pp interactions (see also Tab. 1). The slope of the neutrino spectra below ∼ 10 TeV is slightly harder than E^{-2} due to the energy dependence of the cross section for pp inelastic collisions, σ_pp. Above ∼ 10 TeV the spectral slope gets gradually softer due to the shape of the parent proton population. The hadronic emission from the wind is dominated by the pp interactions taking place in the shocked wind region, whereas the contribution from the free wind region might be relevant only for extreme values of Ṁ, or possibly during some early stages of the bubble evolution. The photomeson contribution is found to be always subdominant compared to the pp one and is irrelevant if E_max ≲ 10^2 PeV, because of the kinematic threshold of this channel. Finally, for some massive winds characterized by Ṁ ≳ 10 M⊙ yr^{-1}, the gamma-ray emission from the wind might become comparable with the SBN component even below the TeV range.

Exploring the parameter space
In the discussion above we identified two main prototypical examples of SBGs, but clearly the zoo of these astrophysical objects cannot be reduced to just two cases.
Here we provide a brief overview of what is expected to happen in different realizations of such systems. We do so by exploring a grid of different configurations of the main macroscopic wind properties, mass-loss rate ( ) and terminal wind speed ( ∞ ), and later by focusing on some specific parameter variations and the associated outcome. The corresponding relevant quantities are summarized in two pairs of plots (Figures 4 and 5) and in Table 2. In what follows we focus on the effects of different conditions on: 1) maximum energy and 2) luminosity. In our parameter investigation we define a range for the mass-loss rate, 0.1 < /[ yr −1 ] < 50 and for the terminal wind speed, 0.5 < ∞ /[10 3 km s −1 ] < 3. In order to keep track of the temporal evolution, we additionally select two characteristic times at which we take a snapshot of the system: age,1 = 100 Myr and age,2 = 250 Myr. In Figures 4 and 5 we show the results obtained at age,1 and age,2 , respectively, under the assumption of ℎ / = 5·10 4 K cm −3 . The upper panels illustrate the changes in the maximum energy max and the lower panels show the single flavor neutrino flux at 25 TeV, . In general, it can be observed that the higher the power of the system ( 2 ∞ ), the higher the maximum energy. In particular, as discussed in §3 (see also Morlino et al. 2021, for additional details), the most stringent condition on the maximum energy is typically set by the transport in the upstream region as ( (1) max ) = sh ∞ . Such a conditions can be re-expressed as which leads to (1) max ∝ 3 ∞ for the assumed Kraichnan-like turbulence. Notice that although the maximum energy in Eq. (19) identifies an energy where the flux drops most prominently, as discussed above, the spherical symmetry of the problem leads to a gradual spectral steepening at energies below max . This effect is embedded in the two functions Γ 1 and Γ 2 described in § 3. At odds with the case of the maximum energy, the neutrino luminosity has a very mild dependence on the terminal wind speed provided that it is above the threshold to accelerate efficiently >PeV particles, whereas it has approximately a linear dependence on the mass-loss rate. The latter scaling is due to the direct connection between and the target density. By comparing the results obtained at age,1 with those obtained at age,2 we observe that the age of the system does not have a dominant impact on the maximum energy as expected from Eq. (19): the acceleration time is much shorter than the dynamical time of these systems. The slight difference can be understood by the interplay between the two functions, Γ 1 and Γ 2 regulating the HE cut-off. In fact, an older system is characterized by a less stringent constraint produced by Γ 2 , while the one set by Γ 1 is practically unmodified. On the other hand the luminosity is found to increase with time due to the increase of target material accumulated in the downstream region. In order to evaluate the impact of changing other relevant parameters' values, we now focus on a set of limited cases listed in Table 2 and discuss quantitatively their numerical outcomes. We first change the total luminosity of the system: L1 corresponds to a strong wind as the one that can be found in LIRGs; L2 corresponds to a mild star forming source. In line with what we discussed above, these two situations illustrate that, maintaining the same halo conditions, the maximum energy increases with the power of the wind. 
The location of the wind termination shock as well as of the forward shock is moved farther away from the center when the power is larger. Consequently, the most powerful sources naturally lead to a larger volume of the bubble and higher gamma-ray and neutrino luminosity. In scenarios P1, P2 and P3, the total power is as in B0, but the surrounding pressure in the halo varies by 3 orders of magnitude. This again impacts the location of the termination shock sh which in turn affects the maximum energy even though this latter quantity varies only by a factor of ∼ 2. In particular, in agreement with Eq. (19), the smaller the halo pressure, the higher the maximum energy . Although Eq. (19) is informative on the dependence of the maximum energy on the CGM pressure, the actual scaling of max on ℎ is not straightforward due to the role played by the functions Γ 1 and Γ 2 in shaping the spectrum close to max , and due to the transition of the diffusion coefficient to the ∼ 2 regime, when L ≈ . The last effect is in fact occurring at energies close to the actual max . Scenario P3 corresponds to a relatively extreme situation since the wind evolves in a very low pressure environment compared to what might be expected in a starburst halo. Under these conditions, the system would need ∼ 5 Gyr to reach the pressure-confined state. Consequently, both the forward and the wind shocks are still in their expansion phase after 250 Myr. In this case, the wind shock radius cannot be computed under the pressure confined assumption, so that we adopt equation (4.2) of Koo & McKee (1992a). In this scenario the maximum energy is somewhat close to the scenario labelled as P2, but the luminosity is smaller due to the lower ram pressure at the shock which, in turn, results from the larger shock distance from the center. Even if the impact on the maximum energy is marginal, the value of the circumgalactic pressure strongly impacts on the gamma-ray and neutrino luminosity. This is a direct consequence of the assumed proportionality between the CR energy density and the free wind ram pressure which is, in turn, roughly equal to the external pressure. Comparing cases P1 with P3, where ℎ is 10 3 smaller, the neutrino luminosity decreases by ∼ 140 times. The proportionality is not exactly linear because the spatial distribution of both CRs and gas in the two cases is different. By comparing scenario L2 with P1 and L1 with P2, one can notice that sources with similar age and size can strongly differ both in maximum energy and luminosity. The former result can be easily understood given the dependence of the maximum energy on the ∞ (see Equations (15), (16) and Eq. (19)). The luminosity, on the other hand, is set by the combination of the pressure at the shock, which determines the total number of accelerated particles, and the target density (which in turn depends on ). Finally, T1 and T2 correspond to B0 at different times age , 100 Myr and 300 Myr, illustrating that the slow evolution in time of the system does not have a strong impact on the maximum energy. However, sources become more luminous while ageing, due to the target material that accumulates and larger volume of the shocked wind region where pp interactions are taking place. This supports numerically what has been discussed above based upon the contour plots parameter investigation. 
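The trends summarized above (and in Figures 4-5 and Table 2) can be reproduced qualitatively with the following sketch, which solves the upstream confinement condition D(E_max) = R_sh v_∞ on a small grid of Ṁ and v_∞ and then checks that the acceleration time at E_max stays below the starburst lifetime. The field normalization (ε_B = 0.1), coherence length (L_c = 100 pc), halo pressure and all prefactors are illustrative assumptions, and the textbook DSA estimate used for t_acc may differ in detail from Equation (18); the printed values of E_max should therefore be read only as order-of-magnitude indications of the trend with wind power.

import numpy as np
from scipy.optimize import brentq

MSUN_YR = 1.989e33 / 3.156e7     # g/s per Msun/yr
PC, KB, C = 3.086e18, 1.381e-16, 3.0e10

def diff_coeff(e_ev, b_gauss, lc_cm):
    """Kraichnan-like D(E) in cm^2/s, steepening to ~E^2 above r_L = L_c."""
    r_l = 3.34e-3 * e_ev / b_gauss                       # proton Larmor radius, cm
    return np.where(r_l < lc_cm,
                    (C * r_l / 3.0) * (r_l / lc_cm) ** -0.5,
                    (C * lc_cm / 3.0) * (r_l / lc_cm) ** 2)

def shock_and_field(mdot_msun_yr, v_inf_kms, ph_over_k=5e4, eps_b=0.1):
    mdot, v = mdot_msun_yr * MSUN_YR, v_inf_kms * 1e5
    r_sh = np.sqrt(mdot * v / (4.0 * np.pi * ph_over_k * KB))   # pressure balance
    b = np.sqrt(eps_b * mdot * v) / r_sh                        # field at the shock
    return r_sh, b, v

def e_max_ev(mdot_msun_yr, v_inf_kms, lc_pc=100.0, **kw):
    """Solve the upstream confinement condition D(E_max) = R_sh * v_inf."""
    r_sh, b, v = shock_and_field(mdot_msun_yr, v_inf_kms, **kw)
    return brentq(lambda e: float(diff_coeff(e, b, lc_pc * PC)) - r_sh * v, 1e12, 1e21)

for mdot in (1.0, 3.0, 10.0):
    for v in (1000.0, 2000.0, 3000.0):
        print("Mdot = %4.1f Msun/yr, v_inf = %4.0f km/s -> E_max ~ %6.0f PeV"
              % (mdot, v, e_max_ev(mdot, v) / 1e15))

# Stationarity check of Sec. 3.1 with the textbook DSA acceleration time,
# t_acc ~ 3/(u1 - u2) * (D1/u1 + D2/u2), for the B0-like benchmark:
r_sh, b, u1 = shock_and_field(3.0, 2000.0)
d1 = float(diff_coeff(e_max_ev(3.0, 2000.0), b, 100.0 * PC))
d2, u2 = d1 / np.sqrt(11.0), u1 / 4.0        # rough guess: compressed downstream field
t_acc = 3.0 / (u1 - u2) * (d1 / u1 + d2 / u2)
print("t_acc(E_max) ~ %.0f Myr (to be compared with a lifetime of 100-300 Myr)"
      % (t_acc / (1e6 * 3.156e7)))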
DIFFUSE FLUXES OF COSMIC RAYS, GAMMA RAYS AND NEUTRINOS In this section we illustrate our calculations of the diffuse flux of gamma rays, neutrinos and cosmic rays due to the collective emission of starburst galactic winds distributed in redshift. In § 5.1 we evaluate the starburst contribution to the diffuse fluxes of gamma rays and neutrinos and compare them to those observed by Fermi-LAT (Ackermann et al. 2015) and IceCube (Abbasi et al. 2020) respectively. In § 5.2 we explore the associated flux of CR protons accelerated at the termination shock and eventually escaping the bubble. Gamma rays and neutrinos We work under the assumption that starburst winds are ubiquitous in SBGs and we count sources following the star formation rate function (SFRF) approach previously adopted by Peretti et al. (2020) and defined for redshift up to = 4.2 (see also Gruppioni et al. 2015, for additional details). Differently from the case of SBNi, where, as discussed by Peretti et al. (2020), the luminosity scales with the SFR, the dependence of the wind properties on the SFR is highly non trivial and difficult to constrain. Therefore, in the following we rely on the assumption that on average all winds above a given SFR value, min , can be described in terms of a single prototype. The diffuse flux can be computed as where ( , ) is the flux density of the particle specie = { , }, Φ SFRF is the SFRF, Ω is the comoving volume element per redshift interval and solid angle Ω. In a spatially flat space-time C ( ) = L ( )/(1 + ) and ( ) = √︁ Ω M (1 + ) 3 + Ω Λ . The quantity is assumed to vanish in the neutrino case while, in the case of gamma rays, it represents the opacity due to the presence of the EBL and cosmic microwave background (CMB) (Franceschini & Rodighiero 2017). The contribution of the electromagnetic cascade is computed as in Peretti et al. (2020) (see also Berezinsky & Kalashev 2016, for additional details). Finally, min represents the minimum star formation rate that we adopt as a free parameter considering the value of * ∼ 1 M yr −1 (Peretti et al. 2020) as a firm lower limit. The assumption of min as free parameter is dictated by the poorly constrained ratio between the mass-loss rate of the wind and the star formation rate in the SBN, R = / (see e.g., Veilleux et al. 2005, for a detailed discussion). In general R 2 (see also Bustard et al. 2016;Zhang 2018, for detailed discussions), hence we fix R = 2. Therefore, min increases with the wind mass-loss rate. We calculate the diffuse emission from the SBN and from its wind in two scenarios, that we refer to here as I and II, in which B0 and B1 are respectively used as a prototype (see Table 1). Following our criterion on R, we adopt a min of 2.5 and 5 M yr −1 for cases I and II, respectively. In Figure 6 we show the spectra of diffuse -rays and for the two scenarios I and II (top and bottom panels, respectively). In both cases the central SBN provides the main contribution to IceCube (Abbasi et al. 2020) data. The color code is the same for all panels: total gamma rays and single flavor neutrinos are shown as thick red lines and blue filled squares respectively. Direct gamma-ray component from the SBN and wind (dashed violet and two-dot-dashed orange respectively) are shown separately with their associated cascade spectra (dot-dashed magenta and three-dot-dashed respectively). The neutrino components from SBNi (green empty circles) and from the winds (grey empty triangles) are shown separately. 
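The structure of the line-of-sight integral in Equation (20) is illustrated by the sketch below for the neutrino case (where the opacity term vanishes): a per-source spectrum is weighted by a comoving source density and by the comoving volume per unit redshift and solid angle out to z = 4.2. Both the source density n_src(z) and the per-source emissivity q_src(E) are toy placeholders standing in for the SFR-function counting and for the full wind-bubble model, so the printed numbers are not the fluxes shown in Figure 6.

import numpy as np

H0 = 70.0 / 3.086e19             # s^-1 (70 km/s/Mpc)
OM, OL, C = 0.3, 0.7, 3.0e10     # flat LambdaCDM; c in cm/s
MPC = 3.086e24                   # cm

def hubble_ratio(z):
    return np.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance(z, n=256):
    zz = np.linspace(0.0, z, n)
    return (C / H0) * np.trapz(1.0 / hubble_ratio(zz), zz)          # cm

def dvc_dz_domega(z):
    return C / H0 / hubble_ratio(z) * comoving_distance(z) ** 2     # cm^3 sr^-1

def n_src(z):
    """Toy comoving density of contributing starbursts (cm^-3); placeholder for the SFRF counting."""
    return 1e-5 / MPC ** 3 * (1.0 + z) ** 3 * np.exp(-z / 1.5)

def q_src(e_gev):
    """Toy per-source neutrino emissivity dN/dE (GeV^-1 s^-1); placeholder spectrum."""
    return 1e44 * e_gev ** -2.0 * np.exp(-e_gev / 3e5)

def diffuse_flux(e_gev, zmax=4.2, nz=60):
    zz = np.linspace(1e-3, zmax, nz)
    vals = [n_src(z) * dvc_dz_domega(z) * q_src(e_gev * (1.0 + z))
            / (4.0 * np.pi * comoving_distance(z) ** 2)             # flat Universe
            for z in zz]
    return np.trapz(vals, zz)                                       # GeV^-1 cm^-2 s^-1 sr^-1

for e in (1e3, 1e4, 1e5):                                           # 1, 10, 100 TeV
    print("E = %.0e GeV:  E^2 Phi ~ %.1e GeV cm^-2 s^-1 sr^-1"
          % (e, e ** 2 * diffuse_flux(e)))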
the total gamma-ray diffuse flux (dashed violet line and thick red line respectively). The latter lies below the diffuse flux measured by Fermi-LAT and never exceeds the upper limits imposed by the superposition of point-like sources (e.g. Lisanti et al. 2016). As described above, the wind region also contributes to the gammaray emission, and the corresponding diffuse flux is shown as an orange two-dot-dashed line in Figure 6. The cascade components (three-dot-dashed brown and dot-dashed magenta for SBNi and winds respectively) are always subdominant and change their relative contribution depending on the scenario. The neutrino flux from SBNi (empty green circles) drops considerably above ∼ 50 TeV, as a result of the proton maximum energy at sources in the SBN being ∼ 1 PeV. The flux of neutrinos produced in the wind (empty gray triangles) through pp collisions extends to 300 TeV and dominates the diffuse emission at such energies, at least at the level of ∼ 10 −9 GeV cm −2 s −1 sr −1 in the most pessimistic scenario (case I) (this lower limit corresponds to ∼ 10% of the Ice-Cube flux of through-going muons reported by Haack & Wiebusch 2018). We finally notice that, if star forming galaxies were dominating the diffuse gamma-ray flux as suggested by Linden (2017); Roth et al. (2021), the associated neutrino flux would correspondingly increase. Cosmic rays CR protons accelerated at the termination shock of the SBG wind eventually escape the system from the outer edge of the bubble. Since energy losses do not affect the spectrum of these particles in a significant way, the escape spectrum is similar to the spectrum of particles accelerated at the termination shock. The diffuse flux of protons contributed by SBG winds, calculated using Equation (20) which neglects any propagation effects due to the intergalactic magnetic fields, is shown in Figure 7 for the scenarios I and II introduced earlier. Notice that since the maximum energy of accelerated particles is few hundred PeV, below the threshold for Bethe-Heitler pair production, the transport of these CRs on cosmological scales is dominated by adiabatic losses alone as due to the expansion of the Universe. In Figure 7, the predicted proton fluxes are compared with data of the all-particle spectrum as well as on the light component alone, as collected by IceTop (Aartsen et al. 2019), Tunka (Epimakhov et al. 2013;Prosin et al. 2016) and Kascade-Grande (Arteaga-Velázquez et al. 2018). This shows that if indeed particle acceleration at winds termination shock does take place, so as to contribute to the high energy neutrino flux, a sizeable contribution to the protons CR flux measured at the Earth should be expected. Notice that here we only estimated the flux of protons from SBGs, but it is reasonable to expect that heavier nuclei are also accelerated, if present in the wind. Such nuclei would contribute to the total CR flux at higher energies. We also observe that our results on the starburst contribution to the CR spectrum are qualitatively supported by Zhang et al. (2020) where, however, different assumptions were adopted for both the acceleration and transport of high-energy particles in galactic winds. A comment on the spectral shape of CRs from SBGs is in order: one can see in Figure 7 that the spectrum expected at the Earth is similar to that originated at individual wind bubbles, as a consequence of the fact that adiabatic losses do not change the spectral shape. 
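Both the gamma-ray/neutrino fluxes of § 5.1 and the proton flux discussed here descend from the same line-of-sight integral, Eq. (20). The following is a minimal sketch of that integral under the single-prototype assumption; the cosmology, the comoving density of winds above SFR_min, and the prototype emissivity Q(E) are placeholders rather than the values actually adopted for scenarios I and II, and propagation is reduced to redshifting plus an optional opacity term (τ = 0 for neutrinos and, to good approximation, for protons below the Bethe-Heitler threshold).

```python
import numpy as np
from scipy import integrate

C = 2.998e10                        # cm/s
H0 = 70.0 * 1e5 / 3.086e24          # s^-1 (70 km/s/Mpc, placeholder cosmology)
OM, OL = 0.3, 0.7
MPC3 = (3.086e24) ** 3              # cm^3

def H(z):
    # H(z) = H0 * sqrt(Om (1+z)^3 + OL) in a flat universe
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + OL)

def n_winds(z):
    """Placeholder comoving density of winds above SFR_min [cm^-3]."""
    madau = (1.0 + z) ** 2.7 / (1.0 + ((1.0 + z) / 2.9) ** 5.6)
    return 1e-4 / MPC3 * madau

def Q_proto(E_GeV):
    """Placeholder prototype emissivity dN/dE/dt [GeV^-1 s^-1]."""
    return 1e43 * E_GeV ** -2.0 * np.exp(-E_GeV / 1e8)

def diffuse_flux(E_GeV, tau=lambda E, z: 0.0, z_max=4.2):
    """phi(E) [GeV^-1 cm^-2 s^-1 sr^-1]; tau = 0 is the neutrino/proton case."""
    def integrand(z):
        # dV/dz/dOmega = c d_C^2 / H(z); the d_C^2 cancels against the
        # 1/(4 pi d_C^2) dilution of each source's flux, leaving c / (4 pi H).
        # The (1+z)E argument implements the adiabatic redshifting of energies,
        # which rescales a power law without changing its slope.
        return (C / (4.0 * np.pi * H(z)) * n_winds(z)
                * Q_proto(E_GeV * (1.0 + z)) * np.exp(-tau(E_GeV, z)))
    return integrate.quad(integrand, 0.0, z_max)[0]

toy_tau = lambda E, z: (E / 1e5) * z      # purely illustrative gamma-ray opacity
print(diffuse_flux(1e3, tau=toy_tau), diffuse_flux(1e3))
```

The last comment also makes explicit why the escaping proton spectrum keeps its shape at the Earth: the adiabatic mapping E → E/(1 + z) only shifts the normalisation of a power law.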
On the other hand, such a straightforward connection can be made here only because of the assumption that all SBGs can be considered as similar to one of the two prototypical sources adopted here. In general this is not the case, and one should expect that the higher the wind luminosity the higher the maximum energy of the accelerated particles, but the lower the number of such objects in the Universe. As a result, qualitatively, one might expect that the diffuse flux of CR protons (as well as neutrinos) might become steeper at energies higher than the maximum energy associated to the least luminous of the winds, as discussed in a generic case by Kachelrieß & Semikoz (2006). We finally observe that, based on our calculations, it is difficult to accelerate protons above ∼ 10 18 eV in the wind of normal SBGs. DISCUSSION AND CONCLUSION The theory of particle acceleration at the termination shock of winds originating in star clusters, as developed by Morlino et al. (2021), has been adapted here to the description of particle acceleration at the termination shock of starburst winds. At such shock the wind from the SBN is slowed down and heated up, so as to reach approximate pressure balance with the galactic halo in which the wind was originally expanding. In fact a weak forward shock moves slowly through the halo medium, but its Mach number is too low to be of relevance for particle acceleration. We have assumed a stationary spherical geometry for the wind blown bubble. Even though numerical simulations might show a variety of possible deviations from such an assumption, particle acceleration and transport are not particularly affected by such details. The theoretical approach is used to calculate the spectrum of accelerated particles and their spatial distribution inside the wind bubble, as well as their escape flux from the edge of the bubble. We discussed two prototypical SBG models, assumed to represent respectively a galaxy like M82 or NGC253 and a LIRG, and for each the flux of gamma rays and neutrinos produced due to CR interactions in the SBN and in the wind bubble has been calculated. The absorption of the gamma rays both inside the nucleus and en route to the Earth has been taken into account. The maximum energy of accelerated particles at the termination shock varies between a few tens PeV and 200 PeV for the two prototypes of SBGs considered here and in a range from a few PeV up to a few hundred PeV exploring a wider range of parameters. This implies that the corresponding neutrino flux extends up to 1-10 PeV, while the neutrino flux from the SBN is expected to extend up to a few tens of TeV, if CRs are accelerated by SNRs as in the Milky Way. Given the fact that the termination shock is strong, for the parameters adopted here, the spectrum of accelerated particles at max is close to −4 . Some theoretical arguments can be put forward, for instance based on a finite velocity of scattering centers in the downstream region, to argue that slightly steeper spectra are possible (see e.g. Caprioli et al. 2020). The diffuse gamma-ray flux due to the superposition of SBGs is dominated by the contribution of the central SBNi for energies 1 TeV. This flux does not exceed the upper limits imposed by Fermi-LAT based on the contribution of point-like sources (e.g. Lisanti et al. 2016). 
The wind region also contributes to the gamma-ray emission and such contribution can become comparable with that of the SBNi for 1 TeV, if the more luminous prototype is adopted in the calculation of the diffuse flux. The neutrino flux from SBNi drops considerably above ∼ 50 TeV, as a result of the proton maximum energy at sources in the SBN. On the contrary, the flux of neutrinos produced in the wind through pp collisions extends to 300 TeV and dominates the diffuse emission at such energies. The diffuse flux in this energy region is compatible with the IceCube data. The observational confirmation that particle acceleration at the termination shock and production of gamma rays and neutrinos in the wind bubble do take place can be achieved to some extent with upcoming observational facilities, as we discuss below. The study of starburst-driven galactic winds is generally performed via atomic and molecular line shifts and measurements of the X-ray luminosity (Veilleux et al. 2005;Strickland & Heckman 2009), but so far, detection in the gamma-ray domain are rather limited, and unable to resolve the SBN emission from a possible contribution from the wind bubble. A gamma-ray survey would be ideal to probe the model discussed in this work and would provide key information on its acceleration properties and luminosity. However, the most useful information would come from direct detection of the gammaray emission from the wind region. In the VHE range, the nearest starbursts, M82 and NGC253, could already be resolved by current instruments. In fact, a bubble of size ∼ 50 kpc, at a distance of ∼ 3 − 4 Mpc, corresponds to an angular size ∼ 1 • , typically resolved by imaging air Cherenkov telescopes (IACTs) (Park & VER-ITAS Collaboration 2015;Aleksić et al. 2016;Zorn 2019). However, given the total volume integrated luminosity of the order of (10 TeV) ∼ 10 41 GeV s −1 , expected for these sources, this task remains challenging. Next generation IACTs, such as ASTRI and CTA, with improved angular resolution and sensitivity, will open promising perspectives for a morphological study of these sources (Vercellone 2016;Acharya et al. 2019). A second method for probing the gamma-ray emission of the starburst wind consists in a spectral detection of the source at energy 10 TeV. The main reason for that is because gammagamma absorption on the IR is expected to be important above a few TeV in the SBNi. Differently, the emission from the wind comes basically unabsorbed. The observation of non-thermal radio/X-ray emission at large distances from the galactic disk can also be adopted to trace both the acceleration of primary electrons and the presence of secondaries produced via pp and p interactions. A multiwavelength modeling focused on the leptonic emission is left for future investigation. The detectability of SBGs as isolated neutrino sources is disfavored for the standard parameters adopted in this work. However, very young systems or scenarios involving high mass loss rates, 10 M yr −1 , can possibly produce fluxes close to the sensitivity level of km-squared detectors (Aiello et al. 2019;Aartsen et al. 2021). Differently from a single isolated source, the combined contribution of SBGs might provide interesting indications with higher statistical significance. Finally, we checked that the flux of CR protons accelerated at the termination shock and eventually propagating to the Earth is not in conflict with present day observation of the protons spectrum. 
In fact the diffuse flux of CRs from starburst wind bubbles is tantalizingly close to the observed flux, and limited to energies a few hundreds PeV, although it cannot be excluded that ultra-luminous SBGs or SBG with AGN activity may lead to the production of CRs with somewhat larger energies. However, the role of SBGs in contributing to the observed CR flux at ∼ 10 17 eV needs some additional support to be more robust than an order of magnitude estimation. Finally, in the context of the model developed in this work, we do not expect regular starbursts to be able to produce protons at energies larger than a few hundred PeV. We cannot exclude that higher energies may be reached in galaxies with SB activity hosting AGN jets, where particle acceleration would be regulated by different physical processes. DATA AVAILABILITY No data has been analyzed or produced in this work. The phenomenological predictions performed in this work are compared with the data produced and analyzed in available publications. In particular, Fermi-LAT gamma-ray data can be found in Ackermann et al. (2015) and IceCube neutrino data are published in Abbasi et al. (2020). Cosmic-ray data measured by IceTop, Kascade-GRANDE and Tunka can be found respectively in Aartsen et al. (2019), Arteaga-Velázquez et al. (2018) and Epimakhov et al. (2013); Prosin et al. (2016).
2021-04-23T01:16:05.577Z
2021-04-22T00:00:00.000
{ "year": 2021, "sha1": "34f2f681b5622d3f85e39d7983188d5dd0c0aae6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2104.10978", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "34f2f681b5622d3f85e39d7983188d5dd0c0aae6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237500730
pes2o/s2orc
v3-fos-license
Large-scale decrease in the social salience of climate change during the COVID-19 pandemic There are concerns that climate change attention is waning as competing global threats intensify. To investigate this possibility, we analyzed all link shares and reshares on Meta’s Facebook platform (e.g., shares and reshares of news articles) in the United States from August 2019 to December 2020 (containing billions of aggregated and de-identified shares and reshares). We then identified all link shares and reshares on “climate change” and “global warming” from this repository to develop a social media salience index–the Climate SMSI score–and found an 80% decrease in climate change content sharing and resharing as COVID-19 spread during the spring of 2020. Climate change salience then briefly rebounded in the autumn of 2020 during a period of record-setting wildfires and droughts in the United States before returning to low content sharing and resharing levels. This fluctuating pattern suggests new climate communication strategies–focused on “systemic sustainability”–are necessary in an age of competing global crises. Introduction Climate change and COVID-19 represent two global crises unfolding on different time scales, and there are growing concerns that immediate threats such as COVID-19 are taking public interest away from the long-term challenges of climate change (i.e., the "distraction effect"; [1]). Indeed, research indicates that humans have finite supplies of "surplus compassion" and limited "carrying capacity" for multiple mass communication topics due to cognitive limitations [2]. Mapping the social salience of climate change (i.e., the topic's prominence in society) is therefore germane across disciplines-from social influence research in economics and psychology [3,4] to the exploration of social tipping points and synchronization in complex systems research and network science [5,6]. Large-scale social salience insights also contribute to important practical goals such as predicting and managing fluctuations in "moral indignation, political celebration, ideological fervor, happiness, and value judgments" (p. 1411; [7]). However, though large-scale climate change salience is an important concern, especially when threats such as COVID-19 emerge and potentially compete for attention [1], there is little to no empirical research on the topic given the chronic problem of limited access to (highquality) large-scale temporal data [8]. Here, we address this shortcoming using the largest known dataset capturing climate change salience. Specifically, to explore whether the spread of the pandemic is associated with shifts in large-scale climate change salience, our analysis draws upon the total number of link shares and reshares on Meta's Facebook Platform (e.g., shares and reshares of news articles) in the United States from August 2019 to December 2020 (containing billions of aggregated and de-identified shares and reshares). This is a novel and robust approach compared to explicit measures of salience (e.g., surveys) which can lead to biases such as social desirability. For example, participants may report high levels of climate change concern because it is perceived as socially favorable, but their actual level of engagement is low. Instead, our approach "passively" measures actual large-scale behavior across a diverse, near population-level sample. Materials and methods We downloaded the data through the Data for Good program at Meta. 
The data is available to academics and nonprofits through the Meta Data for Good Data License Agreement (for contact information see: https://dataforgood.fb.com/tools/climate-conversation-maps/). To create the data, Meta pulled the daily volume of link shares and reshares in 21 languages. The 21 languages were selected with the consideration of both the ground population and the number of active Meta users using them on Meta's Facebook platform. The languages were English, Spanish, French, Arabic, Portuguese, Hindi, Russian, Japanese, Filipino, Vietnamese, German, Turkish, Burmese, Korean, Italian, Thai, Indonesian, Bengali, Romanian, Chinese, and Polish. We subsequently decided to use all available languages in the dataset given the diversity of the United States. Also, though the data in the entire repository is global, we focused on the United States given the large percentage of Facebook platform users in the country and the intensity of the pandemic in the region. The subset of shares and reshares of links that contained in their title or blurb the keywords "climate change" or "global warming" in those languages were flagged as "climate (re)shares". The volume of climate link shares and reshares were aggregated on the Global Administrative Areas Database (GADM; http://www.gadm.org/) level 1 admin polygons (US state-equivalent) based on the predicted home location of the (re)sharers provided by Meta. In addition to volume, we also calculated the percentage of climate link shares and reshares relative to total link shares and reshares. The repository dates to August 2019 and updates daily. For more information including the methodology Meta uses to make the data available for research, please refer to the dataset page from Data for Good at Meta (https://dataforgood.fb.com/tools/climate-conversation-maps/). The code used to analyze the data is freely available upon request and the research was approved by Meta internal review (for an overview of the composition and remit of research review at Meta see: https://research.fb.com/blog/2016/06/research-review-at-facebook/). Several steps were also taken for privacy preservation when Meta created the dataset. First, only the shares and reshares of links-not original posts-were counted in the data as climate (re)shares. Second polygons having less than ten unique users (re)sharing climate change content were filtered out. Third, only aggregated and anonymized link shares and reshares are available in the dataset (i.e., Meta removed individual-level, personal data to avoid any threat to personal privacy). Results and discussion We evaluated changes in the salience of climate change in the United States through a "Climate Social Media Salience Index" (Climate SMSI). The Climate SMSI is constructed as the ratio between all Facebook link shares and reshares containing "climate change" or "global warming" relatively to the total number of all Facebook link shares and reshares made in a certain region in a day. Fig 1, shown below, plots the Climate SMSI for all US macro-regions over the time frame provided by the extracted dataset. Shown in gray is the cumulative number of COVID-19 cases in the United States as reported by The New York Times [9]. The plot shows a clear shift in link shares and reshares related to climate change as the pressure of COVID-19 increased during March 2020. The shift was staggering. 
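For concreteness, the construction of the index behind Fig 1 can be sketched with a few lines of pandas. The input file and column names below are hypothetical stand-ins, since the underlying tables are only available under the Data for Good license agreement; the sketch only illustrates the keyword flagging, the region/day aggregation, and the ratio that defines the Climate SMSI.

```python
import pandas as pd

# Hypothetical link-level table: one row per link (re)share with the region of
# the (re)sharer, the share date, and the link title/blurb.
df = pd.read_csv("link_shares.csv", parse_dates=["date"])

# Flag climate (re)shares by keyword match in title or blurb (the keyword
# variants for the other 20 languages would be listed here as well).
pattern = "climate change|global warming"
df["is_climate"] = (
    df["title"].str.contains(pattern, case=False, na=False)
    | df["blurb"].str.contains(pattern, case=False, na=False)
)

# Aggregate to region (GADM level-1 polygon) and day.
daily = (
    df.groupby(["region", "date"])
      .agg(climate_shares=("is_climate", "sum"),
           total_shares=("is_climate", "size"))
      .reset_index()
)
# A privacy threshold analogous to the ten-unique-user filter would be applied here.

# Climate SMSI: climate (re)shares as a percentage of all (re)shares.
daily["climate_smsi"] = 100.0 * daily["climate_shares"] / daily["total_shares"]
```

Fig 1 then simply plots climate_smsi per macro-region over time.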
If we look at August 1, 2019 to March 23rd, 2020, we saw a median of 0.25% of all daily Facebook link shares and reshares by US users referencing climate change. The median during the period March 24th to August 8 was only 0.05%-an 80% drop in link shares and reshares related to climate change. The data also shows uniform shifts across US regions: from 0.37% to 0.07% in the Western United States, 0.26% to 0.06% in the Northeast, 0.23% to 0.04% in the North Central Region, and 0.21% to 0.04% in the South. Interestingly, there is also a second spike in climate link shares and reshares during the autumn of 2020. This is likely the result of a record-setting wildfire season as well as extreme heat waves and droughts-particularly in the Western United States-increasing the salience of climate change [10]. Finally, though all regions experienced a second climate salience spike, it is interesting to note that it was largest in the West (i.e., the light green line in Fig 1). This is perhaps due to the region's proximity to the threat. We also compared our relative Climate SMSI score (i.e., climate change link shares and reshares over all link shares and reshares) to an absolute measure of just the number of climate change link shares and reshares on Facebook in the United States during our time frame of interest. We conducted this analysis to explore whether climate change salience actually decreased or if attention to climate change remained the same and the overall number of Facebook link shares and reshares simply increased due to a content boost from the pandemic. As seen in Fig 2, the absolute measure of climate change salience is similar to the Climate SMSI score. In both measures, there is a clear decrease in climate change link shares and reshares during the outbreak of the pandemic and a rebound in the autumn of 2020. Thus, our findings are an important step for understanding large-scale shifts in relative and absolute climate change salience. For example, this study captured the tectonic decrease in large-scale climate change salience during the COVID-19 outbreak which may otherwise go overlooked using traditional survey methods (e.g., the social desirability problem mentioned above where respondents report climate concern but engage very little with the topic). The time series data showed a clear decrease in Facebook link shares and reshares related to climate change as the pressure of COVID-19 increased during March 2020. Again, the shift was staggering in both absolute climate change link shares and reshares as well as climate change link shares and reshares relative to all content shared and reshared (i.e., the Climate SMSI). Collectively, these measures suggest climate change salience can dramatically increase and decrease as global threats ripple through society. This salience shifting implies that climate change (as a standalone social topic) is competing with other issues vying for large-scale attention. Scholars therefore need to consider how perturbations such as pandemics, civil unrest, and financial crises interact and interfere with large-scale climate change salience. Developing models of green behavior and transitions, for instance, will suffer from decreased accuracy without such considerations. A better understanding of large-scale climate change salience also has practical implications. Practitioners may benefit from aligning multiple global challenges such as COVID-19 and climate change to avoid issue competition [11]. 
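The headline figures above can be reproduced directly from such a daily index; the snippet below assumes the hypothetical `daily` table from the previous sketch, aggregated nationally, and is only meant to show how the before/after medians and the ~80% drop are obtained.

```python
import pandas as pd

# Reproduce the before/after medians quoted above from the (hypothetical)
# daily table built in the previous sketch.
national = (
    daily.groupby("date")[["climate_shares", "total_shares"]].sum().reset_index()
)
national["climate_smsi"] = 100.0 * national["climate_shares"] / national["total_shares"]

start, cutoff, end = map(pd.Timestamp, ("2019-08-01", "2020-03-24", "2020-08-08"))
before = national.loc[(national["date"] >= start) & (national["date"] < cutoff),
                      "climate_smsi"].median()
after = national.loc[(national["date"] >= cutoff) & (national["date"] <= end),
                     "climate_smsi"].median()

print(f"median before: {before:.2f}%, after: {after:.2f}%, "
      f"drop: {100 * (1 - after / before):.0f}%")
```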
As for future work, researchers should start exploring causal mechanisms and the complex feedback loop between media producers and consumers. For example, media outlets may produce less climate content to consume as large-scale attention shifts towards an alternative threat such as a pandemic and vice versa (i.e., consumers share less climate content, producers provide less content to share, or both). Also, our data focused on the social salience of climate change over time. Though this provides valuable information regarding broad engagement with climate change-noting that greater engagement can lead to greater acceptance of climate science [12]-researchers will want to investigate the valence (i.e., positive and negative orientations) of large-scale salience. Looking at salience says something about the trend of attention in society while valence relates to nuanced social factors such as voting behavior. In short, looking at salience and valence respectively asks if something is grabbing society's attention and in what direction. Conclusion The aim of the present study was to investigate possible shifts in large-scale climate change salience during the spread of COVID-19. The primary question was whether populations stay focused on climate change when competing global threats emerge. Our results suggest that climate change salience can indeed fluctuate when there are multiple mass communication topics on the public agenda. Investigating the dynamics of large-scale salience is therefore an important step for ensuring climate change is not lost in a sea of global threats. If society is continually distracted with seemingly independent shocks from pandemics, civil unrest, and financial crises, then efforts to introduce lasting climate action will suffer. Instead, scholars and practitioners must communicate the interconnectedness between these problems to create a unified message of "systemic sustainability". Society needs to appreciate how diverse challenges like the loss of biodiversity, pandemics, hyper-urbanization, and extreme concentrations of wealth are connected. The outcome is consistent large-scale interest in climate change-rooted in systemic sustainability-rather than fluctuating and competing spikes in attention over time.
2021-09-14T10:09:22.682Z
2022-01-19T00:00:00.000
{ "year": 2022, "sha1": "99f810f8ee5d0c3bc297e8cc75c0f118c02fd0ca", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0256082&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4849b62316c3f33247cf95e287edaab93779b0e6", "s2fieldsofstudy": [ "Environmental Science", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
237615368
pes2o/s2orc
v3-fos-license
Ileal perforation as an initial manifestation of systemic lupus erythematosus: A case report Introduction and importance Lupus enteritis is uncommon in patients with SLE and usually presents with anorexia, vomiting, and abdominal pain. Intestinal perforation as an initial manifestation of SLE is rare and can have a grave prognosis if not timely diagnosed. Case history We report an unusual case of a 22-year-old regularly menstruating female who presented with features of perforation peritonitis as an initial manifestation of lupus enteritis. Intraoperatively, a gangrenous ileal segment with multiple perforations was present. Thus, with an intraoperative diagnosis of perforation peritonitis, a gangrenous segment of the small bowel was resected and a double-barrel jejuno-ileostomy was created. Discussion Lupus enteritis manifesting initially as bowel perforation can be an uncommon cause of acute abdomen. A plain chest X-ray can show gas under the diaphragm suggesting bowel perforation. A contrast-enhanced CT scan of the abdomen is the gold standard in diagnosing lupus enteritis with a good prognosis on steroids. Conclusion Primary closure, resection, and anastomosis of small gut or diverting stoma are required for management of perforation. A high degree of clinical suspicion is required for early diagnosis thus preventing the grave prognosis of such an entity. Introduction Systemic lupus erythematosus (SLE) is a multisystem disorder mostly affecting women. Any part of the gastrointestinal tract can be affected with varying manifestations and more than 50% of SLE patients can have gastrointestinal symptoms in their disease course [1]. Most common symptoms include nausea, vomiting, and anorexia while symptoms like abdominal pain, diarrhea, and abdominal distension can be a manifestation of serious underlying GI involvement, infections, and/or treatment complications [2]. Lupus enteritis occurs in only about 0.2% to 5.8% of patients with SLE and is the vasculitis or inflammation of the small bowel [3]. Though Lupus enteritis has an excellent prognosis, however, intestinal necrosis leading to intestinal perforation can develop if not managed timely which is potentially fatal [1,4]. Here, we report a case of SLE presenting initially with ileal perforation managed by exploratory laparotomy and resection of a perforated portion. This case has been reported in line with SCARE criteria [5]. Case presentation A 22-year P 2+1 L 1 Mongolian female regularly menstruating, nonalcoholic and non-smoker without any prior surgical history or any family history of malignancy presented to our center with complaints of continuous mild non-radiating pain over the periumbilical region for 12 days with associated symptoms such as nausea, vomiting, fever and abdominal distension for the last 2 days. She denied constipation or obstipation, decreased appetite, cough, hematochezia/melena, significant weight loss, and trauma to the abdomen. The patient had a normal bowel and bladder habit and no active tuberculosis. On examination, she was ill-looking with a blood pressure (BP) of 100/60 mm Hg, pulse rate of 100 bpm, respiratory rate of 24breaths per minute, and oxygen saturation of 94% in the room air. On per abdomen examination, the whole of the abdomen was distended with tender and generalized rigidity on palpation suggesting peritonitis. On per rectal examination, the rectum was filled with stool normal in contour with no blood. 
She was anemic (hemoglobin-8.2 g/dl, PCV-29 g %) with raised total leukocyte count (TC-14000/mm 3 ) and normal platelet count of 200,000/mm 3 . A plain chest X-ray revealed gas under the diaphragm. Thus, the diagnosis of bowel perforation, and peritonitis was made and proceeded for Emergency Laparotomy. The peritoneal cavity was accessed via a midline incision. Intraoperatively, a gangrenous segment of the small bowel was noted 120 cm distal to duodenojejunal flexure to 15 cm proximal from the ileocolic junction with 2 perforations of size 1.5 cm each on the on the antimesenteric border of gangrenous segment located approximately 45 and 50 cm proximal to ileocolic junction. There was approximately 500 ml purulent fluid in the peritoneal cavity along with inter-loop adhesions of ileum ( Figs. 1 and 2). Although the cause of perforation was noted to be bowel gangrene, the cause of ischemia leading to gangrene could not be established intraoperatively. There were no bands, adhesions, volvulus that could lead to segmental ischemia of bowel. Similarly, there were no tubercular deposits, bowel or mesenteric thickening, mesenteric lymphadenopathy that would suggest tuberculosis or serosal fat wrapping that would suggest Crohn's disease as a possible cause of perforation. The gangrenous segment of the small bowel was resected and double barrel jejuno-ileostomy was created because of the severe peritoneal contamination, edematous bowel, low albumin level (24 g/l) and threatened viability of the resected margins. One unit of whole blood was transfused intraoperatively. Otherwise, the operative procedure was uneventful and patient remained hemodynamically stable throughout the procedure. Postoperatively, she was clinically stable, the stoma was healthy and functioning well, and was tolerating a soft diet orally. However, on the 7th postoperative day, she suddenly developed bluish-blackish discoloration of fingertips and toes suggestive of the Raynaud phenomenon. The case was then evaluated by the rheumatology team. Thromboembolic etiologies were ruled out after normal findings on echocardiography, Doppler USG, and CT angiography. However, anti-ds DNA, hsCRP, anti-CCP antibody, and anticardiolipin antibody were positive suggesting SLE as the likely cause of vasculitis. She had also subsequently developed autoimmune hemolytic anemia evident by anemia, incompatibility on blood cross-matching, and positive Direct Coombs test. Later, histopathology also supported segmental small intestine gangrene with perforation due to thrombotic phenomenon. Following findings were noted on histopathology: Perforated sites showed inflammatory granulation tissue with plenty of acute and chronic inflammatory cells. Gangrenous areas had necrosis, congestion and dilated blood vessels. Occluded vessels with thrombus were noted on mesenteric sections. Although the cause of gangrene and perforation was not evident initially, the clinical course on the postoperative period was suggestive of SLE with lupus vasculitis as the most likely cause of segmental small bowel gangrene leading to perforation. Hence, the final diagnosis of active SLE with Ileal perforation due to lupus vasculitis was established. She was then started on steroids. She had gradual clinical improvement and was discharged on the 21st postoperative day with plans to start immunosuppressants on follow-up and ileostomy reversal after 2 months. Since the patient hailed from the remote region of Nepal, further follow-up was advised at the regional health facility. 
At discharge, she was clinically stable, tolerating a normal diet, and stoma output was controlled with daily wound care. Discussion The commonly found gastrointestinal disorders associated with SLE patients are protein-losing enteropathy, lupus mesenteric vasculitis, acute pancreatitis, intestinal pseudo-obstruction, inflammatory bowel disease, and celiac disease [6]. Lupus enteritis as the sole initial manifestation of active SLE is rarely found only in 0.2-5.8% of patients without previous diagnosis [7]. The cause behind lupus mesenteric vasculitis is the formation of immune complexes deposition in blood vessels by circulation autoantibodies leading to thrombosis and inflammation of vessels supplying the intestine. Thus, lack of blood supply to the area of the intestine can cause ulceration, infarction, and eventually perforation [8,9]. In our case, the patient was an unknown case of SLE diagnosed only after surgery and further evaluation. Lupus enteritis complicating to infarction and finally, intestinal perforation is the most probable reason for the abovementioned complaints. Non-specific symptoms like abdominal pain (97%), ascites (78%), nausea (49%), vomiting (42%), diarrhea (32%), and fever (20%) are common symptoms with lupus enteritis [10]. Similar vague symptoms also occurred in our patient. Lupus nephritis is an additional concern in patients with lupus enteritis present in about 65% of cases [10,11]. Signs of lupus nephritis were not present in our patient. Under laboratory investigation as reported in previous cases, positive ANA is seen in 100%, positive ds-DNA in 80%, low complement levels in 70%, and positive anti-Smith antibodies in 20% of cases. Lymphopenia, hypocomplementemia, and normal C reactive protein are some of the other findings [12]. In contrast, our patient had negative ANA and high levels of C reactive protein. In about 5% of cases, negative ANA can be found. Such patients have clinical manifestations like skin rashes, photosensitivity, Reynaud phenomenon, and serositis [13]. This finding is similar to our patient. A contrast-enhanced CT (CECT) scan of the abdomen is the gold standard technique for diagnosing lupus enteritis [10,11]. Submucosal edema of the jejunum and ileum, leading to classic findings of circumferential bowel wall thickening (target sign) and dilation of intestinal segments, and engorgement of mesenteric vessels (comb sign) are seen in lupus enteritis [14]. Since our patient presented with features of perforation peritonitis, she was rushed to emergency operation theatre as CECT abdomen is not done in cases suspected of perforation peritonitis. The management of ileal perforations includes primary closure, resection, and anastomosis of small gut or diverting stoma, depending on the site and number of perforations, severity of peritonitis, and condition of the patient [15]. Lupus enteritis can have multiple or singular forms of lesions so attention should be given during exploration [13]. In our patient also similar procedure was done and a double barrel ileostomy was made. SLE patients are more prone to the risk of surgical intervention compared with those without SLE [16]. So, proper attention is required for such patients. Lupus enteritis is found to have a good prognosis with steroids but complications like perforations can cause mortality up to 2.7% [10]. Takashi et al. found half of the patients (6 of 11 patients) with SLE had intestinal perforation as a complication and died [17]. 
Though the prognosis of SLE patients with intestinal perforation is poor but early diagnosis and surgical treatment are useful for the management of the disease [18]. Conclusion Intestinal perforation can be an uncommon presentation of mesenteric vasculitis in a patient with SLE which can have a grave prognosis if there is a delay in diagnosis and management. Hence a high clinical suspicion can aid in early diagnosis in patients with acute abdomen and associated clinical features of SLE. It rarely can be the initial manifestation of SLE. Consent for publication Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Provenance and peer review Not commissioned, externally peer-reviewed. Ethical approval Not required. Funding None. Research registration number Not applicable. All the authors read and approved the final manuscript. Declaration of competing interest None.
2021-09-25T06:17:00.131Z
2021-09-15T00:00:00.000
{ "year": 2021, "sha1": "82612a8dc3f3f978ec75786fa5b1104c80037a22", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2021.106409", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0601ee1c43903180ef9ed26af90f968ecdc437d9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
227182126
pes2o/s2orc
v3-fos-license
Towards better heartbeat segmentation with deep learning classification The confidence of medical equipment is intimately related to false alarms. The more false events occur, the less trustworthy the equipment is. In this sense, reducing (or suppressing) false positive alarms is highly desirable. In this work, we propose a feasible, real-time approach that works as a validation method for a third-party heartbeat segmentation algorithm. The approach is based on convolutional neural networks (CNNs), which may be embedded in dedicated hardware. Our proposal aims to detect the pattern of a single heartbeat and classify each segment into two classes: a heartbeat and not a heartbeat. For this, a seven-layer convolutional network is employed for both data representation and classification. We evaluate our approach on the raw heartbeat signal of two well-established databases in the literature. The first is a conventional on-the-person database called MIT-BIH, and the second is a less controlled, off-the-person database known as CYBHi. To evaluate the feasibility and the performance of the proposed approach, we use as a baseline the Pan-Tompkins algorithm, which is a well-known method in the literature and still used in industry. We compare the baseline against the proposed approach: a CNN model validating the heartbeats detected by a third-party algorithm. In this work, the third-party algorithm is the same as the baseline for comparison purposes. The results support the feasibility of our approach, showing that our method can enhance the positive prediction of the Pan-Tompkins algorithm from 97.84%/90.28% to 100.00%/96.77% while slightly decreasing the sensitivity from 95.79%/96.95% to 92.98%/
95.71% on the MIT-BIH/CYBHi databases. Arrhythmia detection is a straightforward application of the ECG signal. Therefore it relies heavily on the quality of the signal and also on the QRS detection algorithm (segmentation). Applications based on the ECG signal are commonly divided into four stages: pre-processing (filtering), ECG signal segmentation (QRS complex detection), signal representation using pattern recognition techniques, and classification algorithms. A failure in the segmentation stage propagates the error to the subsequent stages and directly affects the classification efficiency. Furthermore, the correct segmentation of the ECG signal and the identification of fiducial points are of paramount importance to reduce false alarms. However, many works in the literature 15,16 focus on reducing false alarms in the classification stage, neglecting the error propagated by false alarms in the segmentation stage. Thus the motivation of this work arises: to reduce false alarms in the segmentation stage by using state-of-the-art pattern recognition techniques (a.k.a. deep learning). It is important to note that convolutional neural networks (CNNs) have been applied to classify electrocardiogram (ECG) heartbeats in the diagnosis of arrhythmia [17][18][19], a subject underlying the scope of this work. Several authors have worked on this problem (reducing false alarms during the segmentation stage), and one promising approach is signal quality assessment, such as in Behar et al. 15. The authors used machine learning to decide whether the signal is of good or bad quality. According to Behar et al. 15, the ECG signal is manually annotated into two classes (good quality signal and bad quality signal), down-sampled to 125 Hz, and seven quality indexes are computed and used as a feature vector to train a support vector machine (SVM) classifier. The experiments were conducted on three databases: the Physionet database from the Computing in Cardiology (CinC) Challenge 2011 20, the MIT-BIH arrhythmia database 21, and the MIMIC II 22 database. Behar et al. 15 reported improvements in reducing false alarms for ectopic beats, tachycardia rhythms, atrial fibrillation, and sinus rhythm. Other authors employed a multi-modal approach, as in 16, in which multiple ECG leads were used along with the invasive blood pressure wave. Quality indices, which resemble handcrafted feature extraction, were used in conjunction with a Kalman filtering algorithm. The method was evaluated with the MIMIC II 22 database, and external noises were artificially added to the signals. Also, in 23, a multi-modal approach is used to combine multiple ECG leads with pulse-oximeter (PPG) and arterial blood pressure (ABP) curves. A peak detection algorithm was proposed for each type of curve and improved by a quality assessment method. According to the authors, the results showed a robust peak detection algorithm. The approach was evaluated on the Physionet Challenge 2015 database 12. In this work, a different approach is proposed based on deep learning techniques. The approach consists of a deep learning model validating the QRS complex patterns detected by a third-party algorithm.
Rather than relying on signal quality or the noise associated with it, we detect the ECG wave pattern, i.e., we detect (or validate) a heartbeat only by its shape. One advantage of this approach is to benefit from hardware accelerators for deep learning. Nowadays, there are many off-the-shelf deep learning accelerators, which means easy and effective integration with real equipment. Besides that, the proposed approach could be constantly improved by means of online learning. As the third-party algorithm, we select the well-known Pan-Tompkins algorithm 24 , since it is prevalent both in industry and academy. Moreover, it does not require significant computing resources. In summary, the main contributions of this work are: • An efficient method for heartbeat pattern classification that operates in real-time to improve heartbeat segmentation. • A CNN architecture for heartbeat classification. • A proposal of a cyber-physical embedded system for heartbeat segmentation. This work extends the one presented in the 23rd Iberoamerican Congress on Pattern Recognition (CIARP 2018) 25 as follows: • It presents an improved methodology, in particular, regarding the criterion for the selection of negative samples for training the deep learning model. 24 . • It adds a proposal to employ our approach in an embedded system context. The obtained results show the effectiveness of the proposed approach to improve the QRS detection algorithms. Our approach enhances the Pan-Tompkins algorithm 24 positive prediction from 97.84 to 100.00% in the MIT-BIH database and 91.81% to 96.36% in CYBHi. Though, there is a trade-off regarding sensitivity, and once there is a reduction from 95.79 to 92.98% in the MIT-BIH database and 95.86% to 95.43% in CYBHi. In that sense, the proposed approach is feasible for real applications, since it allows the reduction of the false positive rate. The computational cost for the CNN inference has become increasingly attractive, since it is possible to embed the model in dedicated hardware, such as the Nvidia Jetson TX2 (available on https ://devel oper. nvidi a.com/embed ded/jetso n-tx2) and Field Programmable Gate Array (FPGA) 26 , for instance. This scenario facilitates the process of including this approach in Cyber-Physical/embedded systems, which is the case of medical equipment 27 . Methods In this section, we present the methodology used to train CNN for ECG heartbeat recognition. Our method aims to validate the response of a well-known QRS complex detector from the literature. One may treat the QRS complex detector as an R-peak detection or heartbeat detection. The proposed approach is seen in Fig. 2 and can be divided into six main steps: (1) database split, (2) preprocessing, (3) train CNN, (4) R-peak detector, (5) validation of the R-peaks detected, and (6) evaluation. The database split is the process of separating it into train and test subsets. The pre-processing depends on the nature of the data and consists of dividing the original signal into several segments and apply data augmentation techniques. Step 3 is conducted using the training database to train a CNN. The R-peak detector consists of using some algorithm to detect the R-peak. The validation is given by authenticating whether the signal is a heartbeat (QRS complex) or not. In the last step (step 6), we report the metrics used to compare the algorithms. Database split and pre-processing. This step aims to divide the database into training and testing partitions. 
The first one is used exclusively to train a CNN and the latter for the testing phase. This process is necessary to avoid over-fitting and an overestimation of the proposed approach. The pre-processing stage includes several steps and an adjustment of the input data size. Furthermore, since CNN requires a specific input size, all the segments must have a specific shape and a fixed sample window in time. Then, the input has been standardized to have 300 samples, in a 360 Hz sample frequency signal, resulting in 833 ms length. As a result, for a database sampled in 1MHz, the correspondent samples in 833 ms (833 samples) must be reshaped to 300 samples. Thus any 833 ms/300 samples-length segments are feed-forward into the network without any specific filter pre-processing. Data augmentation. This step also includes the application of data augmentation techniques for positive and negative samples. CNN benefits from this technique once it increases the amount of data and helps 25 , to construct the positive samples, simple data augmentation is applied by considering the centralized R-peak and heartbeat signal shifted by exactly ±5 samples. For the negative samples, the heartbeat is shifted by exactly ±30 , ±50 , ±80 , and ±120 . A similar scenario presented in 25 is considered: binary classification between segments with a heartbeat (positive samples) and without (negative samples). However, in this work, a different data augmentation approach is used to feed the deep learning model and this model applied with a different purpose. For the positive samples we use: 1. Centralized R-peak. 2. Shifted R-peak by ± 5 samples. 3. Shifted R-peak by ± 10 samples. 4. Shifted R-peak by ± 15 samples. 5. Centralized R-peak with P-wave (375 ms before the R-peak) attenuated by 30%. 6. Centralized R-peak with T-wave (375 ms after the R-peak) attenuated by 30%. 7. Centralized R-peak with a reduction of 20% over the entire segment. 8. Centralized R-peak with a reduction of 40% over the entire segment. For the negative samples, all data between two R-peaks have used: 50 samples after the first R-peak and 50 samples before the second R-peak. This range marks the beginning and ending points of the sliding window, which is shifted by five-step stride (there is an overlapping among the samples within the two R-peaks). Figure 3 illustrates the data augmentation applied to the positive samples (sliding window, and wave manipulation) and Fig. 4 illustrates the construction of the negative ones. Since the QRS complex is the wave with the greatest amplitude within a heartbeat, it is less susceptible to noise. In contrast, the T and P waves have smaller amplitudes and usually a longer period of time and thus are more affected by all sources of noise. Thus, we propose a data augmentation attenuating the T and P waves, in order to force the model to be more immune to changes in the patterns of these waves. R-peak detector. In this step, a third-party algorithm is used to detect the R-peak along with the segment. Essentially, the QRS-detector method in this stage should be fast and have low computational power consumption. This stage is an essential step for our approach, once the amount of R-peak segments detected impacts on the time required by our approach to finish the process. For each R-peak detected, the CNN trained is used to infer if it is a real heartbeat or not. www.nature.com/scientificreports/ The process starts with the ECG signal as the input for the R-peak detector. 
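Before moving on with the detector, the augmentation recipe listed above can be summarised in code. The sketch below assumes a 360 Hz record held in a 1-D array `ecg` with annotated R-peak indices `r_peaks`; these names, and the exact reading of the sliding-window bounds for the negative class, are our own and only illustrate the procedure.

```python
import numpy as np

# Sketch of the window generation described above for 360 Hz signals and
# 300-sample (833 ms) windows. `ecg` (1-D array) and `r_peaks` (annotated
# R-peak indices) stand in for an MIT-BIH record.

FS, WIN = 360, 300
HALF = WIN // 2
QTR = int(0.375 * FS)          # 375 ms = 135 samples before/after the R-peak

def window(ecg, center):
    seg = ecg[center - HALF:center + HALF]
    return seg.copy() if center >= HALF and len(seg) == WIN else None

def positive_windows(ecg, r_peaks):
    for r in r_peaks:
        for shift in (0, 5, -5, 10, -10, 15, -15):    # centred + shifted copies
            seg = window(ecg, r + shift)
            if seg is not None:
                yield seg
        seg = window(ecg, r)
        if seg is None:
            continue
        p_att = seg.copy(); p_att[:QTR] *= 0.7        # P wave attenuated by 30%
        t_att = seg.copy(); t_att[-QTR:] *= 0.7       # T wave attenuated by 30%
        yield p_att
        yield t_att
        yield 0.8 * seg                               # whole segment reduced by 20%
        yield 0.6 * seg                               # whole segment reduced by 40%

def negative_windows(ecg, r_peaks, stride=5, guard=50):
    # slide between consecutive R-peaks, staying `guard` samples away from both
    for r0, r1 in zip(r_peaks[:-1], r_peaks[1:]):
        for center in range(r0 + guard, r1 - guard, stride):
            seg = window(ecg, center)
            if seg is not None:
                yield seg
```

The windows produced this way feed the binary classifier described in the following subsections.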
Then, this ECG signal is processed by the algorithm. At this point, each R-peak detector may apply a specific pre-processing that best fits its needs. The response of the method is the sample with an R-peak location. Some methods, such as the Pan-Tompkins 24 , may return a delay, which gives a range in where each R-peak may be. In 24 , the authors designed the method using integer arithmetic, aiming a reduction in the computing consumption power to be as lowest as possible. A digital band-pass filter is applied by composing high and low-pass filters to reduce the impact of the noise over the signal, followed by one differentiation step and further a squaring step to intensify the slope and reduce the false-positives caused by the T waves. To detect the R-peak, Pan and Tompkins 24 applied a sliding window along with an adaptive threshold, which results in an efficient and robust approach to discard noises. Therefore, it reduces the false-positive samples. To reduce the false-negative samples, those authors used a scheme with a dual-threshold, in which one is twice smaller than the other, and both have a continuous adaptation according to the current signal state. Pan and Tompkins 24 outlined a strategy based on the periodicity of the R-peaks on an ECG record. For the case in which an R-peak is not found within 166% of the current average interval, the maximal point in this interval, which lies between two thresholds, is considered as an R-peak, and as a consequence, a heartbeat or QRS complex. The authors highlight that this technique is only feasible for individuals with a regular heartbeat (without arrhythmia). For arrhythmic individuals, the authors proposed a reduction of both thresholds by half, to raise the sensitivity of R-Peak detection. The authors also added two essential constraints regarding R-peak detection: (1) the next R-peak must occur at least 200 ms at a physiological point of view, and (2) an R-peak detection approach needs to adapt parameters to each patient continuously. CNN training. The CNN model/architecture used here is the same used in our preliminary work 25 . It is composed of four convolutional layers, two fully-connected layers, a dropout layer to reduce over-fitting, and a final fully-connected layer with two neurons for binary classification: (1) this segment has an R-peak centered in the segment, and (2) segment without R-peak centered. Figure 5 shows such CNN architecture. Different from our previous work 25 , in this work, the deep learning model is used as a second judge for a wellknown R-peak detector algorithm. The present approach aims to enhance the result from the R-peak detection algorithm, aligned with state-of-the-art trends 28 . Figure 5. CNN used to validate the R-peaks 25 in which the convolution layers conv1, conv2, conv3 and conv4 use filters size equal to 1x49, 1x25, 1x9 and 1x9, respectively, and stride equal to one. All pooling layers (pool1, pool2, pool3 and pool4) uses max operation with filter size and stride equal to two. The padding is equal to zero for all convolutional and pooling layers. Scientific Reports | (2020) 10:20701 | https://doi.org/10.1038/s41598-020-77745-0 www.nature.com/scientificreports/ Beforehand, to train CNN, a set of data is separated and labeled, usually by a human expert. This data is then used to generate positive and negative samples. Those samples are used to train the CNN as a simple binary classification problem: the output is a heartbeat or no heartbeat. Validation of R-peaks detected. 
In this step, any algorithm presented in the literature which aims an R-peak detection can be used. However, this step needs three inputs: an ECG signal, the R-peaks location detected by an R-peak detector algorithm, and a machine learning model. The output of this step is a set of all R-peaks locations in which the machine learning model agrees with the R-peak detector. In this step, an 833-ms window centered in each R-peak detected is feed-forwarded through the CNN. The CNN confirms whether it is an R-peak in the center of the segment or not. Evaluation. To evaluate one database, a set of data is reserved as a testing partition. With the R-peaks validated by the machine learning model, the metrics used to compare the approaches are calculated. We compare the right and wrong detections of both approaches: the third-party algorithm by itself and the proposed approach, with the CNN as a validator. A correct heartbeat detection is considered when an R-peak is within the center of a segment with a tolerance of the shifts used in data augmentation described. A wrong detection occurs when an R-peak is not in this range. Results In this section, the experiments are described in detail. Also, the results reached with the proposed approach are presented as well as the discussion. Experiment details. Database. To report the results presented in this section, we used two databases to train a CNN model: CYBHi 11 (off-the-person) and MIT-BIH 29 (on-the-person). To conduct fair experiments, we split both databases into two sets without patient intersection. As the MIT-BIH database has a group of signals with healthy individuals and another group with individuals who have cardiac problems (arrhythmia), we decided to use only the first group to avoid impacts on the R-peak detectors algorithms and, therefore, in the final metrics reported. The healthy group has a total of 23 records, and each record received a numerical identification on the dataset. The records are: 100, 102, 104, 106, 108, 112, 114, 116, 118, 122, 124, 101, 103, 105, 107, 109, 111, 113, 115, 117, 119, 121, 123. All heartbeats (approximately 110,000) available with the MIT-BIH database have its R-peak location annotated by two cardiologists in a separate manner and disagreements were resolved by a third person. The annotations are available on the Physionet website. As CNN model benefits of more data, we decided to use the odd records to train and even for testing. We stress that each record belongs to a single subject and that there is no overlap of subjects on both training and test sets. The CYBHi has more registers when compared to the MIT-BIH database with 126 records. As the database is captured with an off-the-person device, it suffers more with noise. The data acquisition is made using two differential lead electrodes at hand palms and fingers, as shown in Fig. 6. We have discarded 12 records from the CYBHi because our specialists weren't able to detect the heartbeats due to excess of noise. Therefore, we have no ground truth to train the model. Figure 7 presents a 10-s-segment which should have approximately 10 R-peaks, however, it is hard to detect them and, subsequently, label them. The remaining 114 records are used for training and testing. For the construction of the CYBHi database, the data acquisition happens in two distinct 2-min sessions with 63 subjects, into the range of 3 months. For each one of 63 subjects, two sessions were acquired in two different setups: Short-term signals and Longterm signals. 
Only the latter is used in this work since it is a more challenging scenario 30 . The CYBHi database's authors did not provide R-peak location annotations; thus, this annotation was made by the researchers of this work and will be provided along with the source code. According to Luz and Menotti 31 , the data from a patient must be either in the training set or in the test set, not both. For this reason, we ignored the natural division of the CYBHi database and used both sessions of an individual either only to train the CNN or only to test it. We selected half of the subjects for training and half for the test set, randomly, and for reproducibility the records are made available at https://github.com/ufopcsilab/qrs-better-heartbeat-segmentation.

Resulting data augmentation. Table 1 presents the total number of training samples with and without data augmentation (see details on how data augmentation is performed in the "Methods" section). As one may see, the number of negative samples (No R-peak) is the same whether or not data augmentation is used. The main difference is the number of positive samples (R-peak), which makes it possible to train a CNN model. To train the models, we allocate 70% of a record's data (data of one individual) to the training partition and the remaining 30% to the validation partition, which is used only for network optimization. Thus, of the total number of samples presented in Table 1, 70% is used for training and 30% for validation. The list of records selected for the training (train/validation) and test partitions, for both databases, is available at https://github.com/ufopcsilab/qrs-better-heartbeat-segmentation.

R-peak detector. For this work, we evaluate our method with an implementation of the Pan-Tompkins 24 algorithm as the third-party R-peak detector. We used the MATLAB implementation available in 32 to run our experiments.

CNN training. As the CNN input size, we use 833 ms, which corresponds to 300 samples for the MIT-BIH database and 833 samples for CYBHi; both represent 833 ms of the record. Since the CNN input size is fixed, it is necessary to down-sample the CYBHi signal in order to keep the same network architecture; we use polynomial interpolation to perform the down-sampling. The same offsets used for data augmentation described in the "Methods" section are used for both databases (MIT-BIH and CYBHi). The CNN is trained for 30 epochs with a learning rate equal to 0.01 for the first three epochs, followed by 0.005 for seven more epochs, 0.001 for another 10 epochs and, finally, 0.0001 for the remaining 10 epochs. Stochastic gradient descent with momentum (0.9) is used for network weight optimization, with softmax as the activation of the output layer and binary cross-entropy as the cost function. Figure 8 shows the training and validation error over the 30 training epochs on the CYBHi database. As one can see, the training and validation error drops fast in the early epochs and stabilizes after five to ten epochs. This behaviour is expected since, in this training phase, we have data from the same patient (individual) both in the 70% of the data reserved for training and in the 30% reserved for validation.
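A minimal sketch of this architecture and training schedule, written with the Keras API, is shown below. The kernel sizes (49, 25, 9, 9), the pooling with size and stride of two, the zero padding, the 30-epoch step-wise learning-rate schedule and the SGD momentum of 0.9 follow the description above; the number of filters per layer, the widths of the fully-connected layers and the dropout rate are not given in the text and are assumptions chosen only for illustration, as is the use of a two-class cross-entropy to realise the binary cost.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_validator_cnn(input_len=300):
    """Four conv blocks, two fully-connected layers, dropout and a 2-class softmax output.

    Filter counts, dense widths and dropout rate are illustrative assumptions.
    """
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),
        layers.Conv1D(16, kernel_size=49, strides=1, padding="valid", activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),
        layers.Conv1D(32, kernel_size=25, strides=1, padding="valid", activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),
        layers.Conv1D(64, kernel_size=9, strides=1, padding="valid", activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),
        layers.Conv1D(64, kernel_size=9, strides=1, padding="valid", activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        # Two-class softmax output: (no R-peak, R-peak); cross-entropy stands in for the binary cost.
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.SGD(learning_rate=0.01, momentum=0.9),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

def lr_schedule(epoch, lr):
    """Step-wise schedule: 0.01 (epochs 0-2), 0.005 (3-9), 0.001 (10-19), 0.0001 (20-29)."""
    if epoch < 3:
        return 0.01
    if epoch < 10:
        return 0.005
    if epoch < 20:
        return 0.001
    return 0.0001

# Example usage (x_train: windows of shape (n, 300, 1); y_train: one-hot labels of shape (n, 2)):
# model = build_validator_cnn()
# model.fit(x_train, y_train, epochs=30, validation_split=0.3,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```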
Validation of R-peaks detected. The third-party algorithm Pan-Tompkins 24 , used as the R-peak detector, returns two responses: (1) the R-peaks detected, and (2) a delay. The R-peaks mark the centers of the detected beats, while the delay defines a window in which the R-peaks may be located. Those detections may have some missing R-peaks, or even wrongly detected R-peaks. Those wrong R-peaks could be harmful in real applications and should be discarded. The validation, with a pattern recognition model, is a workaround for this issue. A detection is validated when the Pan-Tompkins algorithm finds an R-peak and the CNN model, fed with the corresponding segment, agrees that it is a heartbeat.

The metrics used to compare the approaches are the sensitivity (Se) and the positive predictivity (+P) 33 . We also report the F-Score as the harmonic average of Se and +P. We treat this problem as a binary classification, in which a detected R-peak is the positive class, while a segment without an R-peak is the negative class. Based on this, a True-Positive (TP) is a correctly detected segment, a False-Positive (FP) is an erroneous segment detected as an R-peak, and a False-Negative (FN) is a correct R-peak segment that is falsely discarded. The measures are defined as Se = TP/(TP + FN), +P = TP/(TP + FP) and F-Score = 2 x (Se x +P)/(Se + +P). It is worthwhile to note that our proposal can only improve +P, along with a small degradation of Se. In Table 2, the results are presented for both databases with the metrics already described. We compare the standard R-peak detector algorithm against our proposed methodology.

Analysis. The results presented in Table 2 show the gain in positive predictivity from the CNN as a validator. Nevertheless, a reduction in the sensitivity metric is perceived; however, the F-Score remains equivalent over the two databases. The figures representing our analysis are highlighted in bold in Table 2. Our approach enhances the Pan-Tompkins algorithm's positive predictivity from 97.84 to 100.00% in the MIT-BIH database and from 90.28 to 96.77% in CYBHi, although a reduction in sensitivity is observed in both databases, in which the Pan-Tompkins approach reaches 95.79% and 96.95% and our approach 92.98% and 95.71% for MIT-BIH and CYBHi, respectively. A reduction in the F-Score occurs in MIT-BIH, from 0.97 (Pan-Tompkins) to 0.96, while in CYBHi the opposite occurs, as the F-Score improves from 0.93 to 0.96. In Fig. 9, we show examples to illustrate the effects of our proposal. The heartbeats in Fig. 9a,b are samples from the MIT-BIH and CYBHi databases, respectively, that were wrongly classified as FPs by the baseline approach and are now correctly rejected as TNs. Conversely, in Fig. 9c,d, we show samples from the MIT-BIH and CYBHi databases that were classified as true heartbeats (TPs) by the baseline method and changed to non-heartbeat (FN) by our approach. By increasing the positive predictivity (diminishing the FP rate), the beneficial effect promoted by our proposal is to provide reliable samples for further analysis, such as arrhythmia classification. In contrast, by diminishing the sensitivity of the heartbeat segmentation (increasing the FN rate), our approach may exclude true samples, which can be prohibitive in some applications. Such a trade-off should be adjusted according to the application.
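The agreement rule and the figures of merit described above reduce to a few lines of code. This is a generic sketch, not code from the study: the `model` object and its `predict` interface are placeholders for whatever binary classifier has been trained, and the tolerance-based matching against the reference annotations is simplified.

```python
import numpy as np

def validate_and_score(ecg, detected, reference, model, fs, window_ms=833, tol=5):
    """Keep detector peaks that the classifier confirms, then score against reference annotations.

    ecg       : 1-D ECG signal
    detected  : sample indices from the third-party R-peak detector
    reference : ground-truth R-peak indices
    model     : trained classifier with a predict(batch) method returning class probabilities (placeholder)
    tol       : matching tolerance in samples (simplified; the paper uses the augmentation shifts)
    """
    reference = np.asarray(reference)
    half = int(round(window_ms / 1000 * fs / 2))
    confirmed = []
    for peak in detected:
        if peak - half < 0 or peak + half > len(ecg):
            continue                                   # skip peaks too close to the record borders
        segment = ecg[peak - half:peak + half]
        probs = model.predict(segment[np.newaxis, :, np.newaxis])[0]
        if np.argmax(probs) == 1:                      # class 1 assumed to mean "R-peak centered"
            confirmed.append(peak)

    matched = [p for p in confirmed if np.min(np.abs(reference - p)) <= tol]
    tp = len(matched)
    fp = len(confirmed) - tp
    fn = len(reference) - tp
    se = tp / (tp + fn)                                # sensitivity
    pp = tp / (tp + fp)                                # positive predictivity
    f_score = 2 * se * pp / (se + pp)
    return se, pp, f_score
```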
Our hypothesis for the sensitivity reduction is high-frequency noise altering the morphology of the signal. Our model relies on the morphology of the signal to determine whether the segment is a heartbeat or not; high-frequency noise alters the shape of the curve, especially the P and T waves, which are temporally wide (see Fig. 10). Pan-Tompkins is a peak detection algorithm and, unlike our approach, does not rely on the signal morphology. If the morphology changes due to high-frequency noise, it has a negative impact on our approach, which is not seen with the Pan-Tompkins approach. The CYBHi database signal morphology is changed by the noise, as seen in Fig. 10b, making heartbeat segmentation difficult. As the MIT-BIH database acquisition happened in a more controlled scenario, this problem is reduced, and the +P metric is higher than for the CYBHi database. As seen in Fig. 10a, it is possible to verify how abrupt the changes are in the signal of the same subject within the same record. A different perspective is presented in Fig. 10b, which shows the average variance of false-negative samples against the average of true-positive (correctly detected) samples of a specific subject. This small change in average variance impacts the final result, mainly in the sensitivity metric. Based on the results in Table 2, one may also infer that the CNN architecture used is capable of generalizing and learning for both databases; the outstanding results confirm this hypothesis.

The popularization of deep learning, especially CNNs, has led to a fast increase in the development of specific hardware for inference acceleration. Thus, deep learning methods are an attractive option to be embedded in real products. Once the deep learning model has passed the training stage, it can be used in inference mode (for production), which in our case means classifying a one-dimensional input sequence as a heartbeat or not. The trained model can be embedded in hardware, and the inference accelerated with the aid of special circuits based on FPGA or GPU 26 . Today, GPUs are still state-of-the-art in inference throughput 26 . In this work, we export our model to TensorFlow in order to allow compatibility with the NVIDIA Jetson TX, TX2, and Nano (see Fig. 11). The NVIDIA Jetson Nano board uses a 128-core Maxwell GPU and 4 GB of RAM and can run inference at more than 20 times the speed of the most common CPUs 34 . The medical equipment 35 can communicate with the board via USB bus, WiFi (TCP/IP), or even the RS-232 standard, which favors integration with real products. In order to evaluate the computational cost (time consumption) of the proposed CNN, we repeat the inference process 100 times and evaluate the average time consumed by the network running on a CPU (Intel i7, 8th generation), a GPU and an NVIDIA Jetson Nano. The total time consumed by the CPU is 3.291 s, with an average of 0.033 s, while on the GPU the total time is 1.001 s, with an average of 0.01 s. On the NVIDIA Jetson Nano, one observes a total of 3.339 s with an average of 0.033 s. The NVIDIA Jetson Nano thus has performance equivalent to an Intel i7 with higher power efficiency: approximately 10 times less energy is required 36 . Furthermore, the worst case between two R-peaks is an interval of at least 200 ms 24 , which is greater than the inference time required by the proposed CNN model (33 ms on average). Given those facts, the proposed approach is feasible for real-world use.
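The timing protocol above (100 repeated inferences, then the mean) can be reproduced along the following lines; the model path and input shape are placeholders, and the warm-up runs are a common-sense addition rather than part of the reported protocol.

```python
import time
import numpy as np
import tensorflow as tf

# Placeholder model path and window length; replace with the exported model and its input size.
model = tf.keras.models.load_model("qrs_validator.h5")
window = np.random.randn(1, 300, 1).astype("float32")

for _ in range(5):                      # warm-up (not part of the reported protocol)
    model.predict(window, verbose=0)

start = time.perf_counter()
for _ in range(100):                    # 100 repetitions, as in the evaluation above
    model.predict(window, verbose=0)
total = time.perf_counter() - start
print(f"total: {total:.3f} s, average per inference: {total / 100:.3f} s")
```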
Figure 12 presents the filters from the first layer of the proposed architecture for both databases, MIT-BIH and CYBHi. Both sets of filters are initialized with the same seed. The filters are similar, but the filters from the CYBHi database (Fig. 12b) have a wider range when compared to the MIT-BIH ones (Fig. 12a). One possible reason is the noisy nature of the CYBHi signals. One can see how noisy the signals from the CYBHi database are in Fig. 13e,g when compared to the signals in Fig. 13a,c from a controlled database such as MIT-BIH. It is notable that several filters are sensitive to a noisy ECG, as shown in Fig. 13f,h. Besides, the same behavior is observed in the output of the filters for the positive samples (Fig. 13b,f) and the negative samples (Fig. 13d,h): in the first scenario, a peak is observed around the center of the activation map of the filters, while in the latter the peak is located at the edges of the signal. The MIT-BIH database has almost twice as many positive samples (QRS complexes) as the CYBHi database (see Table 1). With more data, the model learns better filters. Since the architecture is the same for both databases, the model trained with the CYBHi base suffers twice over, from less data and from noisier data.

Discussion

In this work, we proposed the use of a CNN for R-peak detection from a different perspective. Instead of using techniques based on a signal quality index, filters, or other signals to validate the occurrence of a heartbeat (multimodal approach), we applied machine learning techniques, more specifically CNNs, to recognize the pattern of a heartbeat. Our proposal aimed to improve the detection of a traditional algorithm for R-peak detection by acting as a validator method for R-peaks (or heartbeats). In that manner, we avoided a sliding window over the entire signal and, as a consequence, reduced the computational cost of the machine learning inference process. Since correct segmentation is critical for medical equipment, the positive predictivity should be weighted over the sensitivity. The reported results support this scenario, in which our approach enhanced the positive predictivity of the Pan-Tompkins R-peak detector on two distinct databases. However, it is worth highlighting that there is a trade-off between positive predictivity and sensitivity: low positive predictivity could compromise the application by emitting wrong alarms, for instance, while low sensitivity may result in a scenario where necessary alarms are not emitted. One path for future work is the design and application of filters to suppress high-frequency noise in the ECG signal, especially for off-the-person databases. Since filter design needs in-depth knowledge of the signal, a different approach is to apply a machine learning technique to learn which filter best fits each signal. Another research path is the fine-tuning of a pre-trained deep learning model to enhance the generalization of the proposed approach without losing positive predictivity. The proposed method is trained to detect the pattern of a normal heartbeat; however, in a real environment, irregular or arrhythmic beats may appear, and they may have a morphology completely different from that of a standard QRS complex. Thus, another future investigation path would be to explore models capable of classifying other classes (types of heartbeat).
2020-11-28T14:06:05.803Z
2020-11-26T00:00:00.000
{ "year": 2020, "sha1": "b28b35c438f2a3c3f6f5392f3306a985eccd89ac", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-77745-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4884684c6a784204d70ecc9d95c1e31f3ff2568e", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
256636119
pes2o/s2orc
v3-fos-license
Ice-volume-forced erosion of the Chinese Loess Plateau global Quaternary stratotype site

The International Commission on Stratigraphy (ICS) utilises benchmark chronostratigraphies to divide geologic time. The reliability of these records is fundamental to understand past global change. Here we use the most detailed luminescence dating age model yet published to show that the ICS chronology for the Quaternary terrestrial type section at Jingbian, desert marginal Chinese Loess Plateau, is inaccurate. There are large hiatuses and depositional changes expressed across a dynamic gully landform at the site, which demonstrates rapid environmental shifts at the East Asian desert margin. We propose a new independent age model and reconstruct monsoon climate and desert expansion/contraction for the last ~250 ka. Our record demonstrates the dominant influence of ice volume on desert expansion, dust dynamics and sediment preservation, and further shows that East Asian Summer Monsoon (EASM) variation closely matches that of ice volume, but lags insolation by ~5 ka. These observations show that the EASM at the monsoon margin does not respond directly to precessional forcing.

A basic requirement for reconstructing past environmental change is accurate understanding of sediment age. Here, the authors show that the interpretation of a benchmark archive in China has been inaccurate, and that ice volume primarily controls desert dynamics, sediment preservation, and precipitation at the site.

The margins of deserts are highly sensitive to climate change and human influences 1 . Small changes in vegetation, climate or land use drive major changes in sand dune and dust activity 2,3 , which in turn have major impacts on local populations, global dust emissions and climate forcing 4,5 . By extension, sedimentary records from the margins of deserts are highly sensitive indicators of past environmental change in these crucial areas 6 . The desert margin of the Chinese Loess Plateau (CLP; one of the world's most important terrestrial climate archives) is especially significant in this regard. In addition to recording East Asian Monsoon climate and Asian aeolian dust dynamics in loess and palaeosol units (systems that alter global climate and now affect billions of people), the area also records expansion and contraction of a desert sand sea that has experienced significant Holocene and recent desertification [7][8][9] . This is of particular relevance, given that the loess-palaeosol climate proxy record from the CLP desert marginal Jingbian site (Fig. 1) has been adopted as the terrestrial stratotype for the International Commission on Stratigraphy (ICS) global benchmark Quaternary chronostratigraphic scheme 10,11 , plotted on the orbitally tuned CHILOPARTS time series 12 . The ICS chart for the Quaternary is the reference point for climatic evolution over the past 2.7 Ma, including marine isotope stages, the Antarctic isotope record, and the CLP and Lake Baikal sequences 13 ; it underpins our fundamental understanding of past global environmental change and enables correlation of stratigraphic records worldwide. Its accuracy is therefore of central importance in past climate research. Nevertheless, our understanding of how processes in desert marginal environments impact the preserved sedimentary record is limited, and the longer-term driving forces behind sand activity remain debated due to the limited preservation potential of dune sediments 9 .
Sandy desert areas are known to be highly complex and dynamic environments, with the location of deposition and erosion shifting rapidly and across small distances in response to forcing by winds and precipitation 14 . Jingbian lies in an area that has been covered by expanded sand dunes in the past 15 . Such processes could therefore severely compromise the completeness of the stratigraphic record and undermine the integrity of correlation based, non-independent chronostratigraphic models such as the one used in the ICS scheme. Furthermore, detailed optically stimulated luminescence (OSL) dating of more central CLP sites over the last glacial has shown that age models derived from correlation-based methods contain significant inaccuracies of up to 10 ka 16 . More fundamentally, a recent proposal argues that the CLP is a highly dynamic environment which leads to substantial internal aeolian recycling of pre-deposited material and a reduction in CLP area size 17,18 . Such sediment recycling would undermine routine desert marginal CLP palaeoenvironmental reconstruction as well as the basis of understanding of past monsoon, dust and desert dynamics in this region. By implication this hypothesis also calls into question the accuracy of the ICS Jingbian chronostratigraphy. It is thus crucial that the past loess and desert record at Jingbian is independently constrained. Here we develop a fully independent age model for the Jingbian section over the last ~250 ka using a combination of the quartz OSL and K-feldspar post-IR Infra-Red Stimulated Luminescence (IRSL) techniques 19,20 applied at high sampling resolution. This model shows that Jingbian is characterised by numerous hiatuses of up to ~60 ka that are highly spatially variable across a heavily eroded gully section (Fig. 2). This radically changes the palaeoclimatic interpretation of the sedimentary sequence preserved at the site, supports a revised model for development of the CLP, provides new insights into East Asian Summer and Winter Monsoon (EASM/EAWM) dynamics, and requires a major revision of the ICS chronostratigraphic scheme for Jingbian.

Results

A luminescence-based chronostratigraphy for the past 250 ka. Our new luminescence age model is based on 220 ages on samples taken with a vertical spacing of between 5 and 40 cm at 5 Jingbian sections dug at the ICS stratotype location (Fig. 2). It constitutes the largest and most detailed luminescence data set to date, and to our knowledge is the most comprehensive geochronological analysis yet undertaken at a single site. (Fragment of the Fig. 1 caption: ... Indian Summer Monsoon (ISM). The data set is provided by the Data Center for Resources and Environmental Sciences, Chinese Academy of Sciences (RESDC) (http://www.resdc.cn). The base map is a coloured DEM map derived from SRTM 90 m data 84 and the inset map is based on http://www.arcgis.com/home/item.html?id=c3265f30461440c2999add34bcae8e0a. A detailed aerial photograph of the Jingbian site with the studied loess profiles (A, B, C, D, E) marked is given in Supplementary Fig. 1.) Details on site location, sampling, luminescence dating methodology, age depth modelling and proxy analyses are given in Methods. There are two striking features of the age-depth models for the Jingbian sections (Fig. 2). Large jumps in ages are found in many sections, indicative of large hiatuses in the record of up to 60 ka.
Crucially, these substantial gaps are not observable in the visual or proxy stratigraphy and have not been demonstrated previously at Chinese loess sites, yet have major implications for the chronostratigraphic model and climate reconstruction. In addition, while the age ranges of some of the sections overlap, the nature of the preserved record at each section is inconsistent, indicating a highly spatially variable relationship between age, depth, sediment type and preservation. As with many CLP-desert marginal sites, Jingbian is located in a relatively flat plateau landscape with the sections exposed in a deeply incised gully system (Figs. 1 and 2, Supplementary Fig. 1). Some sections show hiatuses where others concurrently show deposition, and yet other sections exhibit extremely high accumulation rate phases of short duration (Fig. 2). No single section preserves the full sequence covered at the site, as shown in our composite climate records (Fig. 3). As such, these gully sequences require consideration as dynamic landforms, where gully geomorphology and local morphological context must be taken into account, together with the stratigraphic sequence. One consequence of this is that while at many CLP sites the Holocene record has been partly disturbed by human activity 21 , a uniquely undisturbed 2 m Holocene sequence is preserved at Jingbian (see Fig. 2, section D), protected by unconformable deposition within the gully system and dated by 31 luminescence ages. Thus, the luminescence results reveal that the gully must pre-date the Holocene and that the gully landform itself is recorded in the stratigraphic record at the site. While this dynamism adds to the complexity of interpreting these stratigraphic sequences, our independent dating demonstrates that a detailed composite environmental history can be obtained through luminescence dating of multiple overlapping sections (Fig. 3). This is also reflected in the detailed record of the last interglacial in section E (Fig. 2). Ice-volume-forced processes in a desert marginal environment. When our climate proxy and stratigraphic records are plotted on our new age model against 65°N July insolation 22 , marine oxygen isotope stratigraphy LR04 stack 23 and Lake Baikal biogenic silica 13 (Fig. 3), some striking patterns become apparent. Notably, there is a near total lack of preserved record during the last two glacial phases (MIS 2-4 and 6), but with preserved material from the glacial stage MIS 8, as well as interglacials MIS 1, 5 and 7. The large hiatuses appear to terminate close to or following the rapid shift away from peak Northern Hemisphere ice volume at the end of the MIS 2-4 and 6 glacial stages (Terminations I and II). During less positive marine δ 18 O isotope stages when Northern Hemisphere ice volume was lower, loess sediments are generally preserved. During MIS 7 and the second half of MIS 8, there is relatively low amplitude variability in ice volume and full preservation of the loess record, including in the comparatively low ice volume glaciation of MIS 8. Two palaeosols associated with the two ice volume minima of MIS 7 are also preserved, separated by a loess unit representing the deep stadial during MIS 7, while MIS 5 and 1, which have no such deep stadials, are only represented as palaeosols at Jingbian. 
Based on this pattern and its relationship to the δ 18 O record, we propose an explanation of the mechanisms behind desert marginal sediment accumulation, preservation and erosion, and hence the controls on desert dynamics. The greatly enhanced maximal extent of Northern Hemisphere ice limits during peak MIS 2 and 6 is known to have strengthened the Siberian High and moved the polar front southwards, enhancing cold air outbreaks, and strengthening winds and aridity 24 . The associated water stress would have reduced vegetation stabilisation of dunes while cold air outbreaks would have driven seasonally strong winds, promoting deflation and sediment movement; these erosive processes changed Jingbian from a depocentre into a dust source and account for the hiatuses at the site. On shorter timescales, the polar front, modulated by Atlantic Meridional Overturning, has been shown to drive strengthened EAWM circulation and dust deposition on the Loess Plateau 25 , while the strength of the Siberian High is tied to ice volume and snow cover over multiple timescales 24,26 , supporting this model. While the MIS 2-4 and 6 hiatuses cover most of these glacials (Fig. 3), accumulation may still have occurred locally, but the strong erosional events during peak glaciation would have removed previously deposited material. Sand was deposited at the end of both hiatuses, indicating both an expansion of the Mu Us desert and some dune stability. As no sand was preserved during the prior glacial episodes, a highly mobile sand sea is implied, close by or covering the site and providing a plentiful supply of saltating impactor grains to promote deflation of existing deposited material. Thus, the two major hiatuses at Jingbian during MIS 2-4 and 6 are interpreted as erosional unconformities resulting from enhanced dune mobility driving erosion of underlying strata. During glacial MIS 8 and the stadials within MIS 7 and 5, glaciation did not extend as far as during MIS 2 and 6 ( Fig. 3) and so cold air outbreaks, winter monsoon intensity and aridity was not sufficient to drive such dune expansion and dust deflation. Our revised age model and resulting sedimentary history has a fundamental impact on the interpretation of the global benchmark record at Jingbian. In traditional Loess Plateau chronostratigraphic models, loess/sand units and palaeosols are considered of glacial and interglacial age, respectively. Here we propose a different model for Jingbian. In our view, palaeosol units are indeed indicative of interglacial phases of enhanced EASM (high magnetic susceptibility; MS) and weaker EAWM (finer grain size). However, rather than representative of glacial phases, loess units in the upper part of Jingbian appear to be mainly associated with stadials within interglacials, during which relatively increased ice volume drives cold air outbreaks, aridity and enhanced EAWM circulation, with associated silt transport and dust trapping at Jingbian. Deep glacial phases are removed from the record due to erosion, and sand units occur over more restricted time intervals, both within interglacials and glacials, with both indicating enhanced dune activity and expansion of the Mu Us (Fig. 3). Although the proxy records show general antiphase behaviour of the EASM with the EAWM (Fig. 3), sand accumulation can also occur even during enhanced summer monsoon conditions (e.g., MIS 5e). 
This suggests that sediment availability, EAWM/Siberian High driven winter aridity, and cold air outbreaks and enhanced wind strength drive dune mobility, desert expansion and sand deposition at desert marginal sites 15 . This is in contrast to the idea that dune expansion and deposition is controlled by summer monsoon-driven moisture availability 10 . Recent identification of relict dune sediment from the LGM preserved in isolated frost wedges in the Mu Us 9 confirms intense aeolian activity at this time, but also argues for the domination of net erosion due to high winds and aridity. This explains the lack of dune record from the last glacial in the Mu Us 8 and supports our deep glacial erosional unconformity model. Thus, desert sand dune activity in this part of China is controlled by the intensity of EAWM circulation in Asia, in turn driven by ice volume in the Northern Hemisphere through the Siberian High. Our findings suggest that during peak ice volume phases, this climatically driven erosion in the Mu Us also extended onto the edge of the CLP, driving development of multiple unconformities in one of the global benchmark Quaternary sediment records. This clearly limits the use of Jingbian as a benchmark site for the Quaternary stratigraphic column, and we suggest that a more central CLP site may be more appropriate for use in the Quaternary chronostratigraphic subdivision. Currently, our results demonstrate that the present ICS scheme for Jingbian is incorrect and should be revised. In addition, our new chronostratigraphic model has a number of significant implications for understanding the CLP, desert sand and atmospheric dust dynamics, as well as monsoon climate. Jingbian lies just south of an escarpment marking the boundary between the Ordos Platform (including the Mu Us desert) and the northern margin of the CLP. Based on the presence of yardangs and windgaps cut into Quaternary strata north of this boundary, as well as on loess provenance data, it has recently been proposed that the escarpment has retreated south and east due to wind erosion during peak glacials in a process of 'aeolian cannibalism' of pre-deposited loess material 17,18 . Our finding that large amounts of sand and dust are eroded during peak glacial conditions at Jingbian supports the reinterpretation of the CLP as a dynamic landform, with deposits undergoing reworking and recycling along the boundary with the Mu Us desert. This is the first direct, independent evidence to support 'aeolian cannibalism' of pre-existing loess 17, 18 alongside reworked Yellow River alluvial sediments 27 as the source of Quaternary loess to the central CLP and may indicate that indeed the CLP is being reduced in size due to peak glacial wind erosion. That this reworking at Jingbian occurs during those glacial phases (the most recent) with greatest ice volume is also consistent with a long-term increase in aeolian dust CLP accumulation rates over the Quaternary 28 . As glacial stage ice volumes increase and cold air surges penetrate further south, generating large, erosive NW to SE tracking dust storms 29, 30 over the Mu Us, Yellow River alluvial platform and northern CLP, material is reworked and incorporated into younger CLP deposits further south. As such, this apparent long-term accumulation rate increase may be more tied to increasing ice volume and loess recycling rather than to changing aridity or dust source alone. Kang et al. 31 and Stevens et al. 
32 noted that independently dated central CLP sites show enhanced dust accumulation during the peak of the last glacial (23-19 ka). This general peak in last glacial dust activity coincides with the hiatuses at Jingbian (Fig. 3), and with erosive activity in the Mu Us 9 . We propose that enhanced ice volume may then also be the driver of enhanced Asian dustiness during short phases of the late Quaternary, and erosion of desert marginal loess will likely directly contribute to increased atmospheric dust loading downwind on the central CLP. EASM, ice volume and lagged response to insolation forcing. EASM-driven MS peaks preserved at Jingbian show a remarkable match with reductions in ice volume (Fig. 3). MS also shows variability at the same frequency as precessional cycles in the Northern Hemisphere summer insolation record, but systematically lags behind July insolation at 65°N (Fig. 4). Multiple independent records and models support a role for precessional forcing in driving Asian summer monsoon intensity [33][34][35][36] and monsoon variation generally is regarded as a function of low latitude solar insolation 37 . However, over geologic timescales the degree to which there is a direct, singular forcing response of the monsoon to precession, or one where multiple factors such as CO 2 and sea level modulate a lagged EASM response, is unclear. Some authors advocate a direct response with no lag, based often on speleothem δ 18 O records 33,34 , while others argue for a c. 8 ka lag compared to absolute annual maximum insolation, based mainly on marine records 35,38 . As previous studies only focus on the last glacial termination 39, 40 , our results permit the first independently dated analysis of multiple precessional cycle phase lags between EASM proxies and insolation forcing in the loess record, and provide an independent test of these conflicting hypotheses. A clear, consistent phase lag between 21st July 65°N isolation and the Jingbian EASM is seen across all transitions in our dataset (Fig. 4, Supplementary Table 2). The insolation lag calculation and the effect of different life-time averaged water content assumptions on this lag are outlined in Methods. While the size varies due to age model uncertainty, the average lag is 4.9 ka (s.e.m. = 0.7 ka, n = 9), which would increase to~7 ka if the target reference curve for phase measurement is taken as the absolute maximum insolation curve, as suggested by Clemens et al 35 . This is within uncertainties of the lag proposed from marine records such as the Arabian Sea summer monsoon stack 41 , and contrasts sharply with results from speleothem δ 18 O 34 . We argue that the observed lag is not related to delays in MS signal acquisition; both theoretical models and empirical evidence point to rapid oxidation/reduction response of iron oxides and formation of superparamagnetic minerals that enhance MS [42][43][44][45] . Although transmission of the forcing signal through the climate system may account for some of the lag, we also note that there is remarkable similarity between our independently dated MS record and global ice volume as represented in the LR04 stack 23 (Fig. 3). The only exception is during MIS 7 where a peak in ice volume has no MS/EASM equivalent peak in our Jingbian record (Fig. 3). However this may be an artefact of preservation; the missing peak occurs at the point of increased sand content bracketed by a deeper glacial phase (Fig. 3). 
Given the larger absolute age uncertainty at this time point and the more scattered ages in the section this data set comes from (B, Fig. 2), it is quite possible an undetected erosional event has removed this peak. While low latitude insolation directly drives monsoon variability at the precessional band 37 , the lagged MS record shows there cannot be a direct response at Jingbian; there must be other factors that heavily modulate the monsoon response in the region. This seems plausible given that the summer monsoon only penetrates as far north as Jingbian due to factors such as land-sea configuration 37 . As such, changes in this configuration due to ice volume would be expected to alter summer monsoon patterns at the site. The match between the MS record and the LR04 stack implies a response of the monsoon at Jingbian to insolation forcing that is similar to the response of the Northern Hemisphere ice sheets, potentially controlled by combined eccentricity, obliquity and precession, or alternatively that Northern Hemisphere ice volume dominates the forcing of the EASM 28,36 . We suggest that variation in the EASM at Jingbian over the last 250 ka can be explained by combined insolation, ice volume, and CO 2 forcing, supported by results from δ 13 C of loess organic matter, recent climate model simulations and many marine records 35,38,40 . Coupled, ocean-atmosphere-sea ice-land surface climate modelling of the last glacial monsoon suggests that atmospheric CO 2 driven high latitude temperature changes drive latitudinal shifts in zonal circulation and the Intertropical Convergence Zone (ITCZ), in turn affecting monsoon precipitation 40 . These shifts would also have affected meridional temperature gradients, snow and ice cover on high ground, ice sheet dynamics, and hence global sea level (land-sea configuration), which in turn will also directly modulate summer monsoon circulation 37,[46][47][48][49] . Additional temperature forcing is driven by insolation at high latitude 40 . In monsoon marginal areas like Jingbian, such factors are likely to be the dominant control on summer monsoon dynamics, even if direct precessional forcing dominates monsoon intensity in core monsoon areas 37 . Variations in sea-level and CO 2 forcing will alter the spatial extent and coverage of the summer monsoon, which will cause significant changes to precipitation levels at monsoon marginal sites, consistent with our record at Jingbian. As such, the previously widely accepted hypothesis of dominant direct low-latitude precessional forcing of EASM precipitation patterns seems increasingly implausible at the monsoon margin. Variation in monsoon proxies in various archives is consistent with this geographic effect with regard to monsoon forcing 38 , with high latitude forcing exerting a dominant control on monsoon precipitation patterns in monsoon marginal areas. Our data apparently conflict with some speleothem δ 18 O records of summer monsoon rainfall 34 . However, reinterpretations of speleothem δ 18 O data suggest that either this proxy is not solely influenced by summer monsoon intensity 50 or that δ 18 O is a function of integrated rainfall amounts between monsoon source and the cave site 51 . If the latter is true it would imply this integrated rainfall was a function of low latitude precessional forcing. 
However, this is still consistent with our model as we would expect that integrated summer monsoon rainfall prior to precipitation at cave sites close to the southern part of the CLP would be dominated by low latitude precessional forcing, as this rainfall occurs dominantly in the core monsoon region. However, the extent of summer monsoon rainfall closer to the monsoon margins, like on the north CLP, would still be dominantly controlled by the spatial extent and coverage of the summer monsoon, itself modulated by ice volume-sea level-CO 2 forcing.

Methods

Study site. The study site is located in Jingbian County and comprises five sections (A, B, C, D, E; where >1 m of material was removed to freshly expose the sediment) (see Supplementary Fig. 1 where locations of individual sections are also given). The elevations of the individual sections were measured to within a few cm using differential GPS and our coordinates measured at section A were 37°29'52.8"N, 108°54'14.4"E. It should be noted that these are different to the coordinates given for the Jingbian site by Ding et al. 10 . However, as we outline below, there appears to be an error in the site coordinates quoted in Ding et al. 10 and we here demonstrate that in fact we are working on the same site; the ICS stratotype section. Firstly, coordinates for the stratotype site position subsequently given to us by E. Derbyshire are 37°29'58.74"N and 108°54'2.72"E (E. Derbyshire, personal communication 2015), with an elevation of ~1700 m above sea level (a.s.l.). Note that these coordinates refer to the position of a pylon/mast immediately to the west of the gully and are different to the coordinates given for the site by Ding et al. 10 , in which Derbyshire is a co-author. The coordinates provided by Derbyshire are also ~330 m from our differential GPS measured location of section A (see above), consistent with the position of the section on the east side of the gully ~300 m from the pylon (Supplementary Fig. 1). Furthermore, Ding et al. 52 first presented the Jingbian section, which was subsequently analysed in Ding et al. 10 . Here they noted that the section was located near the settlement of Guojialiang. Indeed, the nearest settlement to both our sampling site and the revised coordinates provided by Derbyshire is Guojialiang. However, the coordinates given in Ding et al. 10 provide a location ~40 km from Guojialiang, inside the Mu Us desert sand field, with this location also inconsistent with the site descriptions given in Ding et al. 10,52 and lacking any obvious gully exposure. Finally, during our fieldwork, a local farmer confirmed that a group of Chinese scientists had worked previously at our sections D and E, and we could distinguish prior sampling (presumably for grain size and/or MS) at many sections within the gully. We therefore conclude that the site coordinates given in Ding et al. 10 are erroneous. Given the revised coordinates from Derbyshire and the match of our sections with the site descriptions and nearby settlements in Ding et al. 10,52 , we are very confident that we were working at the same site as is described in Ding et al. 10 and therefore the ICS stratotype site.

Luminescence dating. Samples for luminescence dating were collected by hammering stainless steel tubes (diameter 2.5 or 5 cm; length up to 25 cm) with a vertical spacing of 5-40 cm into freshly cleaned sediment profiles.
The tubes were opened under subdued orange light at the Nordic Laboratory for Luminescence Dating (Aarhus University, DTU Risø campus, Denmark). The outer~5 cm of each tube end was removed and reserved for dose rate analysis (see below). The inner material was wet-sieved to extract the 63-90 and 90-180 µm grain size fractions. These fractions were treated with HCl and H 2 O 2 to remove carbonates and organic material, respectively. The fractions were etched for 20 min in 10% HF to remove coatings and the outer alpha irradiated layer. After washing in 10% HCl, the fractions were dried and quartz and K-feldspar rich extracts (K content = 12.70 ± 0.10%, n = 5) were separated using a heavy liquid solution (LST 'Fastfloat') with density 2.58 g cm −3 . For the samples from the D section, the quartz-rich fraction was subjected to concentrated HF treatment for 60 min to remove any remaining feldspar. The purity of the quartz OSL signal was confirmed by the absence of a significant IRSL signal using the OSL IR depletion ratio 53 feldspar rich fractions were mounted as multi-grain aliquots containing hundreds of grains on stainless steel cups. All luminescence measurements were carried out using Risø TL/OSL DA-20 luminescence readers equipped with calibrated 90 Sr/ 90 Y beta sources delivering between~0.10 and~0.20 Gy s −1 to multi-grain aliquots in stainless steel cups. Quartz grains were stimulated using blue LEDs (470 nm;~80 mW cm 2 ) and the OSL signal was detected through 7.5 mm of U-340 glass filter. Feldspar grains were stimulated using IR LEDs (870 nm;~140 mW cm −2 ) with the IRSL signal detected through a blue filter pack (combination of 2 mm BG-39 and 4 mm CN-7-59 glass filters). Single aliquot regenerative-dose (SAR) protocols 54 were used to determine the quartz OSL and K-feldspar post-IR IRSL equivalent doses (Supplementary Table 1). For the quartz measurements, a preheat of 260°C (duration: 10 s) and cut-heat to 220°C was used; each SAR cycle ended with a high temperature (280°C) blue light stimulation for 40 s. Natural, regenerative and test dose signals were measured at 125°C for 40 s. The initial 0.00-0.32 s of the signal minus an early background (0.32-0.64 s) was used for dose calculation. Feldspar aliquots were preheated at 320°C for 60 s for natural, regenerative and test dose signals. They were then stimulated twice with infra-red light for 200 s. The first IR stimulation temperature was 200°C (IR signal) and the subsequent IR stimulation temperature was 290°C (post-IR IRSL signal, pIRIR 200,290 ). The IR clean-out at the end of each SAR cycle was carried out at 325°C for 200 s. The first 2 s of the post-IR IRSL signal minus a background estimated from the last 50 s was used for dose calculation. It is well-known that quartz OSL from Chinese loess is dominated by the fast component and generally behaves well in a SAR protocol 32, 55-57 . However, typically age underestimation is observed when doses >~150 Gy are measured in loess using quartz SAR OSL 56,58,59 . Therefore, we restricted the use of the quartz OSL signal to samples from the upper 480 cm in section D. Below this limit the quartz SAR OSL D e values are ≥160 Gy and these results were not used for age depth modelling. Figure 5a, b shows the results of a preheat plateau test on sample D38141. It can be seen that over a wide temperature interval quartz D e is independent of preheat temperature, recycling ratio is close to unity and recuperation is low. 
A dose recovery test 60 using the SAR protocol outlined in Supplementary Table 1a was carried out on 10 samples from section D (D38102, −04, −10, −13, −18, −24, −40, −54; 6 aliquots per sample) with given doses ranging between 10 and 50 Gy. Prior to giving the laboratory dose, the natural quartz OSL signal was reset by two blue light stimulations (100 s each) separated by a 10,000 s pause to allow any photo-transferred charge in the 110°C TL trap to decay. The results of the dose recovery test are shown as a histogram and a measured to given dose plot in Fig. 5c, d, respectively. It can be seen from these results that our SAR protocol (preheat 260°C/10 s, cut-heat 220°C) is able to measure a quartz dose given prior to any heat treatment with an acceptable degree of accuracy. Figure 6a, b illustrates the relationship between the individual aliquot D e values (normalised to the sample mean D e ) and the recycling ratio and the OSL IR depletion ratio for 54 samples from section D. There does not appear to be any trend in these relationships indicating that the D e value cannot be improved by rejecting aliquots with relatively poor recycling ratios (e.g., deviating >10% from unity) and that the quartz D e values are insensitive to the levels of feldspar contamination present in these extracts. The quartz D e values for section D are tabulated in Supplementary Data 1. Since the discovery of more stable post-IR IRSL signals 61 compared to conventional IRSL signals measured at ambient temperature, several SAR protocols have been developed to use IRSL to date beyond the quartz OSL dating range 19,[62][63][64] . Section A of the Jingbian site has already been dated using IR stimulation at 290°C after IR stimulation at 200°C (i.e., pIRIR 200,290 ; Supplementary Table 1b) 20 . Here we present more laboratory tests of the pIRIR 200,290 signal from the coarsegrained feldspar extracts. Figure 7a shows a first IR stimulation plateau 19 and a multi-elevated temperature (MET) D e plateau (using the protocol described by Li and Li 64 ) for the deepest sample in section B. The first IR stimulation plateau results suggest that an apparently stable pIRIR signal is reached when the first IR stimulation temperature is ≥170°C. This is consistent with the observations of Li and Li 65 who showed that for Chinese loess samples with D e values >~400 Gy, the pIRIR 200,290 D e values are greater than pIRIR 50,290 D e values. Unfortunately, we did not observe a plateau region in the MET-pIRIR data from this sample and this protocol was not considered further. Based on the first IR stimulation plateau, we chose the pIRIR 200,290 signal as the preferred dating signal for this study. Three other Chinese loess sections have also been successfully dated using the pIRIR 200,290 signal from polymineral coarse silt grains 66 and from sand-sized K-rich feldspar 67,68 . Based on extensive laboratory testing, Yi et al. 68 concluded that in pIRIR dating, it is advisable to check for the dependence of the results on test dose size. Figure 7b presents the dependence of the dose recovery result on test dose size for sample D38135 (sample also used in Buylaert et al. 20 ). The dose recovery test was carried out by adding beta doses on top of the natural dose in the sample. From these data we deduce that small test doses (<~20% of the dose to be measured) should not be used when large (>500 Gy) doses are measured, in agreement with the observations of Yi et al. 68 . Colarossi et al. 
69 have shown that in their sample at least part of this effect could be attributed to charge carry-over from L x to T x . Figure 7c presents another dose recovery test on bleached (24 h in Hönle SOL2 lamp) 90-125 µm feldspar-rich grains from sample D38146 (test dose was~40% of dose of interest). The residual dose in this sample after bleaching was 9.9 ± 0.2 Gy (n = 3) and this value was subtracted from the measured doses. It can be seen that for doses up to at least~800 Gy, our chosen SAR pIRIR 200,290 protocol is able to satisfactorily recover a dose given in the laboratory. Based on these results, the test dose size for all our D e measurements was kept between~30% and~70% of the measured dose. Post-IR IRSL signals bleach at a much slower rate than the quartz OSL signal 19 and there appears to be a residual very-hard-to-bleach (or un-bleachable) component present in the pIRIR 200,290 signal which needs to be taken into account 70,71 . Based on a long-term (>80 days) bleaching experiment, Yi et al. 68 concluded that a constant (or very difficult to bleach) residual pIRIR 50 Material from the outer end of the tubes was used for dose rate analysis. Samples were first ignited at 450°C for 24 h, homogenised using a ring-grinder and finally cast in wax in a cup or disc geometry. After storage for >21 days to allow 222 Rn to build up to equilibrium with its parent 226 Ra, they were counted for at least 24 h on one of the six gamma spectrometers from the Nordic Laboratory for Luminescence Dating (Aarhus University). The calibration of the spectrometers is described in Murray et al. 73 . The resulting 238 U, 226 Ra, 232 Th and 40 K concentrations are given in Supplementary Data 1. Note that for some analyses, the data for 238 U is not available due to limited sensitivity of some detectors; in this case the 226 Ra value was used for the entire U series. Radionuclide concentrations were converted into dry beta and gamma dose rates using the conversion factors of Guérin et al. 74 . During calculation of the infinite matrix dry dose rate, we assumed a 222 Rn retention factor of 0.80 ± 0.10 for the 238 U chain; at two standard deviations, this covers a range from no Rn loss to 40% Rn loss. Total dose rates were calculated using life-time average water contents of 10 ± 5 and 15 ± 5% (weight water/dry sediment weight) for loess and soil units, respectively (this assumption is discussed in more detail below). A small cosmic ray contribution to the dose rate was added based on Prescott and Hutton 75 . For K-feldspar grains, we have added an internal beta dose rate based on a K concentration of the feldspar grains of 12.5 ± 0.5% 76 . This assumption has been tested by measuring the K concentration in 5 feldspar rich extracts (one from each section) using an XRF-attachment mounted on the Risø TL/OSL reader. After chemical separation, we are confident that our samples are almost entirely made up of quartz and feldspar. Thus, the XRF instrument is calibrated using a set of standards which are notionally identical, in terms of composition, to end members of the alkali-and plagioclase feldspar series and to quartz; these standards are arranged to fully cover the sample area. This allows us to convert our count rates under the Na, K and Ca X-ray peaks into relative feldspar contributions (i.e. % of total made up of K-feldspar, etc.). The calibration further allows us to attribute a proportion of Si counts to the 3 feldspar contributions, and any remaining Si counts are attributed to quartz. 
In general, the sum of the 4 components will be less than unity because the sample area may not be fully covered, and so all contributions are normalised to 100%. Once the feldspar analyses have been located on the ternary, the results can be converted to absolute concentrations of K (and Na and Ca if desired) using stoichiometry. The average K content of these five samples is 12.70 ± 0.10%, in excellent agreement with the value proposed by Huntley and Baril 76 (Fig. 8). A Rb concentration of 400 ± 100 ppm was also assumed 77 . There is furthermore a small contribution from U and Th in K-feldspar grains 78 and so an assumed effective internal alpha dose rate contribution from U and Th of 0.06 ± 0.03 Gy ka −1 was also included. A lower internal alpha dose rate contribution of 0.02 ± 0.01 Gy ka −1 was assumed for quartz grains based on the work by Vandenberghe et al. 79 . The D e values, radionuclide concentrations, total dose rates and resulting quartz OSL and feldspar pIRIR 200,290 luminescence ages are given in Supplementary Data 1. Age-depth modelling. Bayesian age-depth modelling was performed using the Bacon code 80 , based on altogether 220 OSL/pIRIR 200,290 data points in sections A-E. Inverse accumulation rates (sedimentation times, yr cm −1 ) were estimated from 3 to 8.8 million Markov Chain Monte Carlo (MCMC) iterations and these rates formed the age-depth models for each section from A to E (see Supplementary Data 1 and Supplementary Fig. 2a-e). Inverse accumulation rates were constrained by non-default prior information: acc.shape = 1.5 and acc.mean = 0.025-1.0 for the gamma distribution, and mem.mean = 0.7 and mem.strength = 4 for the beta distribution describing the memory effects (or autocorrelation) of inverse accumulation rates. In all cases, the modelling thickness was specified as 20 cm and Gaussian error distributions were applied (i.e., normal = TRUE). Age modelling was run to achieve 5 cm final resolution. Proxy analyses. Adjacent to the luminescence sampling tubes, samples for MS and grain size analysis were collected at 5-10 cm depth intervals. MS samples were measured in the laboratory using a Bartington MS2 magnetic susceptibility metre. Approximately 10 g of each sample was oven-dried at 38°C, placed into weakly magnetic plastic boxes and measured three times to obtain an average value. Finally, these average values were normalised by the sample mass in order to obtain the mass-specific MS. Grain-size samples were always collected at 5 cm intervals; about 0.2-0.3 g of bulk material was measured using a Beckman Coulter LS13320 laser diffractometer. The samples were dispersed in 1% ammonium hydroxide for 24 h, and sonication was employed immediately prior to adding the sample to the water column. The settings were verified by means of reproducibility tests of more than 50 sub-samples on both soil and loess layers. Five sub-measurements were conducted and at least three sub-measurements were used for averaging. Dismissal of sub-measurements from the averaging was employed when individual curve data implied bubbles in the system. The low-frequency (470 Hz) MS and sand fraction (>63 µm) results are summarised in Supplementary Data 1. Lag calculation and water content assumption. To compare the July insolation curve (65°N) with the MS records at Jingbian, polynomials were fitted to the data sets with an output resolution of 0.1 ka. Minimum values of the first derivative of the polynomials defined inflection points. 
Lags of the MS records compared to the insolation curve were calculated as age differences between the respective inflection points (Supplementary Table 2). The effect on the insolation lag of different life-time averaged water content assumptions in luminescence dating is important in this study. Our choice of water content and its uncertainty is first discussed with respect to literature values and the relevance to individual samples is then considered using the section containing the Holocene soil (section D). We then investigate the dependence on different water content assumptions of the apparent lag between our luminescence dated MS record and the insolation record. Firstly, although there is some variability in the published water content values for Chinese loess (see discussion in Stevens et al. 81 ), the values used in this study, of 15 ± 5% w.c. for soil and 10 ± 5% w.c. for loess layers, are similar to previous water content assessments for loess/palaeosols from sites in the N and NW of the CLP 57, 82 . In addition, Chen et al. 83 used a value of 10 ± 5% for a single sample collected in the S8 palaeosol at Jingbian. We next consider the water content required to reduce the EASM lag to 0 ka for section D. For the two Holocene samples (D38132, w.c. 15% and D38136, w.c. 10%), this would require increasing the water content to~30% and~25%, respectively. These water contents are 3 standard deviations from the values used and are close to saturation for sandy loess deposits. However, it is likely that the upper loess-palaeosol units at Jingbian have been well-drained since deposition: the gully is at least 10 ka old since the Holocene soil is inset into the gully system and the current water table is now around 300 m below the sampling level in a >280-m-deep gully system 10 . The river into which the gully flows has incised into Pliocene red clay below the Quaternary loess. The age of this feature is unknown but is likely to be at least multiple glacial-interglacial cycles. It is thus expected that the upper loess-palaeosol units have remained at least several tens of metres above the water table for the majority and probably all of their burial life-time. Thus, we consider it unlikely that the life-time average water content of this site could have approached the levels that would be required to reduce the lag to 0 ka. It is also worth noting that the water content values required for a zero lag would exceed almost all published values for even southern CLP sites, where precipitation levels are double than those at Jingbian. If, on the other hand, our water content estimates are too high, the dose rates would be higher, the luminescence ages lower and the lag with the insolation larger. Thus, in all likely water content scenarios, there is a significant lag between the EASM recorded in loess and insolation. If we now make the additional assumption that the underlying mechanisms causing the insolation lag have not varied systematically with time (which ought to be safe given the lack of an obvious systematic change in ice volume and CO 2 back in time at insolation inflection points), we would in turn expect the insolation lag to have remained constant within some bounds over the past~250 ka. This is precisely what is observed in our data (Supplementary Table 2 and black symbols in Fig. 9). 
However, increasing the water content by one standard deviation (i.e., from 15% to 20% for soil and from 10% to 15% for loess) causes an increase of 4.7% and 4.1% in the quartz and feldspar ages, respectively. Recalculation of the insolation lag using ages based on these higher water contents introduces a negative trend in the insolation lag versus insolation inflection point graph in Fig. 9 (red symbols). Indeed, using these water contents suggests the physically unrealistic scenario that prior to ~130 ka the loess record of monsoon variability formed before the change occurred in the driving force (change in insolation). A similar but positive trend in the size of the lag is observed when the water content is decreased by 5% (green symbols in Fig. 9). In summary, changing the water content by ±5% introduces trends in the lag with time and increases the standard deviation of the lags from the original ~2 ka (Supplementary Table 2) to ~3 ka. We conclude that our current water content assumption remains the most likely. It does not produce any systematic trend in the insolation lag with time and, if anything, the uncertainty on the water content has been overestimated.

Data availability. The data that support the findings of this research can be found in Supplementary Data 1 or upon request from the corresponding author.

Fig. 9: Insolation lag as a function of insolation inflection point time for different water content assumptions (reduced w.c.: loess 5%, soil 10%; current w.c.: loess 10%, soil 15%; increased w.c.: loess 15%, soil 20%; average lag 4.9 ka). The dashed line is drawn at the average of the insolation lag data given in Supplementary Table 2 (black symbols) and shown in Fig. 4; the solid line indicates no lag.
2022-06-27T03:31:35.208Z
2018-03-07T00:00:00.000
{ "year": 2018, "sha1": "01d9b0b8acbbdd362ac0be3af3d9c5218709420f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-018-03329-2.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "4a2804c356cf573d5b8c45baa3a0cf27a05da257", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
56461745
pes2o/s2orc
v3-fos-license
Something Is Like Somebody in Some Way: A Quale Explanation of Verbal Personification This article takes the Ternary Perspective of Sign Representation in semiotics as its theoretical foundation and constructs quale explanation framework of verbal personification to explain the use mechanism of verbal personification and its quale-sense. Verbal personification contains very rich quale-sense, which is expressed by linguistic symbol and may be perceived in mind by “something is like somebody in some way”. Image construction is a critical conscious activity in the forming of verbal personification. Quale explanation is the crucial medium that connects personification and its quale-sense. Quale explanation framework of verbal personification comprehensively interprets the production and comprehension processes of verbal personification. Introduction The use of rhetoric is a conscious activity. It has two kinds of expression deviations: deviation of expression form and that of semantic expression. Personification, one of the semantic rhetoric utterances, belongs to deviation of semantic expression [1], for instance: (1) John saw the anger of the tempest. [1] (2) The window winked at me. [2] In (1), the tempest, or the violent storm, was personified because it could become angry and thus had the ability to express feelings. In (2), the window was attributed the human action "wink". In fact, the inanimate window could never perform any human action. Actions like "wink" could be performed by only human beings. After the window was attributed the human action "wink", it had human flexibility. Quale has much to do with the foundation of human knowledge and language use [3]. It can be understood by means of analyzing knowledge expression [4]. Personification also has a phenomenon which is similar to quale. It can be presented in the form of quale-sense. This article draws on data of personification from literary works, textbooks or other scholars' works and takes the Ternary Perspective of Sign Representation (TPSR) in semiotics put forward by Peirce [5] as its theoretical foundation. It focuses on verbal personification, approaches it and its quale-sense from the perspective of quale explanation and seeks to acquire new findings about the study of personification. Literature Review The study of personification has a long and rich tradition which can date back to Erasmus and Quintilian [6]. Personification and allegory are closely related in art and rhetoric studies. Scholars and art historians even use the term "personification allegory" to describe the procedure and result of creating allegory through personification. Melion and Ramakers [7] hold the view, "talking about personification means talking about allegory." According to Whitman [8], personification, or prosopopoeia, refers to the practice of giving a consciously fictional personality to an abstraction and impersonate it. Paxson [9] holds the view that it is only in recent critical and literary theory that personification, or prosopopoeia, has drawn serious attention. It is a readily spotted figure, through which a human identity or "face" is given to something and was automatically equated with allegory for years. Within cognitive studies, Lakoff and Johson [10] regard personification as one of the most obvious ontological metaphors in which a physical object is specified as being a person or something nonhuman is seen as human. 
Along this line, Zhang [2] approaches personification from the perspectives of the Conceptual Metaphor Theory and the Event-domain Cognitive Model. Dorst [6] combines the Metaphor Identification Procedure and Steen's five-step procedure to construct an integral model investigating the different linguistic forms, conceptual structures and communicative functions of verbal personification. Bocarova [11] joins Conceptual Metaphor Theory and findings on the neurocorrelates of aesthetic response together to account for the application of personification in art and literature. Piata and Cánovas [12] focus on time personification in poetic discourse and show that time personification is grounded in Abstract Cause Personification template, in which the cause of an event is mapped onto an agent that performs an action that results in the same event. Liao [1] proposes the Doublet-Structure-of-Consciousness Model (DSCM) to account for the production of semantic rhetoric (including personification) under the Consciousness Theory of the philosophy of mind. The above-mentioned researches, which have made a lot of contributions to personification studies, vary greatly in nature as they explain personification from different perspectives. Up to now hardly any scholars have ever approached verbal personification from the perspective of quale explanation, nor have they ever paid attention to the quale-sense hidden behind verbal personification. This article approaches verbal personification and its quale-sense from the perspective of quale explanation, aiming to construct quale explanation framework of verbal personification by combining related theories in semiotics, the philosophy of mind and cognitive grammar to explain the use mechanism of verbal personification. Quale Quale, whose plural form is qualia, has been one of the most hotly discussed topics in the philosophy of mind since 1970's. The philosophy of mind focuses on the study of mind, consciousness and their relationship with the body (especially the brain). Peirce was the first philosopher who used terms "quale" and "qualia" in something like its modern sense in 1866. In his opinion, there is a distinctive quale to every combination of sensation. The chief source of the technical use of the term "qualia" is Lewis's discussion in his book Mind and the World Order: Outline of a Theory of Knowledge. Lewis [13] defines quale as a recognizable qualitative character of the given, which may be repeated in different experiences. Quale is directly intuited, given, and purely subjective, and thus it is a sort of universals and must be distinguished from the objective properties of objects in the external world. Objective properties are what people have knowledge of, but people have no knowledge of quale because knowledge always transcends the immediately given. The reasons why Lewis distinguishes qualia from objective properties are that "objective properties are more complex in nature than qualia, and their existence extends beyond the spacious present." [14]. Property dualism put forward by Davidson [15] holds the view that a substance has two kinds of properties: physical property and mental one. The former refers to the property that the substance has in itself. It can be revealed by observations or experiments and can be reduced to itself. The latter refers to the mental feeling that the traits or characters of the substance act on the mind of perceptual subject. It can never be reduced to the substance itself. 
The phenomenon represented by quale in the philosophy of mind is similar to the mental property of the substance. As Stubenburg [16] notes, "to be conscious is to have qualia." According to Lycan [17], quale is an intentional represented property, the property which experience represents the world as having. Charlmers [18] claims that quale is a kind of conscious experience or a phenomenal property. It refers to the "qualitative" or "phenomenal" feature of conscious states of mind or the subjective character of mental phenomena presented in things. Conscious experience or the phenomenal property has a physical basis. That is to say, quale systematically depends on the physical property. Crane [14] points out that quale is neutral as for the question of whether it is intentional or non-intentional. Crane [19] holds the view that quale is a non-intentional conscious mental property. "A non-intentional mental state is one which has no intentional structure." The stronger form of intentionalism holds the view that all mental states have intentional mental properties. The weak form of intentionalism insists that all mental states are intentional, but some have quale, a non-intentional conscious property or a higher-order property of states of mind. Feser [20] holds the view that quale is not a physical property of the brain, but a non-physical property inhering in its physical substance. Li et al. [21] note that in people's conscious experience, the "for-me" idiosyncrasy of quale is the origin of cognition. Under the effect of quale, the subject's cognitive behaviors get their orientations and meanings. Jiang [4] points out that the issue of quale is a special one about how human beings understand conscious activities, whose root manifests that there is unspeakable or unrepresentable experiential content in everyone's different experiences. Quale involves the mental feeling of human beings and may be perceived in mind by subjective raw feeling of sort of "what it is like to be" [3]. Quale and Quale-Sense Given that many, varied and conflicting uses to quale have been put [14], it is feasible to apply quale to language studies [22,23]. Xu and Chen [3] put forward the concept of quale-sense in response to the phenomenon of quale in language use. Quale and quale-sense are homologous. The source of quale-sense is the quale of things. According to the conditioned reflex theory, quale belongs to the category of the first signal system. People's perception of quale reflects the sense of phenomenal characters of things, while quale-sense is the sense of phenomenal characters of things represented by linguistic signal system and is related only to the phenomenal character of statements. Quale is mostly reflected on body experience but quale-sense on conceptual experience. Language attributes various senses to people's perceptual organs under the effect of quale-sense of language. Quale-sense is a mental embodiment of readers to the "quale" of "things" depicted in the language. It is based on acquired knowledge, which is essentially unspeakable subjunctive experience or feelings, thus it leads to different readers having infinite explanations of the same thing [24]. Quale-sense is represented by linguistic symbols and may be perceived in mind by "something is like somebody in some way". Quale Explanation and TPSR Quale explanation has much to do with TPSR in semiotics. 
The perspective, which aims to study the sense reference of linguistic signs, is made up of sign or representamen, object and interpretant. According to the perspective, the use of any linguistic sign must involve the three components. Sign or representamen is something that stands for another thing; object is the external thing that sign or representamen directs at; interpretant is the equivalent that sign or representamen presents in the brain as well as the perception, cognition, explanation or evaluation that is made by sign users on the object that sign directs at. Explanation, the structural model that presents in the brain, accounts for the mental reason of an event. It is a kind of virtual existence that reflects mental activities [25]. If the explanation is quale-oriented, quale explanation is the interpretant [22] [23]. Quale explanation of verbal personification can be briefly described as follows: verbal personification is the sign or representamen, quale explanation the interpretant and quale-sense the object. Quale explanation framework of verbal personification, the theoretical framework of this article, can be illustrated as figure 1. In the above explanation framework, verbal personification, quale explanation and quale-sense have made up of a semantic triangle of verbal personification, in which quale explanation is the medium that connects verbal personification to its quale-sense and quale-sense is the reference sense of verbal personification or the object that it directs at. Verbal personification does not have the ability to direct at something. Instead, linguistic subject has the ability to direct at something and attributes such abilities to linguistic signs. In that case, the study of verbal personification can never ignore linguistic subject, especially his or her mental factors. Linguistic Realization Means of Verbal Personification Linguistic realization plays an important role in the identification of verbal personification [6] and the main linguistic realization means of verbal personification depend heavily on the use of verbs, nouns, pronouns, adjectives and adverbs that are suitable for human beings only to describe something, for instance: (3) The ancient wilderness dreamed, stretched itself all open to the sun, and seemed to sigh with immeasurable content. [26] (4) I ran across a dim photograph of him the other day, going through some old things. He's been dead twenty-five years. His name was Rex … and he was a bull-terrier. (James Thurber) (5) The handsome houses on the street to the college were not fully awake, but they looked very friendly. (Lionel Trilling) (6) A tree whose hungry mouth is prest against the earth's sweet flowing breast. (Joyce Kilmer, Trees) (7) The clock on the wall ticked loudly and lazily, as if it had time to spare. [27] Verbs such as "dream", "stretch" and "sigh" are usually used to describe human beings, but in (3), they were used to describe the ancient wilderness, thus the ancient wilderness was personified and the personification in this example is realized by the use of verbs that are suitable for human beings only to describe the wilderness. In (4), Rex, the bull-terrier, was personified. The personification in this example is realized by the use of the pronouns that are suitable for human beings only to describe the bull-terrier, such as "him", "he" and "his". In (5), the handsome houses were personified by the use of adjectives that are usually used to describe human beings, such as "awake" and "friendly". 
Example (6) is a verse from Joyce Kilmer's poem Trees. In the example, the tree is personified by the use of both nouns and adjectives that are suitable for human beings only because it can feel thirsty just like a human being and it has a mouth that is pressed against the earth's sweet flowing breast. The earth is also personified because it has a sweet flowing breast. In (7), the author uses the adverbs "loudly" and "lazily" that are suitable for human being only to describe how the clock on the wall ticked, so that the personification in this example is realized by the use of adverbs and the clock on the wall had human characters and states. Semantic Features of Verbal Personification Verbal personification has two main semantic features. One of them is semantic deviation, which is also known as semantic divergence. It stands for the phenomenon that the semantic expression of a sentence is different from that of an ordinary one because semantic selection restrictions are violated in language use. As verbal personifications are realized by the use of nouns, verbs, pronouns, adjectives and adverbs, their presence will not be established until underlying conceptual structures are analyzed. At the linguistic level, the tension between human beings and non-human objects plays a decisive role [6]. An instance of semantic deviation can be attested in (1), in which the personification is realized by the use of the noun phrase "the anger of the tempest". "Anger" is a strong feeling of displeasure and hostility. Usually human beings or something animate can express anger, while the inanimate tempest or the violent storm can never express anger. In that case, semantic deviation occurs between "anger" and "the tempest". The tempest was personified for the sake of description. The other semantic feature of verbal personification is frame-shifting. According to Petruck [28], "a frame is any system of concepts related in such a way that to understand any one concept it is necessary to understand the whole system; introducing any one concept results in all of them becoming available." Coulson [29] holds the view that frame-shifting "is semantic reorganization that occurs when incoming information is inconsistent with an initial interpretation". According to double-scope network in conceptual integration theory put forward by Fauconnier and Turner [30], frame-shifting plays an important role in the semantic construction of verbal personification. The two input spaces are organized by different frames, but some topology is projected from both input spaces into the blended space, which produces emergent structure of its own and finally a richer and more specific structure is produced at the end of the integration. Now consider frame-shifting in the semantic construction of the personification in (2). In this network, the frame of the window is in input 1, in which elements may include: short distance to "me", being very special to "me", and so on. And in input 2 is the frame of "me". Elements in input 2 may include: being close to the window, having a liking for the window for some reason, and so on. The corresponding elements of the two input spaces bear partial mapping relationship and some elements of the two input spaces are projected to the generic space. Projection is an integration of elements from two or more mental spaces. On this basis the elements and an abstract structure in the generic space are produced. 
Elements in the generic space include the window, "me" and the intimate affection between "us". The abstract structure in the generic space is the existence of intimate affections between the window and "me". Some elements in input spaces and the generic space are discarded while others are projected into the blended space. Meanwhile, frames of both input spaces activate the basic knowledge: inanimate objects can also have human affections or actions. In order to make the window prominent and express its special relationship with "me", the frame of the window was semantically reorganized and the window was endowed with the human action "wink", so a new or emergent structure comes into being in the blended space: the window winked at me. Image Construction of Verbal Personification According to Xu and Chen [3], information in linguistic expressions can be divided into the following three kinds: literal meaning, implicature, and quale-sense. Mental-physical supervenience is a key dimension for the division of the information in linguistic expressions. Supervenience in philosophy refers to a relation used to describe cases in which a system's upper-level properties are determined by its lower-level ones. Mental-physical supervenience holds the view that "every mental phenomenon must be grounded in, or anchored to, some underlying physical base (presumably a neural state). This means that mental states can occur only in systems that can have physical properties; namely physical systems." [31] When linguistic subject's mental feeling is basically determined by physical event, what language expresses is literal meaning; when linguistic subject's mental feeling can get away from the reliance of physical event to a certain degree and acquire some free will, the linguistic expression can get away from literal meaning to a certain degree and extend towards quale-sense. Implicature is something that lies between literal meaning and quale-sense. Quale-sense, which relies on qualitative attributes of the object depicted in language, is neither the literal meaning of the sentence nor implicature as it is non-deducible. What verbal personification expresses is quale-sense. One basic assumption of the philosophy of mind and language study is that what language represents is mental representation [32]. According to dual coding theory put forward by Paivio [33], representation is the means by which information presents in the brain. When people process information in the external world, relevant information is represented in the brain. As for the same thing, its processing varies accordingly if it is represented by different means. As dual coding theory is a theory about symbolic systems and has a hierarchical conceptual structure, the general level divides into verbal and nonverbal symbolic subsystems while the lowest level consists of the representational units of each system called logogens and imagens. Language system is peculiar because "it deals directly with linguistic input and output (in the form of speech or writing) while at the same time serving a symbolic function with respect to nonverbal objects, events and behaviors." Xu [34] discusses the forming of initial shape of the sentence representation from the viewpoint of the emerging of consciousness. 
The emerging of sentence representation into the initial shape comes from the emergence of the primary consciousness into reflective consciousness and the forming process of initial shape of the sentence representation is called one of "cutting" the event into the usage event. Event, which includes any social or natural event, lays the foundation of sentence representation, while usage event, a symbolic expression assembled in a particular set of circumstances for a particular purpose [35], is what sentence represents. According to Xu [34], usage event may include the following two stages: pre-language usage event and language usage event. The former refers to the linguistic form of mental representation, which is actually a mental language and presents in the linguistic subject's mind in the form of image, while the latter is the sentence representation. Quale is one of the properties of an event and the event is the "presentation of the given. It is one of the recognizable qualitative characters of the given" [13]. Usually, quale explanation is founded on mental images, while quale-sense is based on special event(s) and verbal personification is a usage event, which is suitable to be used to present the event in this particular situation. The reason why linguistic expression can attribute different feelings to the sense organs of human beings lies in the effect of quale-sense of linguistic expression. The study of quale-sense of linguistic expression cannot be separated from the study of image because quale-sense of linguistic expression is elicited and presented by specific image [36]. Vocabulary is the carrier of image. Image refers to the fact that vocabulary is likely to arouse certain degree of mental representation. It used to be about mental representation in cognitive psychology. It describes "the occurrence of a perceptual sensation in the absence of the corresponding perceptual input" [35]. Shepard [37] and Kosslyn [38] have pointed out that image is a valid object of rigorous empirical inquiry as status. It can be sensory, visual or auditory, etc. For instance, when people close their eyes, they can evoke a kind of visual sensation by imagining a scene. "Words evoke concepts and concepts in turn designate referents in the projected text world" [6]. Concepts store in the mind in the form of image. Words that are easy to arouse mental representation are called high image words while those that are hard to arouse mental representation are called low image words. High image words usually evoke specific concepts while low image words express abstract concepts [36]. As quale-sense is embodied in linguistic expression, verbal personification lies heavily on image expression, which can be either high or low. Generally speaking, the image of verbal personification presented in the mind contains at least two concepts. One concept is a non-human thing and the other one(s) may be the human action, characteristic or property. The concepts are semantically deviated because of their collocation, but it is just a phenomenon. The truth is that something has been attributed the human action, characteristic or property for the sake of description. Now consider the case in (3). One of the key concepts in this example is the inanimate ancient wilderness. Other concepts in the example, including "dream", "stretch open" and "sigh with content", usually require a human agent. Semantic deviation occurs between concepts in the image. 
In order to describe how the ancient wilderness came to life, human actions such as "dream", "stretch open" and "sigh with content" were attributed to it and (3) is a verbal personification. For purposes of thought or expression, image and its derivatives in a third person manner describe people's ability to construe a conceived situation in alternative ways by making use of alternative images [35]. Image construction refers to the structuration of the conceived scene. It is a critical conscious activity and the key point in the forming of a linguistic expression is the forming of mental image [39]. Syntax represents semantics while images symbolize semantics. A syntactic structure, which is closely related to semantics, is the structure of an event. Syntax is event-based and the linguistic expression is image-driven. The use of language is a process of mind from event to usage event, which starts from the linguistic subject's perception of the event(s) in the external world to form primary consciousness. Then it develops into reflective consciousness under the impact of mental-physical supervenience, so perceptual thinking, one of the intuitive thinking patterns, is the basis of image construction. Research on the relationship between image construction and the emergence of the linguistic expression means research on the interface between semantics and syntax [39]. Image construction varies with respect to a lot of parameters. Such variations are referred to as focal adjustments by Langacker [35]. They are focal adjustments of selection, perspective and abstraction respectively. Focal adjustments of selection refer to the process of sifting and extracting information when describing a scene. In the process different cognitive subjects deal with facets of the scene in different ways and that usually leads to different images. Focal adjustments of perspective relate to the positions from which the scene is viewed, and can be understood in terms of figure/ground alignment, viewpoint or subjectivity/objectivity. Focal adjustments of abstraction pertain to the level of specificity at which the situation is portrayed. Different selections, perspectives and abstractions of the cognitive subject lead to different images. The meaning of the linguistic expression, which is not equal to its truth value, is the result of subjective conceptualization. Image construction of verbal personification does not happen at once and involves focal adjustments of selection, perspective and abstraction as well. Selections of different concepts of verbal personification are made on the basis of physical-mental supervenience, which advocates that language use can neither ignore the influence of mental events nor can be fully determined by physical events. Free will plays a role in image construction of verbal personification. Consider the instance in (4). From the example we know that Rex, a bull-terrier, was selected by the author's free will and described as a person. Focal adjustments of perspective have manifested that the whole example is the ground while pronouns "he", "him" and "his" are the figure. In terms of focal adjustments of abstraction, pronouns such as "he", "him" and "his" are used instead of "it", "it" and "its", making the intimate relationship between Rex and the author more prominent, so in the author's mental image a complex conceptual structure about Rex was established. Image construction is a key point in the forming of verbal personification. 
Verbal Personification and Its Quale-Sense Approached from Quale Explanation In quale explanation framework of verbal personification, quale explanation, the interpretant, is actually founded on mental image(s), resulting from the linguistic subject's mental activity. In the subject's mind, a phenomenal quality of something can never be reduced to its physical property. Through association of similarity, a human action, trait or property is attributed to something. Now consider the case in (5), in which "awake" and "friendly" are adjectives. When "awake" is the predicative of a sentence, its agent is usually somebody, who is not asleep, especially immediately before or after sleeping. The same is true for "friendly". When "friendly" is the predicative of a sentence, its agent is usually someone, who behaves in a kind and pleasant way or acts like a friend. It is obvious that the author was deeply impressed by the handsome houses on the street to the college. Through association of similarity human qualities such as being awake and friendly were attributed to the handsome houses. In the author's mind, the handsome houses were like human beings in their state or manner. Verbal personification is connected to its quale-sense by means of quale explanation. Quale-sense is the manifestation of quale of things depicted in language. It is the linguistic subject's ineffable mental feeling of a conscious object represented by language. It is also the feeling of symbol concepts presented in people's mind. The quale-sense of verbal personification is based on event(s), while verbal personification is a usage event. According to the viewpoint of the emerging of consciousness [34], producing of verbal personification needs to "cut" event into usage event. Quale-sense is the object that the linguistic subject uses verbal personification to direct at, which is also the event that verbal personification relies on. Quale explanation can be regarded as a mental image, and verbal personification is the linguistic usage event. The production process of verbal personification can be described as follows: quale-sense → quale explanation →verbal personification. Quale-sense is the foundation of the use of verbal personification as well as the object that linguistic subject wants to direct at by means of verbal personification. In linguistic subject's mind an image emerges that something is like a human being in that they have the same human action, quality or character. After the image is modified by linguistic symbols, verbal personification comes into being. Now consider the version of verbal personification in (6). The quale-sense of the example is based on the event that a tree has roots which need water to grow and are pressed against the earth to suck up moisture from the soil. Through association of similarity, the image emerges in the poet's mind that the tree is like a human being in that they both have mouths. Another image also emerges that the earth is like a mother in that they both have sweet flowing breasts. After such images are fossilized into linguistic symbols, the linguistic usage event or verbal personification has come into being: A tree whose hungry mouth is prest against the earth's sweet flowing breast. The comprehension process of verbal personification can be described as the following: verbal personification→ quale explanation → quale-sense. 
When recipients perceive a statement to be a verbal personification, images that something is like a human being in some way emerge in their mind by means of association of similarity. Through quale explanation hearers or readers understand the quale-sense or the object of the verbal personification. Now consider the data in (7). When recipients read or hear the statement and perceive it to be a verbal personification, the image that the way the clock on the wall ticked was like the working manner of a lazy worker in a factory emerges in their mind by means of association of similarity. Through quale explanation, the frame of the clock on the wall was shifted to that of the worker in the factory and recipients finally understand the quale-sense of the verbal personification: the clock on the wall ticked loudly and lazily, just like the worker in the factory talked loudly but worked slowly, as if they had time to spare. Conclusion Personification has attracted the attention of scholars worldwide. Verbal personification is the focal point of study of this article, which takes TPSR in semiotics as its theoretical foundation and approaches verbal personification and its quale-sense from the standpoint of quale explanation. It is found that quale explanation framework of verbal personification comprehensively explains the use mechanism of verbal personification. While explaining how verbal personification is produced and comprehended, enough consideration has been shown for the mental factors of linguistic symbol users, thus the horizon of personification study has been widened. Our analysis does not conflict how personification has been treated in traditional rhetoric, literature, art or cognitive linguistics. Although there seems to have been dispute about quale and even whether there is such a thing as quale at all [14], what is certain is that people can never ignore virtual existence in language studies [25]. Quale explanation, a kind of virtual existence, is a new approach to studies of figurative languages and their quale-sense.
2019-05-12T14:23:32.317Z
2018-08-27T00:00:00.000
{ "year": 2018, "sha1": "a18387a202ce92ff90d00627799d7d83cc8fa171", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.cls.20180403.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7d837ec02b9ba78c021af6aa9fbd37e9b056b8a2", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
239049589
pes2o/s2orc
v3-fos-license
Heritability in Morphological Robot Evolution In the field of evolutionary robotics, choosing the correct encoding is very complicated, especially when robots evolve both behaviours and morphologies at the same time. With the objective of improving our understanding of the mapping process from encodings to functional robots, we introduce the biological notion of heritability, which captures the amount of phenotypic variation caused by genotypic variation. In our analysis we measure the heritability on the first generation of robots evolved from two different encodings, a direct encoding and an indirect encoding. In addition we investigate the interplay between heritability and phenotypic diversity through the course of an entire evolutionary process. In particular, we investigate how direct and indirect genotypes can exhibit preferences for exploration or exploitation throughout the course of evolution. We observe how an exploration or exploitation tradeoff can be more easily understood by examining patterns in heritability and phenotypic diversity. In conclusion, we show how heritability can be a useful tool to better understand the relationship between genotypes and phenotypes, especially helpful when designing more complicated systems where complex individuals and environments can adapt and influence each other. I. INTRODUCTION Evolutionary robotics (ER) employs several elements of biological evolution to obtain creative and novel solutions to practical problems. A key concept in evolutionary biology that could help in designing evolutionary robotic systems and that so far has remained largely unexplored in evolutionary robotics is the notion of heritability [7,23]. Heritability is one of the factors determining the ability of traits to evolve. A common use of heritability is in animal and plant breeding, where the response to selection can be predicted as the product of heritability and the selection differential [15]. Heritability denotes the proportion of additive genetic variance relative to the total phenotypic variance, and can vary between zero and unity. Hence it indicates the degree to which the trait is responsive to selection. Low heritabilities are often associated with a more diffuse genotype-phenotype map, possibly resulting from a strong influence of the environment on the phenotype or interaction among genes in their expression (epistasis). Because low heritability compromises the evolutionary response, the notion of heritability can be a useful addition to evolutionary robotics as an a priori evaluation of the evolutionary potential of a system. One important design choice that has a major impact but is often overlooked when designing an ER system is the choice of encoding. There are several encodings that have been used in the literature available to the designer, and many more if one includes all variations. Despite so, our understanding on how the choice on a particular encoding can influence the evolutionary process is still very superficial. The main goal of this paper is to investigate the applicability of the notion of heritability in an evolutionary robotics system to better understand the relationship between different encodings and the generated phenotypes. Heritability can be defined for any given phenotypic trait of the robots, either related to the robot's morphology, (i.e. size), or the robot's behavior (i.e. speed). 
Then for any pair of parent robots, we can determine the average value of the given trait and compare it with the value of this trait in the child robot. It is important to note that while the traits we consider are phenotypic, the mechanisms that transfer them to the offspring depend on the genotypes, specifically the genetic encoding that specifies that trait and the recombination operator that shuffles and combines the parental genotypes into a new one that represents the child. In principle, this means that based on observable phenotypic properties we can get information regarding genotypic processes "under the hood". The main contribution is the adoption of the concept of heritability in Evolutionary Robotics and the demonstration of its utility. There are three important aspects we investigate:
• Whether heritability can be used as a predictor of the evolutionary response of a system, specifically whether it is related to the rate of evolutionary change.
• How heritability and phenotypic diversity interact through the course of an entire evolutionary process.
• How direct and indirect genotypes can exhibit preferences for exploration or exploitation throughout the course of evolution.

A. The Robots
The robots evolved in our system are built within a modular robotic framework based on RoboGen [1]. Each robot is composed of three different types of modules: one Core module (Figure 1a), an arbitrary number of Brick modules (Figure 1b), and an arbitrary number of Joint modules (Figure 1c). The Core module is unique for each robot and represents the robot "head" that, in the original physical incarnation [11], contains the main logic board and the battery. The Core module has four connection points where other modules can be attached. Brick modules represent the "backbone" of the robot. Only through Brick modules can the robot take on arbitrary shapes. Actuation can only be achieved through the Joint modules, thus Joint modules are the only modules capable of changing the state of the robot in the environment. Joint and Brick modules can be attached to any other module in two different ways, which differ from each other by 90° about the axis perpendicular to the attachment plane. In [4] we already introduced this rotational attachment, but it applied only to Joint modules. Allowing the Joint to be attached rotated lets the robot evolve morphologies with more variety in their actuated degrees of freedom. In this work, we also introduce rotational attachment for Brick modules, which potentially allows robots that also extend vertically against gravity. In general, the design allows the inclusion of sensors, but for this study we do not use any.

B. Robot Brains
The controller of the robots is based on Central Pattern Generators (CPGs) after [12,13]. Every joint in the body has a corresponding CPG node that consists of three neurons. Two of these neurons (that we call the x and y neurons) are coupled by two directed connections, one from x to y and one from y to x. By definition, the weights of these connections have the same value but the opposite sign. The remaining neuron in a CPG node provides the output signal to the servo motor driving the given joint. The corresponding weight is set at 1.0 in each joint, thus a CPG node can be configured by just one parameter regulating the connections between x and y. The overall controller architecture is a network with one CPG node for each joint and a connection between two such nodes if the corresponding joints are neighbors separated by no more than two empty cells (in the Manhattan sense) in the 3D Euclidean grid enclosing the robot. Connected neighbor CPG nodes can synchronize the oscillations of their joints and induce global locomotion patterns.
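As a rough sketch of how one such CPG node could be implemented, the snippet below couples the x and y neurons with weights of equal magnitude and opposite sign and forwards x to the servo with a fixed weight of 1.0; the explicit Euler update rule, the time step, and the interpretation of the neighbourhood test are assumptions made for illustration only.

```python
import math

class CPGNode:
    """One CPG node: neurons x and y coupled with weights +w and -w, plus an
    output neuron forwarding the oscillation to the joint's servo (weight 1.0)."""

    def __init__(self, w, x0=1.0, y0=0.0):
        self.w = w                     # the single configurable parameter of the node
        self.x, self.y = x0, y0

    def step(self, dt=0.01):
        # Assumed update rule: Euler integration of the antisymmetric coupling,
        # which makes the (x, y) pair oscillate; only illustrative.
        dx = self.w * self.y * dt
        dy = -self.w * self.x * dt
        self.x += dx
        self.y += dy
        return 1.0 * self.x            # output weight fixed at 1.0

def are_connected(joint_a, joint_b):
    """Assumed neighbourhood test: two CPG nodes are connected if their joints'
    grid cells are separated by no more than two empty cells, interpreted here
    as a Manhattan distance of at most 3."""
    return sum(abs(a - b) for a, b in zip(joint_a, joint_b)) <= 3

# Example: one node driving a joint for 100 control steps.
node = CPGNode(w=2.0 * math.pi)        # hypothetical coupling weight
signal = [node.step() for _ in range(100)]
```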
The number of configurable parameters for a robot brain is thus j + c, where j is the number of joints and c is the number of connections between joints; in Figure 2 we show an example of how the nodes would be connected in a robot made of 8 joints configured as in the "spider" robot from [11].

C. Evolution with tree-based representation
For a direct encoding, we use a tree-based representation whose implementation is very similar to [9,10]. In a tree-based representation, the genotype is a tree data structure in which each node represents a module of the robot. Modules differentiate into three types: Core, Brick, and Joint modules, as represented in Figure 1. The Core module is always the root of the tree and it can only be present once in the entire genotype. It can have four children. The Brick module is attached on one side to its parent block and has three remaining slots available for child nodes. The Joint module has only one remaining slot for a child module, therefore it does not allow any branching. A Joint has three extra parameters that directly encode the oscillator parameters: frequency, offset, and amplitude. In our tree-based representation, the brain development is limited to decoding parameters for the oscillators of the CPG network; i.e. none of the connections between oscillating nodes are activated. The robot module tree can be altered by one of the following mutation operators. The changes primarily revolve around changing the body: adding a random module, deleting a sub-tree, duplicating a sub-tree, or swapping a sub-tree. Alternatively, brain mutations are facilitated by mutating the joint oscillator parameters to achieve different activation patterns. Parent robot trees can be recombined by inheriting sub-trees from the parents. Some checks and balances ensure that the recombined trees are valid and do not exceed the maximum limit of modules.

D. Evolution with L-system representation
For an indirect encoding, we choose a system from our previous work [4], which is composed of a Lindenmayer system (L-system) [14] that describes body and brain structure and a component based on Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) [22,6] that encodes the weights of the CPG network. These indirect encodings are capable of creating symmetrical growth structures and repetitions in the robot bodies. L-systems are parallel rewriting systems acting on a formal grammar. The grammar is defined as a tuple G = (V, w, R), where V is the Alphabet, w is the Axiom and R is a set of Replacement Rules. L-systems start from the Axiom w, which is a sequence of symbols from the Alphabet. To develop an L-system grammar, the Axiom is expanded into a longer sentence by replacing symbols using the Replacement Rules in R. The replacement operation can be repeated multiple times on the sentence. In this work, we adapted a system from [17], where each genotype is a grammar with the same Axiom and Alphabet for all robots. The Alphabet is made of the following symbols:
• Robot modules: the Core, the Brick, a Vertical Joint, and a Horizontal Joint.
• Mounting commands: add left, add front, and add right. Mounting commands must be followed by a module symbol, otherwise they are ignored. When the final sentence is read, their role is to attach the following module symbol in the sentence to the module indicated by the cursor position, at a new position (left, front, or right) depending on the specific command.
• Moving commands: move back, move right, move front, and move left. The moving commands alter the position of the cursor.

The Replacement Rules of our L-system are a set of rules that replace any of the robot module symbols with a sequence of new symbols from the Alphabet. In other words, robot module symbols are both terminal and non-terminal symbols in our L-system. Any other symbol is terminal, which means it cannot be replaced further. The Axiom of our L-system is a sentence made of a single symbol: the Core block. Once the L-system grammar develops the Axiom into a sentence, the sentence is used as a sequence of instructions that describe how to build the robot. The system we use here differs from the one that inspired it by an additional constraint on the morphology, i.e. we do not allow a joint to be attached to another joint. In previous experiments [4], this extra constraint was shown to increase the chances for more complex robots to appear, and we decided to use it here to increase the chances for interesting morphologies in both experimental configurations. We also improved the Alphabet with the introduction of a new rotated Brick module, which allows the morphologies to develop in three dimensions. The brain structure is defined by the body; each joint creates a corresponding CPG oscillator node and connections are made using the rules already explained in subsection II-B. When all connections are defined, each CPG node is positioned in a substrate space with x, y, z, w coordinates and each connection weight value is queried from a CPPN [21], as defined by the HyperNEAT algorithm. In the substrate space, x, y, and z determine the position of the joint corresponding to the CPG node, while the w axis distinguishes the front and back neurons in an oscillating CPG node. The mutation and crossover operators are defined by their individual components: for the L-system component, we use the operators defined in [16]. For the HyperNEAT component, we use the operators as defined by HyperNEAT, with the exclusion of speciation, i.e. genomes are not divided into species and crossover is possible for each pair of genomes in the population.

A. Heritability
In an evolutionary system the phenotypic variation (V_P) of a population of individuals is an expression of genetic variation (V_G) and environmental factors (E). The genetic variation can be further subdivided into three major components: additive genetic variation (V_A), non-additive genetic variation caused by epistatic genes (V_NA), and the effects of random mutations (M). As defined in [23,7], heritability measures the contribution of genes to phenotypic traits. Each phenotypic trait has a different value of heritability. Heritability can be defined as broad-sense heritability (H²) or narrow-sense heritability (h²). Broad-sense heritability is the proportion of phenotypic variation that is created by the genetic variation. Narrow-sense heritability is only the proportion of phenotypic variation that is generated by additive genetic values, not including any effect of dominance or epistasis. Heritability is also an important component of the "response to selection" (R), a value that can be predicted as the product of narrow-sense heritability and the selection differential (S) [15]: R = h²S.

The value of heritability can be calculated from its theoretical formula, but this requires a deep mathematical understanding of our genotype model. However, if we are only interested in the additive genetic material, an estimate of narrow-sense heritability can easily be derived from population measurements by linearly regressing the trait value of the offspring against the average trait value of the parents. An approximation by a linear model is possible because the additive genetic code has a linear effect on the resulting phenotype, in contrast to epistatic genetic code, which has a much more unpredictable effect on the phenotype. The value of the slope of the linear regression is our numerical estimate of heritability. The value of heritability can vary between 1.0 and 0.0, where h² = 1.0 corresponds to a 45° linear regression, representing a perfect match between the parents' average trait and the offspring's trait. A value of h² = 0.0 instead represents a scenario where the offspring's trait is completely unpredictable given the parents' traits.
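In code, this estimate reduces to a single least-squares fit of offspring trait values against mid-parent values; the sketch below is illustrative only, and the variable names are hypothetical.

```python
import numpy as np

def narrow_sense_heritability(parent_a, parent_b, offspring):
    """Estimate h^2 as the slope of the least-squares regression of offspring
    trait values on mid-parent values (average of the two parents), with one
    data point per child produced in the generation."""
    mid_parent = (np.asarray(parent_a, float) + np.asarray(parent_b, float)) / 2.0
    offspring = np.asarray(offspring, float)
    slope, _intercept = np.polyfit(mid_parent, offspring, 1)
    return slope

# Hypothetical call for one trait (e.g. speed), with one entry per child:
# h2_speed = narrow_sense_heritability(speed_parent_a, speed_parent_b, speed_child)
```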
In biological systems, heritability can isolate those phenotypic features that are an expression of genetic material from features that are influenced by the environment. Estimated heritability for life-history traits and behaviour is typically low to medium (ranging up to 0.30 [5]), whereas morphological traits are often found to have higher heritability (average h² = 0.46 [20]). In evolutionary robotics, the influence of the environment on the development of the individuals is usually very limited, excluding a few exceptions [2,18]. In our case the environment is a flat terrain, therefore its influence is completely absent. The implication is that the phenotypic variation in this system is only an expression of genotypic variation. Through linear regression we can estimate the narrow-sense heritability, which is only an expression of additive genetic variation. The rest of the phenotypic variation can only be an expression of epistatic gene interaction and mutation, V_P = V_A + V_NA + M (Eq. 2).

B. Robot Traits
To estimate an overall heritability of our system, we chose a wide variety of phenotypic traits that are representative of different aspects of our robots. This work is interested in the overall evolution of modular robots, but the approach is not limited to any particular number or type of traits. The same study can be repeated on any trait; e.g. it would be interesting to study the heritability of "the number of feet of a robot" and what parameters increase the transmission of this trait to the offspring. We recorded a set of many traits derived from the descriptors found in [18]. From the many traits available, we sampled only a significant few that we found to be orthogonal to each other in previous work [3]: some traits that measure the morphological aspect of the robots and some that measure the behavioural aspect. The morphological traits give us insight into how the robot shapes evolve. In this work we used:
• Proportion: considering the 2D bounding box that encompasses the robot when viewed from above, this trait is the ratio between the two sides of this rectangle.
• Size: the number of modules in the body.
• Number of Limbs: considering the robot as a tree of modules, this is the number of leaf modules. The value is normalized per robot by the number of all possible limbs available.
• Coverage: considering the 3D bounding box that encompasses the robot, this trait is the ratio between the area occupied by modules and the total area of the bounding box.

The behavioural traits are very important because they give insight into the complex relationship of body and brain.
In this work we used:
• Speed: describes the average robot speed (cm/s), calculated as if the robot took the shortest path from its starting position to its final position.
• Balance: we use the rotation of the head in the x-y plane to define the balance of the robot. We describe the rotation of the robot with three dimensions: roll φ, pitch θ, and yaw ψ. Thus, we consider the pitch and roll of the robot head, expressed between 0° and 180° (because we are not interested in whether the rotation is clockwise or anticlockwise). Perfect balance corresponds to θ = φ = 0°, so that the higher the balance, the less rotated the head is. Formally, balance is defined by Eq. 6.

IV. SETUP
For both encodings we use the same evolutionary algorithm with a generational population update scheme, that is, an evolutionary algorithm where consecutive populations are non-overlapping. This means that survivor selection is trivial: no members of population P_n survive, and the subsequent generation P_{n+1} consists of offspring of the current one. As for parent selection, we use the tournament selection mechanism with a tournament size of two individuals. This represents a low selection pressure. We run this algorithm with a population size of 100 individuals for 50 generations, amounting to a total of 5000 evaluations as the computational budget for optimizing the robots' makeup. For both encodings, fitness evaluations are done by placing the given robot on a flat surface and running it for 30 seconds. We evolved the robots for movement, using the speed behavioral trait as the value for fitness. We adjusted the mutation rates for the evolutionary run to be quite high, with a probability of 0.59 of having at least one body mutation for the tree-based representation, and we used the same probability (0.59) for the mutation chance on the L-system grammar.
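The overall shape of this loop can be sketched as follows, with placeholder functions (random_robot, evaluate_speed, recombine, mutate) standing in for the encoding-specific operators; the sketch only mirrors the scheme described above and is not the actual implementation.

```python
import random

POP_SIZE, GENERATIONS, TOURNAMENT_SIZE = 100, 50, 2

def tournament(population, fitnesses, k=TOURNAMENT_SIZE):
    """Binary tournament: sample k individuals at random and return the
    fittest of them; with k = 2 this is a low selection pressure."""
    picked = random.sample(range(len(population)), k)
    return population[max(picked, key=lambda i: fitnesses[i])]

def evolve(random_robot, evaluate_speed, recombine, mutate):
    """Generational scheme with non-overlapping populations: the offspring
    of generation n completely replace it to form generation n + 1."""
    population = [random_robot() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # 30 s simulation on flat terrain; the measured speed is the fitness.
        fitnesses = [evaluate_speed(robot) for robot in population]
        population = [
            mutate(recombine(tournament(population, fitnesses),
                             tournament(population, fitnesses)))
            for _ in range(POP_SIZE)
        ]
    return population
```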
V. RESULTS ANALYSIS
Our first analysis aims at measuring the heritability of various traits in the two encoding schemes at the start of the evolution experiment. To do so, for each encoding/trait pair, we measure heritability using data from all evolutionary runs, but only on the very first generation. Heritability is measured by comparing the trait value of an offspring against the average of the parents, therefore we need the offspring from the second generation as well. A linear regression is applied to the trait values of parents against offspring, and the slope of the resulting linear model is our estimate of heritability. The estimated values from our measurements are reported in Table I. The scatter plots used for estimating heritability can be seen in Figures 3a, 3b, 4a, 4b and Figure 5.

(Figure 3a: heritability of the tree-based encoding for the speed trait, H² = 0.74; Figure 3b: heritability of the L-System encoding for the speed trait, H² = 0.35. In both panels the blue line is the fitted linear regression, whose slope is the heritability estimate, and the red line is a perfect-heritability reference; the tree-based points are more concentrated and show the higher heritability, while the more scattered L-System points show the lower heritability.)

As we can see, the Tree-based encoding consistently shows higher values of heritability for all traits.

A. Relationship between heritability and initial evolutionary response
We aim at analyzing the relationship between heritability and evolutionary response for the two encoding schemes. To achieve this, we analyze one behavioral and one morphological trait using a common scheme, as used in Figure 3 and Figure 4. Namely, in Panels (a) and (b) of the figures we compare the heritability for the direct vs the indirect encoding, calculated only in the first generation. In Panels (c) and (e) we show the dynamics of the trait being considered: Panel (c) shows the value of the trait over generations, while Panel (e) shows its rate of change (or derivative) over generations. To further understand how heritability can explain the trait dynamics over generations, we use Panel (d) to highlight the evolution over generations of the heritability metric (calculated across two consecutive generations) and Panel (f) to highlight the evolution over generations of the phenotypic diversity of the trait being considered within the population.

We first analyze the most important trait, since this is the one that is under selection: speed. In Figure 3c and Figure 3e, we observe that in the first 10 generations of the evolutionary process the Tree-based representation has a higher rate of change in fitness compared to the L-System. This finding is further confirmed by looking at the fitness distribution at Generation 0 for all runs and for the two encodings, shown in Figure 6. Here we see that the L-System even starts with an advantage, represented by the much higher fitness diversity in the initial population and the corresponding presence of higher-fitness individuals. Despite this, the L-System experiment evolves initially at a slower rate than the Tree-based experiment (as in Figure 3e), which means that those high-fitness individuals present in the populations are not able to pass their phenotype to their offspring to the same extent as happens in the Tree-based representation. We argue that the concept of heritability can shed light on what we observed above. Indeed, the Tree-based encoding has a higher heritability value than the L-System encoding (see Figure 3a and Figure 3b). Heritability can inform us on how much of the phenotypic trait variation will be passed on from parents to offspring, therefore high heritability at the beginning of the evolutionary process can predict a higher rate of change in the trait under selection, as is happening in our system. Thus, higher heritability at the beginning of the evolutionary process directly facilitates the effect of initial selection, because good parents have a higher probability of creating good offspring. Importantly, the relation between initial heritability and the initial rate of change of a trait does not hold only for traits that are specifically under selection. To support this claim, we perform the same analysis for all traits, and we report here only one example. Figure 4e shows the rate of change of a morphological trait that is not under selection (number of limbs). Also here, we observe the same overall pattern: the Tree-based representation has a higher initial rate of change in this trait, consistent with its higher heritability (Table I).

B. Heritability during the later phase of evolution
The analysis of heritability can not only help to describe the behavior of evolutionary systems in their initial phases, but it can also show interesting patterns during later, advanced phases of the process.
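The per-generation quantities discussed in this subsection (heritability computed across two consecutive generations, and the phenotypic diversity of a trait within the population) can be obtained with a small amount of bookkeeping. The following sketch reuses the mid-parent regression from the earlier example, assumes that trait values are logged for parents and offspring of every generation, and uses the standard deviation of the trait as one possible diversity measure; all names are illustrative.

```python
import numpy as np

def track_heritability_and_diversity(generations):
    """`generations` is assumed to be a list of per-generation records, each a
    dict holding the trait values of both parents and of their offspring.
    Returns per-generation heritability estimates and a simple phenotypic
    diversity measure (standard deviation of the trait among the offspring)."""
    h2, diversity = [], []
    for gen in generations:
        mid_parent = (np.asarray(gen["parent_a"], float) +
                      np.asarray(gen["parent_b"], float)) / 2.0
        child = np.asarray(gen["offspring"], float)
        slope, _ = np.polyfit(mid_parent, child, 1)   # heritability across two consecutive generations
        h2.append(float(slope))
        diversity.append(float(np.std(child)))
    return h2, diversity
```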
From the theoretical definition, we would expect the estimated value of heritability to be constant over the course of evolution, under the condition that the selection process does not affect the genetic variation in the population. Surprisingly, by computing the estimated value of heritability for each generation, shown in the top row of Figure 9, we observe changes in the estimated heritability in our experiments. The change in heritability across generations is most evident for traits that are not under selection. Looking at Figure 9, we observe in the first generations a tendency for heritability to decrease for the Tree-based representation and to increase for the L-System representation. In the later stages of evolution, heritability stabilizes for the tree-based representation and becomes highly unstable for the L-system genotype. The change of heritability over generations can be explained if we also analyze how the phenotypic diversity of the population varies over the generations. The diversity in the L-system population converges to zero quite quickly for all traits (bottom row of Figure 9), except the one we select for: speed (Figure 3f). Figure 7 confirms an overall loss of phenotypic diversity in the L-system experiments. An overall decrease in phenotypic diversity in artificial evolutionary systems is often observed when evolution is converging to a solution, and it is caused by a corresponding loss of genotypic diversity. A change in genotypic diversity can also explain the change in heritability we measured. A similar pattern can be observed in the tree-based representation, but the changes in heritability and diversity occur on a smaller scale, over fewer generations and with smaller changes in values. Interestingly, we observed an unexpected overall increase in heritability for the L-system, as the value of heritability started relatively low and increased over generations. We hypothesize that selection is responsible for this effect. Mating selection is probably slowly excluding all individuals that carry highly unpredictable gene sequences. These highly unpredictable gene sequences can cause very poor offspring to be generated from very fit parents and vice versa. Slowly, the selection process would catch these highly unpredictable gene sequences in their low-fitness state and select them out of the next generation. This effect, over many generations, would explain a decrease in diversity and an increase in narrow-sense heritability, as only the predictable gene sequences consistently survive across multiple generations, and predictable gene sequences, by definition, produce high heritability in the population. Another pattern we can observe is that heritability becomes highly unstable in the later stages of evolution for the L-system experiments; this is especially obvious in Figure 9a. The explanation for this effect can be found by looking at the diversity (Figure 9d). A decrease in diversity for a trait is an indicator of a decrease in overall genotypic diversity, which implies that the evolutionary process is no longer exploring the search space and all solutions are very similar. However, an accurate estimate of heritability requires high diversity in the population; otherwise we are fitting a linear regression to a concentrated cloud of points, as shown in Figure 8. By contrast, we deduce that the tree-based experiments are exploring the evolutionary space and having difficulty exploiting the solution space. A sketch of how these per-generation quantities can be computed is given below.
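The per-generation quantities discussed above (the heritability estimated across two consecutive generations and the within-population phenotypic diversity of a trait) can be tracked with a few lines of bookkeeping. The sketch below assumes a hypothetical `history` structure in which each generation records the trait value of every individual and, from the second generation onward, the mid-parent value for each individual; all names are illustrative rather than taken from our implementation.

```python
import numpy as np
from scipy import stats

def per_generation_stats(history):
    """history: list of dicts with keys 'trait' (per-individual trait values)
    and 'parent_mid_trait' (mid-parent value per individual; absent in gen 0)."""
    rows = []
    for g, gen in enumerate(history):
        trait = np.asarray(gen["trait"], dtype=float)
        diversity = float(trait.std())           # phenotypic diversity proxy
        h2 = float("nan")
        if g > 0:                                # heritability needs parent-offspring pairs
            mid = np.asarray(gen["parent_mid_trait"], dtype=float)
            h2 = stats.linregress(mid, trait).slope
        rows.append({"generation": g, "diversity": diversity, "heritability": h2})
    return rows
```

When diversity collapses, the regression is fitted to a concentrated cloud of points and the slope becomes unstable, which matches the behavior visible in the later L-system generations.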
With this knowledge we can estimate that the tree-based representation needs a lower mutation rate and stronger selection pressure to be able to exploit the solution space. This was to be expected, because we chose very high mutation rates and a relaxed selection mechanism. By contrast, the L-system seems to drop diversity as early as generation 20, which is quite early; this is especially noticeable for the traits we do not select for (Figures 9d, 9e, 9f). Our L-system encoding seems to have difficulty exploring even in the early phases of evolution, despite evolutionary parameters that encourage exploration. This suggests that an evolutionary process using our L-system encoding has a hard time escaping the few local optima found at the beginning, and probably needs some additional elements to encourage exploration.
VI. CONCLUDING REMARKS
In this paper we introduced the biological notion of heritability as a novel tool for studying encodings in evolutionary robotics. Heritability captures the correlation between a quantifiable phenotypic trait measured in the parents and the same trait measured in the offspring. In our experiments we show that heritability can be a useful tool in evolutionary robotics to support the genotype design process. We used this novel tool to tackle the bootstrapping problem, because it reveals how exploratory a system is during the initial phases of evolution. We observed how, over the course of evolution, changes in heritability correlate with changes in diversity; i.e., in our tree-based system diversity and heritability seem to stabilize, while in the L-system experiments diversity drops and the estimated heritability increases at first, followed by high instability. We related the different patterns of heritability and diversity to different exploration and exploitation behaviors, and showed how these concepts seem to be intertwined; i.e., we observed a correlation between exploratory behavior and high narrow-sense heritability, and a correlation between exploitative behavior and low heritability, caused by a greater epistasis effect in the encoding. Importantly, this analysis can be performed only within the first few generations (in our case, 50), during the transitory phase of the evolutionary process. Heritability proved to be a helpful tool for evaluating the shape and smoothness of the search space, considering the landscape of both fitness and other phenotypic traits. Tree-based experiments converge to solutions where robots still retain significant morphological diversity, meaning the local optimum found by evolution in the search space is a smooth, wide hill. This was expected for a genetic encoding that is mostly made of additive genes. By contrast, the L-system experiments converged to a single morphological solution with little to no diversity, and by observing the change in heritability and diversity we can determine that solutions are unlikely to explore other peaks, probably because they are either too distant, too narrow, or not good enough compared to the solution found. This is an indication that the L-system is a genetic encoding that contains many epistatic effects, meaning that many genes need to align to create a positive effect on the phenotype. With this knowledge, we will transition to a tree-based direct encoding in future work, where the focus will be on the evolution of morphological traits interacting with other elements, because differences in phenotypic traits will be more tangible.
In other setups, where the interest is in studying complicated gene interactions, including epistatic effects, the L-system is a good candidate. This paper also highlights how the notion of heritability draws attention to major discrepancies between biological and artificial evolutionary systems. Crucially, in biology, computing a value of heritability requires measuring qualities of the traits at the phenotype level. However, phenotypic traits in biology emerge through development and through continuous interaction with the environment. In addition, biological systems have, and retain, high genotypic variation. In contrast, artificial systems are extremely simplified. In particular, in our evolutionary system, individuals develop before they can have any interaction with the environment. Additionally, when deployed in the environment, individuals have no morphological or behavioral adaptation systems at their disposal and no development. Moreover, our environment offers only minimal opportunities for interaction. Artificial systems are also characterized by generally lower genotypic diversity, with a tendency for it to decrease even further over the course of artificial evolution as the population converges to a useful solution. For the future of evolutionary robotics, adding the above-mentioned biological elements would be very interesting, and some efforts have already been made in this direction. In [19,8] we find efforts to introduce learning systems that enable individuals to adapt to their environment during their lifetime. Some work can also be found on how to design a system in which the genotype-phenotype mapping can be influenced by the environment [18]. However, these additions to the evolutionary process are non-trivial: research on these more complicated and realistic systems requires a substantial increase in the computational cost of the evolutionary process. This results in slower iterations and difficulties in the design and parameter-tuning processes, especially if one wanted to study these processes in combination. The development of artificial life through artificial evolution is still in its infancy, and a lot of work in the above directions and beyond remains to be done. Still, both at the current level of system complexity and at that of future systems, in our view measuring heritability will be a useful tool that greatly increases our understanding of the relationship between phenotypes and genotypes.
Computer as media in improving teacher performance and student learning process
Technological development in the era of globalization is accelerating, including in the world of education, and this development makes it easier for teachers and students to carry out the teaching and learning process through computer media. The function of the computer has grown sharply with the birth of internet technology: it is no longer limited to a tool for storing, maintaining, and transferring knowledge, but can become an interactive learning medium through distance learning, teleconferencing, and other means. The distance separating us from the world of knowledge becomes increasingly narrow with the ease of accessing all kinds of information (news, science, and knowledge) in the form of text and images through the internet.
Introduction
Globalization has affected every aspect of life around the world. One aspect influenced by globalization is education. Education is a process of transferring knowledge from those who know to those who do not yet know. This view is supported by Soekanto, who states that education is the transfer of knowledge, norms, and values in a formal or informal way [6]. Furthermore, Hamalik states that education is a process intended to influence learners so that they can adjust themselves as well as possible to their environment [2]. The development of the world of education has a great influence across the world and the countries in it. The world of education is like a river that keeps flowing as long as people live in this world. This view means that education encompasses all life situations that affect the growth of individuals, as learning experiences that happen in every environment and throughout life. The rapid flow of globalization has accelerated the development of technology and information, and this development has had an impact not only on community activities but also on the world of education; the availability of facilities and infrastructure to support education is important in order to help teachers carry out the teaching and learning process. In this way, teachers can perform their duties as professional educators. The use of technology in the Indonesian world of education is widespread, so the use of digital technology in teaching and learning activities is commonly heard of. However, the implementation of educational technology is still haphazard because of various technical constraints. This phenomenon shows a gap between expectations and the abilities of the human resources involved (educators and education personnel), so the government must resolve the problems that arise in order to achieve equitable and harmonious education.
Result and Discussion
Learning and study are activities that always exist in human life; in other words, they cannot be separated from human life. Without study and learning, humans cannot develop their potential to meet the needs of their lives. This is because all aspects of life, including humans themselves, are always changing. Based on this, we need to consider the definitions of learning given by several experts. According to Warsita, learning is an activity to make learners learn, or an activity to educate learners [1]. Meanwhile, Pribadi states that learning is a deliberately designed process intended to bring about learning activities in individuals [4].
Aqib, similarly, defines learning as a systematic effort by teachers to make the learning process run effectively and efficiently, from planning through implementation and evaluation [1]. Based on the opinions above, learning is a systematic, planned effort by teachers to encourage students to be active and creative and to create a pleasant atmosphere, with the teacher acting as a contributor and facilitator, in order to achieve an effective and efficient learning process from planning through implementation and evaluation. Every school has means that serve as learning resources. According to the Indonesian Dictionary, a means is anything that can be used as a tool to achieve a purpose or goal, while infrastructure is everything, material or non-material, that provides the main support for carrying out a process. Facilities are anything that directly supports learning activities, for example learning media and school equipment, whereas infrastructure is anything that indirectly supports the success of the learning process, for example the road to school, school lighting, restrooms, and so on. Complete facilities and infrastructure help teachers conduct the learning process, so facilities and infrastructure are important components that can affect learning [5]. School facilities are instrumental in supporting the success of learning programs, and they are also very instrumental in improving the competence of teachers. Adequate facilities can increase teachers' knowledge and improve their skills, and teachers' experience grows by making use of a range of facilities, so that they can keep learning whenever they have time. Facilities directly related to the development of teacher competencies, and that also help the teaching and learning process in the classroom, include computer and science laboratories, libraries, the internet, and horseshoes. Such facilities are learning resources that are very important for the development of teacher competencies. Therefore, school leaders, systems, and school cultures should encourage teachers to use these learning resources as well and as effectively as possible to support the learning process and teacher competence. The provision of adequate learning resources aims to enable teachers to learn, to support learning in the classroom, and to improve their competence. The use of technology in education is very important where it relates to the tools used in teaching and learning, such as computers, language laboratories, and other projected media. Such a program can only run well when teachers are able to operate a computer and, of course, when computers are available at the school, at least one classroom of them. However, given the differing financial capabilities of schools and the limited capacity of the government, not all schools are able to provide computer laboratories. At present, the data show that around 70 percent of all vocational schools in Indonesia have computer laboratories; for senior high schools the figure is around 40 percent, for junior high schools 30 percent, and for elementary schools still under 10 percent [3]. The benefits of the computer are to store knowledge, to maintain it, and to transfer it [3]. The presence of computers has had a tremendous impact on the development of science, and the way individuals and companies work has improved several times over, in terms of both quality and quantity.
The presence of portable computers allows each individual to learn wherever and whenever they like: in the car, while waiting, at the mall, and so on. Because a person can search for and obtain the information they need in a matter of seconds through the computer, the computer allows teachers to guide students in using technology critically, so that learning becomes lifelong. Computers also allow teachers to work closely with students and other educators to achieve educational goals and standards. In the past, people wrote letters or books by hand on paper, then switched to manual and electric typewriters, before moving on to computers. Computers, in turn, have developed and been innovated upon very quickly, from the PC to the notebook/laptop to the netbook. Nowadays, there is almost certainly a computer in every house. Professionals, lawyers, doctors, businesspeople, professors, teachers, and even students own a notebook or netbook, keep it close, and take it with them wherever they go. Laptops and netbooks are mushrooming not only in campus and academic environments but have also penetrated large, medium, and small business areas, such as malls, cafes, and shopping places commonly visited by students and the public. This is possible because owners or managers of buildings and shops provide free internet connections for visitors: just carry a laptop, turn it on, and free internet facilities can be enjoyed (usually under certain conditions). Elementary, middle, and high schools in certain big cities in Indonesia are experiencing conditions similar to those described above. The function of the computer has grown sharply with the birth of internet technology: it is no longer limited to a tool for storing, maintaining, and transferring knowledge, but can become an interactive learning medium through distance learning, teleconferencing, and other means. The distance separating us from the world of knowledge becomes increasingly narrow with the ease of accessing all kinds of information (news, science, and knowledge) in the form of text or images through the internet. The Google search engine is worth mentioning here as one good example of a provider of the necessary data, whether on education, politics, economics, sports, health, culture, or entertainment. Teachers in the 21st century really should know how to operate computers, although the fact may be that many teachers still stutter when it comes to technology. Viewed from the perspective of their work, teachers will always be involved with the computer, for tasks such as preparing lesson plans, preparing the school-level curriculum, and writing scientific papers. Teachers should have a computer or laptop to support their teaching and learning activities and their productivity as educators, but it is clear that not all teachers can afford to buy one; there is a gap in ownership of and access to information and communication technology between developed and developing countries and between big cities and small cities [3]. Amid today's demands for teacher competence and professionalism, it seems that teachers really do need a computer or laptop, preferably one that can connect to the internet. Computers can be very beneficial for teacher performance, among other things by: a. Adding scientific insight. In addition to books, teachers can get information from computers and the internet. Simply by carrying a lightweight laptop, teachers can carry and read hundreds of book titles and thousands of pages in digital form. b.
Enabling teachers to interact with professionals outside their school environment. These moments can be used to share experiences, knowledge, and useful information to improve teacher quality. c. Facilitating teachers' work. Teachers can write and draw more easily with a computer than with ordinary writing and drawing tools, and probably faster as well, and it is easy to correct writing or drawings on the computer if errors occur. d. Making it easier for teachers to deliver instruction (messages or information) to students. Teachers convey information to students by speaking or writing on the board, but information can also be delivered through PowerPoint in the form of text, drawings, or tables; a PowerPoint presentation that is made properly and carefully will attract more students to learn and make it easier for them to absorb the meaning and message. e. Motivating teachers to be productive, or more productive, in their work. The presence of a computer allows teachers to write down their ideas anytime and anywhere. Likewise, when a manuscript (an article or a book) has to be sent to the publisher, the teacher only needs a few seconds or minutes using email. If most teachers still find it difficult to own a laptop, the computer laboratories in schools can be used by teachers as much as possible while they are at school. Teachers do not have to work with computers and the internet every day; they may only need them for things that are really important and useful and that improve teacher quality. Beyond this, books are now very abundant in school and private libraries, and these books require teachers who are diligent enough to read them.
Conclusion
Education and learning are activities that always exist in human life; in other words, they cannot be separated from human life. Without education and learning, humans cannot develop their potential to meet the needs of their lives. This is because all aspects of life, including humans themselves, are always changing. School facilities are instrumental in supporting the success of learning programs, and they are also very instrumental in improving teacher competence. Adequate facilities can increase teachers' knowledge and improve their skills, and teachers' experience grows by making use of a range of facilities, so that they can keep learning whenever they have time. The use of technology in learning activities is compulsory for education in the era of globalization, because it can facilitate the learning process; it is also a demand of the age, so that students are not left behind in the use of technology and so that existing human resources are strengthened and students are able to compete well when they enter the world of work.
White matter and nigral alterations in multiple system atrophy-parkinsonian type Multiple system atrophy (MSA) is classified into two main types: parkinsonian and cerebellar ataxia with oligodendrogliopathy. We examined microstructural alterations in the white matter and the substantia nigra pars compacta (SNc) of patients with MSA of parkinsonian type (MSA-P) using multishell diffusion magnetic resonance imaging (dMRI) and myelin sensitive imaging techniques. Age- and sex-matched patients with MSA-P (n = 21, n = 10 first and second cohorts, respectively), Parkinson’s disease patients (n = 19, 17), and healthy controls (n = 20, 24) were enrolled. Magnetization transfer saturation imaging (MT-sat) and dMRI were obtained using 3-T MRI. Measurements obtained from diffusion tensor imaging (DTI), free-water elimination DTI, neurite orientation dispersion and density imaging (NODDI), and MT-sat were compared between groups. Tract-based spatial statistics analysis revealed differences in diffuse white matter alterations in the free-water fractional volume, myelin volume fraction, and intracellular volume fraction between the patients with MSA-P and healthy controls, whereas free-water and MT-sat differences were limited to the middle cerebellar peduncle in comparison with those with Parkinson’s disease. Region-of-interest analysis of white matter and SNc revealed significant differences in the middle and inferior cerebellar peduncle, pontine crossing tract, corticospinal tract, and SNc between the MSA-P and healthy controls and/or Parkinson’s disease patients. Our results shed light on alterations to brain microstructure in MSA. INTRODUCTION Multiple system atrophy (MSA) is a progressive neurodegenerative disorder characterized by parkinsonism that responds poorly to levodopa, as well as autonomic failure and cerebellar ataxia 1 . Clinically, MSA is divided into parkinsonian (MSA-P) and cerebellar types. Pathologically, glial cytoplasmic inclusions (GCIs), which are associated with α-synuclein aggregation, appear in the oligodendrocytes of patients with MSA, predominantly in the pontine nucleus, olivary nucleus, cerebellum, substantia nigra, striatum, and white matter 2 . The accumulation of these GCIs causes chronic neuroinflammation 3 . Several imaging studies of MSA patients have reported the utility of diffusion tensor imaging (DTI) for discriminating MSA from Parkinson's disease (PD) [4][5][6] . In recent years, more advanced diffusion magnetic resonance imaging (MRI) methods, such as neurite orientation dispersion and density imaging (NODDI) 7 , have been developed to capture more detailed alterations in brain microstructure. NODDI allows the parameters of intracellular volume fraction (ICVF; indicating neurite [axon and dendrites] density based on intracellular diffusion), orientation dispersion index (ODI; indicating neurite dispersion), and isotropic volume fraction (ISOVF; indicating the volume fraction of isotropic diffusion, such as that which occurs in cerebrospinal fluid) to be obtained 7,8 . Furthermore, the fractional volume of free water (FW) reflects isotropic water diffusion in the interstitial extraneuronal space 9,10 , and increases in response to neuroinflammation, axonal injury, and demyelination 11 . 
FW-eliminated DTI (FWE-DTI), which involves the elimination of alterations from FW, can provide measures of FW-corrected fractional anisotropy (FA T ), FWcorrected mean diffusivity (MD T ), FW-corrected axial diffusivity (AD T ), and FW-corrected radial diffusivity (RD T ), which are more specific to tissue alterations and neurodegeneration than the corresponding measures obtained from conventional DTI [12][13][14][15] . Recently, FW studies in MSA patients have been reported [16][17][18] , but they focused on the basal ganglia and substantia nigra only. Furthermore, myelin-sensitive imaging techniques, such as magnetization transfer saturation (MT-sat) imaging, also exist and are suitable for the evaluation of white matter demyelination. These allow a myelin volume fraction (MVF) to be obtained, with low values indicating demyelination (Supplementary Table 1) 19 . In the present study, we hypothesized that advanced diffusion MRI and myelin-sensitive imaging would allow us to detect more specific pathologies-such as neuroinflammation, neurodegeneration, and demyelination-than conventional MRI in the white matter of patients with MSA-P, which may be useful for understanding the pathological mechanisms of MSA-P 3 . To evaluate this hypothesis, we used DTI, NODDI, FWE-DTI, and MT-sat imaging to investigate the white matter and substantia nigra of healthy controls (HCs), patients with PD, and patients with MSA-P. Both cohorts had similar demographics, but the MRI findings of a pontine or middle cerebellar peduncle sign were less pronounced in the second cohort. There were no significant differences in age or sex between any of the groups, or in the levodopa equivalent daily dose (LEDD) or the presence of rapid eye movement sleep behavior disorder (RBD) between MSA-P and PD patients. The MSA-P patients had a significantly longer (P < 0.05) disease duration, higher Hoehn & Yahr (HY) stage, and higher Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS) part 3 score (Japanese-translated version) than the PD patients. The Unified MSA Rating Scale (UMSARS) part 2 was only assessed in the MSA-P patients. T2-weighted imaging (WI) in patients with MSA-P showed the pontine cross sign, the vertical hyperintensity line in the pons, cerebellar atrophy, hyperintensity of the middle cerebellar peduncle (MCP), and the putaminal slit (Table 1). Voxel-wise tract-based spatial statistics (TBSS) analysis TBSS analyses were performed using the acquired MRI data, but the MVF was not calculated for the second cohort; these subjects did not undergo MT-sat because of a difference in the imaging protocol (see Methods). We compared DTI (FA, MD, AD, and RD), NODDI (ICVF, ODI, and ISOVF), FWE-DTI (FW, FA T , MD T , AD T , and RD T ), and MT-sat (MVF) indices between HCs, PD patients, and MSA-P patients (Fig. 1, first cohort). Details of the anatomical regions, peak t values, and peak Montreal Neurological Institute (MNI) coordinates of the significant clusters are shown in Supplementary Tables 2 and 3 (first and second cohorts, respectively). In the first cohort, DTI analyses revealed diffuse differences between the groups in the cerebral and cerebellar white matter and the brainstem (Fig. 1a, Supplementary Table 2). Specifically, the MSA-P patients had significantly (family-wise error-corrected P value < 0.05) lower FA and higher MD, AD, and RD compared with the HCs, while the PD patients had significantly lower FA and higher MD and RD compared with the HCs. 
Furthermore, the MSA-P patients had significantly higher MD, AD, and RD than the PD patients. The PD patients showed no significant differences in FA and AD compared with either the HCs or MSA-P patients. In the second cohort, very similar results (i.e., higher values) were obtained for MD, AD, and RD in the MSA-P patients compared with the HCs and PD patients (Supplementary Table 3). In the first cohort, differences in the NODDI indices of ICVF or ODI in the MSA-P patients (Fig. 1b, Supplementary Table 2) were largely limited to the brainstem, in contrast to the diffuse differences that were obtained with DTI. In diffuse white matter regions, there was significantly lower ICVF and higher ODI and ISOVF in the MSA-P patients compared with the HCs, and significantly lower ICVF in the PD patients compared with the HCs. Furthermore, the MSA-P patients had significantly lower ICVF in the MCP, significantly higher ODI in the external capsule (EC), and diffuse regions of significantly higher ISOVF compared with the PD patients. In the second cohort, very similar results were obtained, with the MSA-P patients showing diffuse regions of lower ICVF and higher ISOVF compared with the HCs, as well as significantly higher ODI in the EC (Supplementary Table 3). In the first cohort, FWE-DTI analyses of MSA-P patients revealed significant differences in the brainstem, including in the MCP (Fig. 1c). Specifically, there were diffuse regions of significantly higher FW, a narrow area of significantly lower AD T , and a narrow area of significantly higher FA T , MD T , and RD T in the MSA-P patients compared with the HCs, as well as significantly lower FA T and higher FW and RD T in the PD patients compared with the HCs. Furthermore, the MSA-P patients had significantly higher FW and FA T compared with the PD patients. Very similar results for FW, MD T , and RD T were obtained in the second cohort when comparing the MSA-P patients with HCs, as well as for FW in the PD patients compared with HCs (Supplementary Table 3). MT-sat analyses, which were only performed on the first cohort, revealed lower MVF in the MSA-P patients compared with the HCs and PD patients (Fig. 1d). There were no differences between the HCs and PD patients. Region-of-interest (ROI) analysis ROI analysis was performed based on the regions in a white matter atlas; 30 ROIs were analyzed to comprehensively investigate MSA-P-specific white matter changes (see Methods for more details). In the first cohort, the white matter microstructures of MSA-P patients were significantly different to those of HCs and PD Table 4). The substantia nigra pars compacta (SNc) was automatically applied to the automated anatomical labelling atlas 3 (AAL3) 20 , and the anterior (aSN) and posterior (pSN) regions were analyzed separately (Fig. 3). In the first cohort, both the aSN and pSN had significantly higher ICVF and FA T in the MSA-P and PD patients compared with the HCs, and significantly lower RD T . The FW was significantly higher in the aSN of PD patients compared with HCs, whereas in the pSN it was significantly higher in both MSA-P and PD patients compared with HCs ( Fig. 4, Supplementary Table 5). Similar results were observed in the second cohort, especially for FA T and RD T (Fig. 4, Supplementary Table 4). 
[Figure 1. TBSS results (first cohort): voxel clusters in which MSA-P patients differed significantly (P < 0.05, FWE-corrected) from healthy controls and from Parkinson's disease patients across the DTI, NODDI, FWE-DTI, and MT-sat indices; blue/light blue and red-yellow voxels mark lower and higher values, respectively. The white matter skeleton is shown in green, and the results are thickened with the fill script implemented in the FMRIB Software Library to aid visualization. The TBSS results were very similar in the second cohort (see Supplementary Tables 2 and 3).]
Correlation analysis
Spearman's rank correlation analysis was used to investigate the correlations between the MSA-P-specific regions that differed significantly in the ROI analysis and the clinical features (Tables 2 and 3, first and second cohorts, respectively). In the first cohort white matter analyses, FW had a significant (FDR-corrected P < 0.05) positive correlation with HY stage in the ICP, whereas MDS-UPDRS part 3 scores were negatively correlated with MVF in the PCT and MCP (Table 2; vs. clinical features). For the MCP and ICP, which showed MSA-P-specific changes, there were correlations between all parameters. In the MCP, there were significant (FDR-corrected P < 0.05) positive correlations between FW and RD T and between MVF and ICVF, whereas there were negative correlations between FW and ICVF, RD T and ICVF, MVF and FW, and MVF and RD T . In the ICP, there was a positive correlation between MVF and ICVF, and there were negative correlations between FW and ICVF and between MVF and FW (Table 2; vs. other indices). In the second cohort, there were significant (FDR-corrected P < 0.05) negative correlations between disease duration and RD T of the MCP, and between HY stage and ICVF of the MCP. There were also significant positive correlations between HY stage and RD T of the MCP, and between MDS-UPDRS part 3 scores and RD T of the MCP (Table 3; vs. clinical features). Comparisons between the other indices, ICVF, and RD T revealed significant (FDR-corrected P < 0.05) negative correlations in the MCP (Table 3; vs. other indices). A sketch of this correlation procedure is given below.
[Figure 2. White matter ROI results (boxplots). In the second cohort, very similar results to the first cohort were observed in the four regions of MCP, ICP, PCT, and CST; the differences from the first cohort were the absence of a significant difference in FW in the ICP (although a non-significant trend remained), significantly higher FW in the PCT, and significantly lower ICVF in the CST of MSA-P patients compared with HCs and PD patients. Boxplots show the median, quartiles, and range.]
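The correlation procedure above can be sketched as follows: Spearman's rank correlation is computed for every imaging-index/clinical-feature pair, and the resulting P values are corrected for multiple comparisons with a false discovery rate procedure. This is an illustrative reconstruction (using the Benjamini-Hochberg method available in statsmodels), not the authors' actual analysis code; the variable names are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def spearman_with_fdr(imaging, clinical, alpha=0.05):
    """imaging, clinical: dicts mapping index/feature names to 1-D arrays
    over the same subjects (e.g., {'FW_MCP': ..., 'RDt_MCP': ...})."""
    labels, rhos, pvals = [], [], []
    for iname, ivals in imaging.items():
        for cname, cvals in clinical.items():
            rho, p = stats.spearmanr(ivals, cvals)
            labels.append((iname, cname))
            rhos.append(rho)
            pvals.append(p)
    # Benjamini-Hochberg FDR correction across all tested pairs
    reject, p_fdr, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [
        {"pair": lab, "rho": rho, "p_fdr": q, "significant": bool(sig)}
        for lab, rho, q, sig in zip(labels, rhos, p_fdr, reject)
    ]
```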
In the first cohort, the SNc had significant (FDR-corrected P < 0.05) positive correlations between the MDS-UPDRS part 3 scores and FA T in the aSN, and HY stage and MDS-UPDRS part 3 scores were positively correlated with FA T in the pSN. Moreover, there were significant negative correlations between MDS-UPDRS part 3 scores and HY stage and RD T in the aSN, and HY stage and RD T in the pSN (Table 2; vs. clinical features). In both the aSN and pSN, there were significant (FDR-corrected P < 0.05) positive correlations between the ICVF and FA T , and significant negative correlations between ICVF and RD T , and FA T and RD T ( Table 2; vs. other indices). In the second cohort, there were no clear correlations between any SNc parameters and clinical features (Table 3; vs. clinical features). However, the correlation results between SNc parameters and other indices were consistent with those in the first cohort (Table 3; vs. other indices). Nominal logistic regression analysis and receiver operating characteristics (ROC) curves Stepwise forward logistic regression analysis revealed that, from the 30 ROIs in ICVF, FW, and MVF in which multiple changes were noted in the white matter (Supplementary Table 5), four tracts might be useful for differentiating MSA-P from PD. ROC analysis was performed to confirm the diagnostic benefits for MSA-P. In the first cohort, the area under the curve (AUC) for ICVF was 0.935 (specificity 94.7%, sensitivity 81.0%) when using the four regions of the MCP, CST, superior longitudinal fasciculus (SLF), and inferior fronto-occipital fasciculus (IFOF). For FW, the AUC was 0.965 (specificity 94.7%, sensitivity 95.2%) when using the MCP, ICP, SLF, and inferior longitudinal fasciculus (ILF). For MVF, the AUC was 1.000 (specificity 100%, sensitivity 100%) when using the MCP, EC, SLF, and uncinate fasciculus (UF) (Fig. 5a, Supplementary Table 5). In the second cohort, we examined whether the tracts selected in the first cohort were able to continue to differentiate MSA-P from PD. Both the ICVF and FW results had AUCs over 0.9 using the same regions as used for the first cohort (Fig. 5b, Supplementary Table 5). DISCUSSION In the present study, we used advanced diffusion MRI (NODDI, FWE-DTI) and myelin-sensitive imaging (MT-sat) to examine the white matter and SNc microstructure in HCs, PD patients, and MSA-P patients. These advanced diffusion MRI and myelin-sensitive imaging techniques were able to capture alterations specific to MSA-P patients, which involve white matter degeneration and myelin changes related to oligodendrogliopathy. MRI has been used to identify the characteristic pathological changes of MSA in the pons, cerebellum, striatum, and substantia nigra. MSA can be diagnosed from conventional MRI sequences if characteristic findings are depicted, such as the pontine cross, hyperintense MCP, and putaminal slit sign 1 . However, not all MSA-P patients have these specific MRI findings (Table 1); therefore, various studies using MRI, including diffusion MRI, are required to improve the diagnosis of MSA-P at earlier stages and to identify other characteristic changes. Our TBSS analysis of DTI identified diffuse white matter alterations (Fig. 1a). A decrease in FA and an increase in MD suggests neurodegeneration 21 , whereas an increase in RD suggests demyelination 22 . 
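As an illustration of the discrimination analysis reported in the Results above, the sketch below fits a logistic regression on the ROI values of the selected tracts and derives the ROC curve, AUC, and a sensitivity/specificity operating point with scikit-learn. The stepwise forward selection of tracts is not reproduced here, and `X` and `y` are hypothetical arrays; this is a schematic reconstruction rather than the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

def roc_for_selected_rois(X, y):
    """X: (n_patients, n_rois) array of ROI means (e.g., MVF in MCP/EC/SLF/UF);
    y: 1 for MSA-P, 0 for PD."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]        # in-sample scores, as in a small cohort
    auc = roc_auc_score(y, scores)
    fpr, tpr, _ = roc_curve(y, scores)
    j = int(np.argmax(tpr - fpr))                # Youden-index operating point
    return auc, float(tpr[j]), float(1.0 - fpr[j])  # AUC, sensitivity, specificity
```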
An increased MD and RD compared with HCs 23 , and a decreased FA and increased MD compared with PD, especially in the MCP 24,25 , have previously been reported in MSA-P patients, and we found similar results in the present study. However, in the current study, patients with MSA-P had no significant differences in FA compared with PD patients. This is probably because FA overestimates neurodegeneration in areas that are rich in crossing fibers and FW 14,26,27 . However, we revealed that FA T differed between MSA-P and PD patients in a small area. Thus, as previously reported 14 , to elucidate the white matter microstructural alterations that occur in MSA, FW signals should be eliminated from the usual DTI [23][24][25]27 . We analyzed white matter alterations in MSA-P patients using multishell bi-tensor NODDI and FWE-DTI. TBSS and ROI analysis of the white matter showed MSA-P-specific alterations, with decreased ICVF and increased FW in the MCP, ICP, and CST (Figs. 1d, 2). Decreased ICVF has been reported to reflect a sparsity of neurites in the white matter 7,8 , and increased FW can be caused by an abnormal extracellular FW space, which is associated with neuroinflammation, axon degeneration, and demyelination 11 . Considering the pathology of MSA-P as an oligodendrogliopathy caused by α-synuclein aggregation, the accumulation of GCIs might induce neuroinflammation 28 , leading to demyelination and a loss of neurites. Thus, the decreased ICVF and increased FW captured by diffusion MRI in the present study might reflect the pathological condition of MSA.
[Figure 3. The anterior and posterior ROIs of the substantia nigra pars compacta. The SNc ROI was created automatically using the automated anatomical labelling atlas 3 (AAL3); the anterior substantia nigra (aSN, blue) and the posterior substantia nigra (pSN, red) were divided manually at the middle of each section of the SNc atlas (see Methods).]
A previous report revealed that [11C](R)-PK11195 positron emission tomography (PET) might be useful for reflecting neuroinflammation caused by microglial activation in MSA 29 . In the future, comparisons between FW imaging and [11C](R)-PK11195 PET might help to clarify the mechanisms of in vivo neuroinflammation in MSA. Previous studies have reported increased FW in the substantia nigra, striatum, globus pallidus, red nucleus, thalamus, pedunculopontine nucleus, MCP, superior cerebellar peduncle, vermis, cerebellar lobule, and corpus callosum of MSA patients, as well as increased FA T in the striatum 16,17 . Mitchell et al. also reported NODDI and FW alterations in the basal ganglia, thalamus, cerebellum, and brainstem of MSA patients compared with HCs, PD patients, MSA-P patients, and progressive supranuclear palsy patients 18 . The study design of Mitchell et al. was similar to ours, although they used single-shell diffusion-weighted imaging (DWI) data, whereas we used multishell DWI. Multishell DWI might provide more detailed information on brain microstructures and alterations in extracellular FW 30 . In addition, no FW studies have extensively focused on white matter microstructure in MSA [31][32][33][34] . Although MT-sat (a myelin-sensitive technique) has been demonstrated to have good sensitivity for the detection of demyelinating white matter lesions 35 , only a few studies have used myelin-sensitive imaging to examine the white matter of MSA patients.
In one study, differences in the magnetization transfer ratio were reported in the precentral gyrus only 36 , whereas another study reported no significant white matter alterations 37 . However, MT-sat improves on the magnetization transfer ratio and can better delineate demyelinating lesions, consistent with clinical symptoms 38 . In the present study, specific changes in MVF were observed in MSA-P patients compared with HCs or PD patients (Figs. 1d, 2), whereas there were no differences between the HCs and PD patients. Thus, MVF was able to clearly distinguish between the PD and MSA patients. Consistent with pathological findings, the most prominent alterations in MVF occurred in the brainstem and cerebellum, and were indicative of severe demyelination. Similar to MVF, RD, which can also be used to detect demyelination 22 , showed widespread changes in the white matter, cerebellum, and brainstem in the present study. However, widespread RD changes were also observed in PD patients, who had no pathological evidence of demyelination. We therefore speculate that RD might be affected by regions with many crossing fibers, as well as by disease-induced white matter microstructural alterations 39 . Thus, MVF may be more sensitive than RD for the detection of demyelination in MSA patients.
[Figure 4. Results of the nigral ROI analysis (boxplots). In the first cohort, both the aSN and pSN showed significantly higher ICVF and FA T and significantly lower RD T in the MSA-P and PD patients compared with the HCs; FW was significantly higher in the aSN of PD patients and in the pSN of both MSA-P and PD patients compared with HCs (Supplementary Table 4). Similar results, especially for FA T and RD T , were observed in the second cohort. Boxplots show the median, quartiles, and range.]
The characteristic conventional MRI findings of MSA-P (shown in Table 1) provide a very good diagnostic basis. Because the present research focused on identifying microstructural alterations in MSA-P patients, we selected cases with sufficient diagnostic evidence of MSA-P, and also examined diagnostic accuracy. Several parameters, including ICVF from NODDI, FW from FWE-DTI, and MVF from MT-sat, were useful for discriminating between MSA-P and PD (Fig. 5). In particular, MVF in the MCP, EC, SLF, and UF reached 100% sensitivity and specificity for this discrimination, while ICVF in the MCP, CST, SLF, and IFOF reached a specificity of about 95%, and FW in the MCP, ICP, SLF, and ILF reached a sensitivity and specificity of about 95%. The sensitivity and specificity of ICVF and FW were confirmed in the second cohort, with the AUCs exceeding 0.9 (Fig. 5a, b). These results suggest that capturing white matter abnormalities may be important for understanding the pathophysiology of MSA-P. ROI analysis revealed alterations of ICVF, MVF, FW, and RD T in the MCP, ICP, PCT, and CST of MSA-P patients compared with HCs and PD patients (Fig. 2). Correlation analyses performed on the two cohorts indicated that MDS-UPDRS part 3 scores and HY stage were associated with MVF and RD T of the MCP (Tables 2 and 3).
These data indicate that there are associations between the state of the white matter microstructure and disease progression, as reflected in the deterioration of motor symptoms. These effects on motor function may be associated with cerebellar dysfunction; in general, cerebellar ataxia is closely correlated with impaired motor function 40 . The parameters of ICVF, FW, RD T , and MVF in the MCP, and the measures of ICVF, FW, and MVF in the ICP, were correlated with one another (Table 2). In the second cohort, although there were fewer participants than in the first cohort, a negative correlation (r s = −0.88) between RD T and ICVF was also confirmed (Table 3). Increased RD T and decreased ICVF may reflect demyelination and a decreased density of neurites in the white matter 41 . Considering the MT-sat findings, alterations in these parameters may reflect oligodendrogliopathy in MSA. On the basis of the ROI analysis of the first cohort, MSA-P-specific changes in FW, MVF, and ICVF can be divided into three categories: elevated FW and decreased MVF and ICVF, as found in the MCP and ICP; decreased MVF only, as found in the PCT; and elevated FW only, as found in the CST. These findings suggest that the MCP and ICP may be more vulnerable than other regions to white matter degeneration caused by neurite loss and demyelination. Although the MCP sign on conventional MRI was observed in only 4.8% of patients in the present study (Table 1), the microstructure was highly impaired. PCT degeneration can be seen as a pontine cross sign on conventional MRI; MVF revealed that demyelination might occur in the PCT, although the pontine cross sign was identified in only 9.5% of patients in the present study. In the CST, FW was significantly higher and MVF was significantly lower compared with the HCs, but MVF was not significantly different from that of PD patients. These changes in the PCT and CST differed between the two cohorts, but the tendency of the mean values was the same (Fig. 2, Supplementary Table 4), and the difference in tract impairment was considered to have affected the results. Considering the MCP/ICP changes and the pathology of MSA, it is possible that changes in FW and ICVF may also be observed in the PCT and CST as the disease progresses. The different distributions of ICVF, FW, and MVF in regions where pathological changes have been reported in MSA indicate that a combination of these parameters might help us to understand which neural structures are impaired. In the ROI analysis of the SNc, both cohorts consistently showed increased FA T (in both the anterior and posterior parts) and decreased RD T (in the posterior part only) in patients with PD and MSA-P compared with HCs. These findings might indicate underlying nigral tissue changes, which we discuss below in the context of previous reports. Neurodegenerative diseases such as PD or MSA have generally been associated with decreased FA in the substantia nigra [42][43][44] . However, these findings have not always been consistent. No differences between groups, or increased FA, have been reported in the substantia nigra of patients with PD compared with HCs 45,46 . The conflicting findings may be partly caused by discrepancies in sample characteristics (e.g., disease duration or symptom severity) and acquisition schemes. Moreover, the correct identification of each ROI is critical to study results. Previous studies in PD and MSA-P patients have used different ROI settings, such as manual or automated identification of the whole substantia nigra (including the substantia nigra pars reticulata) or the SNc 47 .
Considering that dopaminergic neuronal loss in the SNc is considered a common pathological feature of PD and MSA-P 2 , measurement of the whole substantia nigra might have reduced the diagnostic accuracy. Importantly, DTI-derived measures lack specificity. For example, decreased FA may be attributed to neuronal degeneration or neuroinflammation 48 . In the current study, we evaluated the SNc in two cohorts (with different diffusion MRI acquisition parameters) of patients with PD and MSA-P using the AAL3 atlas (Fig. 3) 20 . A fully automatic method was applied to ensure the accuracy of SNc segmentation in each subject. Furthermore, we performed FWE-DTI to eliminated the influence of extracellular FW from DTI-derived measures, allowing more specific measurements of tissue pathologies. Both the cohorts of PD and MSA-P patients showed consistent results (e.g., increased FA T or decreased RD T ) in the SNc compared with HCs. Therefore, we can reasonably claim the robustness of our results. Increases in iron concentrations in the SNc, as occurs in many neurodegenerative diseases, may lead to dopaminergic cell death 49 . It is well recognized that nigral iron accumulation contributes to changes in DTI-derived measures, including FA elevation. A longitudinal study over 12 years (with 6-year intervals) in the deep gray nuclei of PD patients demonstrated increased FA at the 6-year follow-up and decreased FA at the 12-year follow-up, which suggests iron deposition and neuronal loss, respectively 50 . Furthermore, significantly higher FA T was observed in the pSN of patients with MSA-P (in the first cohort) compared with PD patients in the present study. Pathological studies have demonstrated that iron levels in the substantia nigra are higher in patients with MSA-P than in those with PD [51][52][53] , suggesting that patients with MSA-P have more severe alterations in the SNc compared with PD patients. Indeed, significantly higher HY stages and MDS-UPDRS part 3 scores were observed in MSA-P patients compared with PD patients (Table 1). In addition, decreased RD T was consistently demonstrated in the pSN in both the cohorts of PD and MSA-P patients compared with HCs, and RD has been reported to decrease as local iron concentrations increase 54 . Different pathological processes, such as gliosis, where the infiltration of gliotic cells increases tissue anisotropy, may also be responsible for increased FA in the SNc 55 . Furthermore, a functional connectivity or pathological study has demonstrated widespread connectivity between the SNc and other brain regions such as the substantia nigra pars reticulata, subthalamic nucleus, limbic system, and frontal cortex 56 . Hence, the SNc is thought to contain a large number of different fibers 57 . Although SNc dopaminergic nerves are degenerated in PD and MSA-P, other nondopaminergic fibers are relatively conserved; therefore, a selective loss of specific fiber directions in the crossing-fiber areas may also increase FA 58 . Future studies, including neuromelanin-sensitive imaging 59 and susceptibility-weighted imaging 60 , are warranted to set appropriate ROIs in the SNc and explore iron accumulation with higher specificity. In the present study, significantly higher ICVF was also demonstrated in the aSN and pSN in patients with MSA-P compared with HCs (in the first cohort only). Similar to FA T and RD T , the changes in ICVF might also be attributed to iron accumulation and/or gliosis. 
This concept is reinforced by the finding that ICVF was significantly correlated with FA T and RD T (Tables 2 and 3). However, further histopathological studies are needed to clarify the underlying mechanisms of increased ICVF. Considering the neurodegeneration of SNc that occurs in synucleinopathies, our results might reflect the precise histological state of the SNc in vivo 2,61 . Moreover, we revealed that HY stage was significantly correlated with RD T of the aSN, and with FA T and RD T of the pSN, and that MDS-UPDRS part 3 scores were significantly correlated with FA T of the aSN and pSN. These results suggest that alterations in FA T and RD T in the substantia nigra might be a useful biomarker for evaluating motor dysfunction in synucleinopathies. The present study had several limitations. First, the diagnoses of PD and MSA-P were carefully determined by neurologists and radiologists based on diagnostic criteria and neuroimaging findings, but were not pathologically confirmed. Second, this was a crosssectional study, and a longitudinal study is needed to elucidate the pathogenesis of MSA and evaluate disease progression. Third, it should be noted that the presence of FW producing neuroinflammation has not been fully validated by pathological or PET studies. Finally, although the MRI data were acquired using a sufficiently uniform procedure, the effects of head angles at the time of imaging need to be taken into consideration. However, in the current study, there were no differences between the groups in head angle (as calculated by the method shown in Supplementary Figure. 1). In conclusion, the present study used two different MRI protocols and cohorts and revealed that multi-shell bi-tensor NODDI, FWE-DTI, and MT-sat can detect white matter and SNc microstructural alterations in MSA-P patients compared with PD patients and HCs. Our data indicate that a combination of these imaging parameters has the potential to identify differences in the degeneration of specific regions. From the results of this study, white matter alterations of ICVF, FW, and MVF, and SNc alterations of ICVF, FA T , and RD T may be useful for evaluating neurodegeneration. Our results may help to understand the alterations in brain microstructure that occur in MSA-P. Further studies combined with histological analysis to elucidate the pathology of neurodegeneration in MSA are needed. METHODS Participants This study was conducted in compliance with the Declaration of Helsinki (1964, latest update in 2013) and received ethical approval from Juntendo University (14-011). Informed written consent was obtained from all participants. Clinical data were carefully evaluated by three movement disorder specialists (T.O., T.H., and H.T.A.), and cases with apparent cognitive impairment based on the Mini-Mental State Examination 62 or minimal vascular lesions were excluded from the study. Patients who met the criteria for the diagnosis of probable MSA-P 1 , age-and sex-matched patients with clinically established PD 63 who met the Movement Disorder Society Clinical Diagnosis Criteria for Parkinson's Disease, and age-and sexmatched HCs were enrolled in this study. All MSA-P patients were followed for more than 3 years and the presence of other neurodegenerative diseases was ruled out. The analyses performed on the first cohort were followed by a second set of analyses on a second cohort to corroborate the findings. 
The first cohort was made up of 21 patients with MSA-P (62.5 ± 11.7 years; eight men), 19 patients with PD (63.1 ± 8.1 years; seven men), and 20 HCs (62.8 ± 4.7 years; eight men). The second cohort was made up of 10 patients with MSA-P (65.3 ± 9.5 years; seven men), 17 patients with PD (63.2 ± 10.2 years; 10 men), and 24 HCs (65.8 ± 6.5 years; 13 men). In the first cohort, data were collected between January 2017 and December 2018, and in the second cohort, data were collected between January 2019 and November 2020. Clinical information from the MSA-P and PD patients was examined, including disease duration, HY stage 64 , the Japanese translation version of MDS-UPDRS part 3 65 , the UMSARS part 2 (only for MSA-P patients) 66 , the LEDD (calculated from the conversion rate of anti-parkinsonian medications) 67 , and a single-question screen for RBD 68 Acquisition of magnetic resonance imaging data All study participants were scanned on a 3T-MRI scanner (MAGNETOM Prisma, Siemens Healthcare, Erlangen, Germany) using a 64-channel head coil. Multi-shell DWI was performed using a spin-echo echo-planar imaging sequence, which included two b values of 1000 and 2000 s/mm 2 in the first cohort, and two b values of 700 and 2000 s/mm 2 in the second cohort. The DWI data were obtained using an anterior-posterior phaseencoding direction along 64 isotropic diffusion gradients for each shell. Acquisition of each DWI dataset was completed with a b = 0 image with no diffusion gradients. To correct for magnetic susceptibility induced distortions related to the echo-planar imaging acquisitions, standard and reverse phase-encoded blipped images with no diffusion weighting (blip up and blip down) were obtained 27,69 . The sequence parameters used for the first cohort were TR = 3300 ms, TE = 70 ms, field of view = 229 × 229 mm, matrix size = 130 × 130, resolution = 1.8 × 1.8 mm, slice thickness = 1.6 mm, and acquisition time = 07.29 min. The sequence parameters used for the second cohort were TR = 3600 ms, TE = 79 ms, Fig. 5 ROC curve for distinguishing MSA-P from PD. a In the first cohort, the AUC for ICVF was 0.935 (specificity 94.7%, sensitivity 81.0%) using the four regions of MCP, CST, SLF, and ILOF. The AUC for FW was 0.965 (specificity 94.7%, sensitivity 95.2%) using the MCP, ICP, SLF, and ILF. The AUC for MVF was 1.000 (Specificity 100%, Sensitivity 100%) when using MCP, EC, SLF, and UF. b Similar AUCs for ICVF and FW were also found for the second cohort (Supplementary Table 5). CST corticospinal tract, EC external capsule, FW free water, ICP inferior cerebellar peduncle, ICVF intracellular volume fraction, ILF inferior longitudinal fasciculus, ILOF inferior fronto-occipital fasciculus, MCP middle cerebellar peduncle, MVF myelin volume fraction, SLF superior longitudinal fasciculus. To calculate the MT-sat index (the sequence for which was only acquired in the first cohort), predominant T1-WI, proton density-WI, and MT-WI were acquired using three-dimensional multi-echo fast low-angle shot sequences. 
The settings for the MT-sat sequences were as follows: for MT-off and MT-on scanning, TR = 24 ms, TE = 2.53 ms, flip angle = 5°; for T1-WI, TR = 10 ms, TE = 2.53 ms, flip angle = 13°, with parallel imaging using GeneRalized Autocalibrating Partially Parallel Acquisitions with a factor of 2 in the phase-encoding direction, 7/8 partial Fourier acquisition in the partition direction, bandwidth = 260 Hz/pixel, field of view = 224 × 224 mm, matrix = 128 × 128, slice thickness = 1.8 mm, and acquisition time = 6 min 25 s. MRI pre-processing Head angle was calculated to rule out the effects of head position on the acquired data. The actual head angle was measured by correcting the oblique angle at imaging from the DICOM header information and calculating the angle on the image from the rotation matrix when standardizing the T1-WI images (Supplementary Figure 1). The calculated data confirmed that there were no significant differences (P > 0.05) between the disease groups. The EDDY and TOPUP toolboxes were used to correct susceptibilityinduced geometric distortions, eddy current distortions, and inter-volume subject motion in the DWI datasets 27 . All DWI datasets were then visually checked in the axial, sagittal, and coronal views, and were confirmed to be free from severe artifacts, such as gross geometric distortion, signal dropout, and bulk motion. Multishell DWI data were used to generate NODDI and FWE-DTI maps 70 . The NODDI model 7 was applied to the MRI results using the NODDI Matlab Toolbox5 (http://www.nitrc.org/projects/ noddi_toolbox). The ICVF, ODI, and ISOVF maps were calculated using AMICO (Accelerated Microstructure Imaging via Convex Optimization). FWE-DTI data were processed using a regularized bi-tensor model in MATLAB (MathWorks, Natick, MA, USA) 14 , and FA T , MD T , RD T , AD T , and FW maps were calculated. ISOVF and FW, which can be obtained from NODDI and FWE-DTI, respectively, are both indicators of extracellular FW content in the brain. The DTI measures were obtained using an ordinary least square method applied to the DWI with b = 0 and 1000 s/mm 2 . FA, MD, AD, and RD maps were calculated using the DTIFIT tool implemented in FSL (FMRIB Software Library 5.0.9; Oxford Centre for Functional MRI of the Brain, UK; www.fmrib. ox.ac.uk/fsl), which is based on standard formulae 70 . MT-sat was calculated using an in-house MATLAB script based on previously described theory 71 . MVF maps (calculated for the first cohort only) were obtained using an MTsat correction factor of 0.1. Tract-based spatial statistics analysis We evaluated participants' white matter alterations using a TBSS skeleton projection step 72 . First, nonlinear registration of FA images of all subjects was mapped onto MNI (152) space and interpolation to a 1 × 1 × 1 mm resolution was performed using the FMRIB nonlinear registration tool. Second, the transformed FA images were averaged to create a mean FA image. Third, the mean FA image was thinned to create a mean FA skeleton, which represented the centers of all tracts common to the groups. The threshold of the mean FA skeleton was set to >0.20 to include the major white matter pathways but exclude peripheral tracts and gray matter. The aligned FA map of each subject was then projected onto the skeleton. Finally, the DTI (MD, AD, and RD), NODDI (ICVF, ODI, and ISOVF), FWE-DTI (FA T , MD T , RD T , AD T , and FW), and MT-sat (MVF) maps were projected onto the mean FA skeleton after being registered to MNI space using the warping fields of each subject. 
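The DTI scalar maps named above (FA, MD, AD, RD) follow from the eigenvalues of the fitted diffusion tensor by standard formulae. As a minimal illustration of those formulae (this is not the authors' pipeline, which used DTIFIT in FSL and a MATLAB bi-tensor implementation; the array names and shapes here are assumptions), a single-voxel ordinary least squares tensor fit and the derived scalars might look as follows:

```python
import numpy as np

def fit_dti_ols(signals, b0, bvals, bvecs):
    """Ordinary least squares single-tensor fit for one voxel.

    signals : (N,) diffusion-weighted intensities
    b0      : non-weighted (b = 0) intensity
    bvals   : (N,) b-values in s/mm^2
    bvecs   : (N, 3) unit gradient directions
    Returns the symmetric 3x3 diffusion tensor D.
    """
    gx, gy, gz = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
    # Design matrix for the six unique tensor elements
    B = -bvals[:, None] * np.column_stack(
        [gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz])
    y = np.log(signals / b0)                    # log-signal attenuation
    d, *_ = np.linalg.lstsq(B, y, rcond=None)   # OLS solution
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = d
    return np.array([[Dxx, Dxy, Dxz],
                     [Dxy, Dyy, Dyz],
                     [Dxz, Dyz, Dzz]])

def dti_scalars(D):
    """FA, MD, AD and RD from the tensor eigenvalues (standard formulae)."""
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]  # lambda_1 >= lambda_2 >= lambda_3
    md = lam.mean()
    ad = lam[0]                                 # axial diffusivity = largest eigenvalue
    rd = (lam[1] + lam[2]) / 2                  # radial diffusivity = mean of the two smallest
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md, ad, rd
```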
The SNc ROI was automatically created using the AAL3 20 . The ROIs of the aSN and pSN were manually divided in the middle of each section of the SNc (Fig. 3). In the SNc ROI analysis, the signal-to-noise ratio can decrease because of the surrounding iron-rich region, resulting in fitting errors. To rule out the possibility of this artifact, voxels with an ICVF of 0.99 or higher were excluded from the analysis. We also considered the possibility that the AAL3, automatically adapted to standard space, might overestimate the surrounding structures, such as the white matter or cerebrospinal fluid. A square ROI that fit in the SNc was therefore created manually using ITK-SNAP 74 with reference to AAL3, and the same analysis was performed (Supplementary Method 1). Statistical analysis All analyses were performed using statistical software (JMP v14; SAS Inc., Cary, NC, USA; or the FSL package for the general linear model analysis). Significance was defined as P < 0.05, corrected for multiple comparisons. The background demographics of participants were analyzed using Wilcoxon analysis or Kruskal−Wallis analysis for continuous variables, such as age, disease duration, HY stage, MDS-UPDRS part 3 scores, and LEDD, and the Pearson's chi-squared test was used for nominal variables. Thirty white matter ROIs were selected based on the JHU and ICBM atlases, and these were used to compare the overall white matter differences between MSA-P patients, HCs, and PD patients. In the 30 ROI analyses, the averaged values of the left and right part of each region were used. For each ROI in the white matter and SNc, Kruskal−Wallis analysis was performed for group comparisons (e.g., HCs vs. MSA-P patients vs. PD patients), and the results were corrected for multiple tests using the Benjamini-Hochberg FDR method 75 . Post-hoc analyses (HCs vs. MSA-P patients, HCs vs. PD patients, and MSA-P patients vs. PD patients) of NODDI, FWE-DTI, and MT-sat measures were calculated using Steel-Dwass analysis. Spearman's rank correlation was used to examine correlations between MRI parameters within regions that showed significant differences in the ROI analyses of the white matter and SNc of MSA-P patients, and patient characteristics including disease duration, HY stage, MDS-UPDRS part 3 scores, UMSARS scores, and LEDD. Correlations between the parameters of the different MRI models were also evaluated using Spearman's rank correlation for regions that showed MSA-Pspecific differences. We used stepwise logistic regression analysis to select the four tracts that were needed to distinguish MSA-P patients from PD patients from the 30 white matter ROIs. The forward method was used, and the inclusion P values were set to 0.05. Furthermore, a ROC analysis was performed using the obtained four tracts, and the AUC, specificity, and sensitivity were calculated. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. DATA AVAILABILITY The data supporting the findings of this study are available from the corresponding author upon reasonable request.
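As a rough illustration of the final classification step (the stepwise selection and ROC analysis reported above were run in JMP; the scikit-learn sketch below is only an assumed analogue, not the authors' code), the AUC together with sensitivity and specificity for a set of pre-selected ROI values could be computed as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

def roc_for_regions(X, y):
    """Fit a logistic model on selected ROI values and summarize the ROC.

    X : (n_subjects, n_regions) mean ROI values, e.g. ICVF in MCP, CST, SLF, ILOF
    y : (n_subjects,) labels, 1 = MSA-P, 0 = PD
    """
    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    auc = roc_auc_score(y, scores)
    fpr, tpr, _ = roc_curve(y, scores)
    j = np.argmax(tpr - fpr)            # Youden index picks the reported operating point
    sensitivity, specificity = tpr[j], 1 - fpr[j]
    return auc, sensitivity, specificity
```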
2021-10-30T13:19:22.285Z
2021-10-29T00:00:00.000
{ "year": 2021, "sha1": "63c912d527e6d4e5de80bd1691b1f8a8ec74cfe4", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41531-021-00236-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0db26529592832e30d20fe378c26420d92c233bd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265588145
pes2o/s2orc
v3-fos-license
Maxillary Sinus Lift Procedures: An Overview of Current Techniques, Presurgical Evaluation, and Complications A maxillary sinus lift procedure is indicated if a dental implant needs to be placed in the posterior maxilla with limited bone available to accommodate a dental implant. Both open and closed sinus lifting procedures are reliable approaches for increasing the bone volume needed to support proper implant positioning. However, these methods can lead to several complications. In addition to the general complications commonly linked to oral surgery, such as swelling or hematoma, the primary complication in open sinus lifting is typically the perforation of the Schneiderian membrane during osteotomy. Detailed and extensive presurgical evaluation is crucial to minimize such complications. The objective of this study was to delineate contemporary trends in sinus lift surgery, with a specific emphasis on different techniques of sinus lift procedure, anatomical and surgical factors, presurgical evaluation, bone grafting, and the practical implications of these factors in implant dentistry cases involving a deficient posterior maxilla. In conclusion, while both osteotome and lateral window techniques can assist clinicians in addressing the complexities of implant placement in a deficient posterior maxilla, bone height before implantation remains a critical factor in determining the success and longevity of implants. Introduction And Background The maxillary sinus is the largest among all the paranasal sinuses [1].Two types of bone resorption occur when patients lose their maxillary posterior teeth.The first type is centripetal resorption, which is a natural result of the bone remodeling process following tooth loss.The second type is resorption, caused by the pneumatization of the sinus cavity towards the alveolar crest [2].Both types of resorption often lead to a reduced amount of bone available for placement of dental implants, necessitating a regenerative procedure known as a maxillary sinus lifting procedure.Sinus lifts are regarded as a safe treatment option with a lower risk of complications [3,4].The primary goal of this intervention is to create enough bone height and width to facilitate the proper placement of dental implants.This goal can be achieved using either a one-stage or two-stage technique.The one-stage technique inserts dental implants simultaneously with the sinus augmentation procedure.With the two-stage technique, bone augmentation is performed during the initial surgical procedure, and the dental implants are placed later once the necessary bone volume has been established [5]. 
The traditional sinus lift procedure, initially described by Tatum H [3,6] in the 1970s, involved a combination of incisions. This combination included a crestal incision along with mesial and distal vertical incisions, allowing for the elevation of a buccal flap to expose the outer bone wall of the sinus. Subsequently, a trapdoor osteotomy (window) was made in the lateral bone wall, providing access to the Schneiderian membrane and the sinus cavity. The Schneiderian membrane is the membrane that lines the inner aspect of the maxillary sinus. The membrane was then meticulously dissected and lifted in an apical direction, with particular care taken to preserve its integrity. This displacement of the membrane created space for the graft material. Bone replacement grafts in maxillary sinus lift procedures encompass a variety of materials. These include autologous bone, which can be sourced from the mandibular ramus, chin, iliac crest, or other intraoral locations, as well as bone substitutes, synthetic biomaterials, or combinations of these substances [7,8]. In cases where patients have sufficient remaining bone height, it is possible to augment the sinus floor using a less-invasive method known as the trans-alveolar approach, which involves the use of the osteotome technique. This technique, first employed by Summers RB [7] in 1994, allows sinus floor augmentation without the need for extensive surgery. However, complications are possible during maxillary sinus lift surgery. The most frequently encountered intraoperative complication in maxillary sinus lift procedures is the perforation of the sinus membrane. Other potential complications include postoperative infection, sinusitis, graft exposure, graft loss, edema (swelling), seroma formation (accumulation of fluid), bleeding, and membrane exposure [9][10][11][12]. The objective of this study is to review the maxillary sinus lift procedure, encompassing preoperative assessment, surgical techniques, bone grafting materials, and possible complications.
Review Anatomy The maxillary sinus holds approximately 12-15 mL of air in adults [13].It has a pyramidal shape, with its base near the nasal cavity, the upper part serving as the orbital floor, and the tip toward the zygomatic bone [14].An oval or slit-shaped drainage opening, known as the ostium, functions as an overflow opening and is positioned in the upper part of the inner wall [14,15].The space between the semilunar hiatus and the nasal floor can range from 18 to 35 mm, with an average of 25.6 mm [16].The position of the ostium minimizes the chances that it will be blocked during augmentation procedures [17].The base of the maxillary sinus extends from the premolar or canine region anteriorly and to the maxillary tuberosity posteriorly, often reaching its lowest point near the first molar area [18].In dentate adults, the maxillary sinus floor is the thickest of its walls and lies approximately at the same level as the nasal floor.However, in patients who have lost their teeth (edentulous), it is typically situated about 1 cm below the nasal floor.Septa within the sinus is composed of cortical bone and can be found both horizontally and vertically within the sinus floor [19,20].Some studies have observed septa in approximately 25%-31.7% of maxillary sinuses [21,22], and these septa can range from 2.5 to 12.7 mm in length and be in various locations within the maxillary sinus [11].Notably, there tend to be more septa in edentulous or atrophic (reduced in size) ridges than in partially edentulous or nonatrophic arches [19,21]. Blood supply and innervation The primary branches of the maxillary artery, which supply blood to the bony walls and membrane of the sinus, include the posterior superior alveolar artery, inferior orbital artery, greater palatine artery, and sphenopalatine artery.It is essential to note that the locations of the inferior orbital artery and the posterior superior alveolar artery are crucial considerations in surgical planning, as any damage to these arteries can result in bleeding complications [23,24].These two arteries eventually join together, forming a dual arterial arcade that encircles the maxillary sinus [25].This connection can occur in either an extraosseous manner, typically located about 23-26 mm away from the alveolar ridge, or an endosseous fashion, positioned approximately 16.4-19.6mm from the alveolar margin [24].It is noteworthy that the dental branch of the posterior superior alveolar artery consistently exhibits an endosseous connection with the inferior orbital artery in all dissected anatomical cases; however, this connection is visible on radiographs in only 50% of cases [25][26][27]. The innervation of the maxillary sinus is outlined in Table 1.It represents a distinct connection between the venous system of the maxillary sinus and the cavernous sinus, which is significant because it can potentially serve as a pathway for infections spreading from the sinus to the brain [28][29][30]. 
Presurgical evaluation The presurgical evaluation is preliminarily done through CT or cone-beam computed tomography (CBCT) scans.This evaluation determines essential parameters such as membrane thickness, presence of sinus septa, residual bone height, and presence of teeth.The elevation of the maxillary sinus floor carries a risk of jeopardizing the sinus physiology, and a careful and thorough CBCT evaluation before the procedure can reduce the chances of intra-operative and post-operative complications [31,32].The maxillary sinus is considered healthy when the mucous composition is normal, mucociliary clearance is efficient, and the sinus ostium is patent.These criteria are significant because a healthy maxillary sinus is less likely to develop postsurgical complications, even in the event of a small procedural error, such as a minimal perforation [33]. Risk of perforation The risk of perforation can be associated with irregularity in the membrane thickness, sinus septa, the angle between the buccal and palatal wall, and existing tooth implants or tooth roots adjacent to the sinus [34]. The Schneiderian membrane The Schneiderian membrane is an important parameter during the presurgical analysis [35,36].Membrane thickness of up to 2 mm is considered physiological and favorable; however, thickness exceeding 5 mm is associated with sinus ostium obstruction.Recent CBCT studies indicate that 1 mm is a physiological value and 4 mm is pathological [37][38][39][40]. Sinus septa In approximately 38% of all cases, sinus septa (or Underwood's septa) are found inside the maxillary sinus.Depending on their shape, position, and development, they may threaten membrane integrity during sinus floor elevation, and the presence of these anatomical variations can enhance the risk of perforation [17,38,41].The development of Underwood's septa should be considered in judging the complexity of sinus floor elevation during surgery.If the sinus septum runs transversely, surgery is straightforward, but if it is longitudinal or incomplete, the procedure may become more difficult during membrane elevation [42]. Alveolar-antral artery An intraosseous anastomosis, the alveolar-antral artery, is always present between the posterior superior alveolar artery and the infraorbital artery.However, an extraosseous anastomosis exists in only 44% of cases.Hemorrhage of the alveolar-antral artery is a common complication in sinus lifting procedures.To avoid this, a posterior approach to the bone antrostomy has been suggested.Planning should include a careful evaluation of CBCT to ascertain the course of the artery.Another important consideration is the artery's diameter.If the diameter is less than 1 mm, or if the artery cannot be detected radiographically, the likelihood of severe complications during surgery is minimal.Conversely, if the diameter is 2-3 mm or greater, the risks of hemorrhage and the need to ligate the artery increase [34].Both the diameter and course of the artery are evaluated through CBCT, as shown in Figure 1 [43].CBCT: Cone-beam computed tomography. 
Presence of teeth
The resorption of the alveolar ridge and the maxillary sinus pneumatization are both profoundly influenced by the loss of posterior teeth [44]. When a close relation between the sinus membrane and tooth roots has been detected, especially in the case of a single posterior missing tooth, the perforation risk increases [45]. However, the probability of perforation decreases when two adjacent teeth are missing. This decreased probability could be due to the presence of sinus pneumatization in a small area with an irregular sinus floor shape. Figure 2 shows the relationship between the extraction of teeth and pneumatization of the maxillary sinus [46].
Residual alveolar ridge height
Residual alveolar ridge height has been suggested to significantly influence membrane thickness [47] and the success of implant therapy over time [9]. Alveolar ridge height also plays a major role in implant survival rates [48]. According to some studies [49,50], a pre-implant bone height of less than 5 mm is associated with a decreased survival rate. These findings indicate that a higher success rate could be achieved with greater alveolar bone height.
General considerations
Generally, sinus lifting is indicated with a residual bone height of 10 mm or less (including leaving a space of 1 to 2 mm of bone between the implant apex and the sinus floor level) [48]. The two basic methods for the sinus lifting procedure are the trans-alveolar (crestal osteotome) and the lateral window [51]. If more than 5 mm of bone height is present, the crestal osteotome is the treatment of choice [52]. However, if the ridge height is severely reduced, the use of a lateral window is indicated. This technique can aid in achieving a height of up to 9 mm, which is enough to compensate for the bone shortage [48]. Factors affecting the prognosis of the maxillary sinus lifting procedure are demonstrated in Table 2.
Surgical techniques
The maxillary sinus lift procedure has gained widespread acceptance to reduce postoperative complications in cases of limited bone height in the posterior maxillary alveolar ridge [48]. This procedure is typically advised in cases with bone height in the posterior maxilla of 10 mm or less [53]. Tatum H [54] introduced the initial lateral window procedure in 1975. This technique involves surgically creating an opening in the lateral wall of the sinus and then gently lifting the Schneiderian membrane to facilitate placing the implant(s) of suitable length. The use of the lateral approach is particularly valuable in cases of substantial bone deficits because it allows an increase in vertical bone height by more than 9 mm [48]. The osteotomy can be executed by utilizing either a high-speed handpiece or precise piezoelectric instruments. Using a piezoelectric tip to prepare the window greatly reduces the likelihood of membrane perforation and results in an overall safer procedure [54][55][56]. In 1994, Summers RB [7] was the first to employ the osteotome approach. This technique involves a trans-alveolar elevation of the maxillary sinus floor. This approach offers several advantages, including efficient surgical procedures, reduced surgical duration, fewer complications, lower postoperative discomfort, and increased patient satisfaction. Furthermore, the osteotome approach typically increases vertical bone height from 3 to 9 mm [52,57,58].
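The thresholds quoted under General considerations above can be restated schematically. The snippet below is only a restatement of those published cut-offs for illustration, not a clinical decision tool; the function name and the simplification that residual bone height alone drives the choice are assumptions.

```python
def suggest_approach(residual_bone_height_mm: float) -> str:
    """Schematic restatement of the thresholds quoted in the text (not clinical guidance)."""
    if residual_bone_height_mm > 10:
        return "sinus lift generally not indicated"
    if residual_bone_height_mm > 5:
        return "trans-alveolar (crestal osteotome) approach"
    return "lateral window approach"
```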
To optimize the results of lifting the maxillary sinus, various minimally invasive strategies have been developed to provide an increased level of patient satisfaction [48].The antral membrane balloon elevation is a minimally invasive approach designed to gradually lift the Schneiderian membrane while ensuring its preservation.The membrane is carefully separated by applying gentle and sustained pressure while inflating a latex balloon.This method is considered relatively safer, with minimal postoperative bleeding, pain, or discomfort [59,60].Large-scale longitudinal studies are required to establish the procedure's clinical effectiveness and long-term outcomes [61]. Recently, a new bioactive kinetic screw bone implant model efficiently accomplishes both autogenous grafting and sinus augmentation while also securing the implant in a single procedure.When the vertical bone height in the planned implant site is less than 4 mm, an additional surgical step is undertaken to harvest bone and enhance its availability [62][63][64][65][66]. Experimental studies conducted on synthetic maxillary bone and sinus have successfully demonstrated the feasibility and simplicity of this innovative technique.However, more studies are needed to further evaluate this technique [67]. The selection of the appropriate surgical technique for sinus lift procedures primarily depends on the height of the existing pre-implant bone.The transcrestal approach tends to be preferred when the residual bone height is greater than 5 mm.In cases where the residual bone height is 5 mm or less, the lateral window approach is considered more suitable [7,58,[68][69][70]. Bone grafting materials Bone grafts can be used to promote bone formation and maxillary sinus augmentation can be accomplished through the use of autografts, allografts, xenografts, alloplastic material, and growth factors [71].Autogenous grafts obtained from the same individual are considered the gold standard because of their osteogenic capacity and osteoconductive and osteoinductive properties.Autogenous grafts also heal quickly and have strong resistance to infections.However, the increased morbidity and unpredictable reabsorption associated with autografts have led to the development and use of synthetic substitutes.Allogenic grafts, obtained from another individual in the same species, have only osteoconductive and osteoinductive capabilities.Xenografts, obtained from different species, possess only osteoconductive capability.Alloplastic grafts, whether natural or synthetic, are solely osteoconductive biomaterials [51,72].Adding xenografts to autogenous grafts improves volumetric stability in the sinus augmentation procedure [73].All graft types can be prepared in different forms, such as large blocks or streaky gels.Some authors have suggested performing sinus lift procedures without grafting materials by utilizing coagulated blood as a scaffold to form new bone.However, this technique was not evaluated alongside appropriate control procedures, and the results were not reproducible [4].A different approach was introduced in managing cases using plateletrich plasma or plasma rich in growth factors, with or without grafting biomaterials.The preparation of platelet-rich plasma involves the use of citrate in blood samples to prevent coagulation and maintain a liquid form.To prepare the gel form, thrombin and/or calcium chloride are added to induce fibrin polymerization [74].Platelet-rich fibrin is considered a second-generation platelet concentrate, offering 
additional advantages such as enhanced healing capabilities, low cost, and ease of handling.Platelet-rich fibrin can improve new bone formation, and significant results can be obtained after a sinus lift procedure [75].The use of platelet-rich fibrin can also improve implant stability and the osseointegration process [76].Platelet-rich fibrin is currently a trend in the management of sinus lift procedures and is considered superior to first-generation concentrates [77]. Complications of maxillary sinus lift procedure Just as with any other surgical procedure, a sinus lift is associated with various complications, including intraoperative complications, acute postoperative complications, and chronic postoperative complications [78]. Intraoperative Complications Common complications that can occur during maxillary sinus graft surgery are the perforation of the Schneiderian membrane, penetration into the sinus or nasal cavity, bleeding, damage to the adjacent teeth, bone fracture, perforation of the alveolar bone, inadequate initial implant stability, incorrect placement or alignment of the implant, blockage of the opening to the maxillary sinus, and accidental swallowing of surgical instruments [78]. Tearing of the Schneiderian membrane: Tearing of the Schneiderian membrane is the most frequently encountered complication during maxillary sinus graft procedures.The incidence of this complication falls within the range of 20%-44% when the lateral window approach is used [51].Ardekian L et al. [79] reported that perforation of the sinus membrane happened in 85% of cases with a residual ridge measuring 3 mm, whereas in cases with a 6-mm residual ridge, membrane perforation only occurred in 25% of cases.Minor perforations may not necessitate treatment, but in the event of a significant perforation, the procedure should either be halted, or a collagen membrane should be applied to repair the perforation.If the procedure is stopped, a subsequent attempt should not be made for another 4-6 months [10]. Bleeding: The maxillary sinus region contains a network of blood vessels, with the primary vessel being the maxillary artery.This artery gives rise to multiple branches, including the intraorbital artery, the anterior superior palatine artery, and the posterior superior alveolar artery, that supply blood to the sinus cavity and the adjacent tissues and structures.Numerous connections (anastomoses) are typically observed between the posterior superior alveolar artery and the infraorbital artery within the lateral bony wall of the sinus.These connections play a crucial role in ensuring adequate blood circulation in this region [80].Bleeding can occur if arteries are damaged during the preparation of the lateral window.To mitigate this risk, it is advisable to identify the location of the artery prior to surgery using CBCT [2].To address a severed vessel, various methods have been suggested, including applying strong pressure, directly tying off the vessel, introducing particulate bone graft into the arterial canal, using bone wax, smoothing the area with burs, and employing electrocautery.Additionally, having the patient sit upright can help reduce blood flow by 38%, aiding in the control of bleeding [28]. 
Acute Postoperative Complications
Immediate postoperative complications include discomfort, inflammation, swelling, infection affecting both the surgical area and the sinus, sinusitis, bone loss, bleeding, bruising around the mouth and nose, and hematoma (particularly hemosinus). Other potential issues include the presence of emphysema, wound opening, graft loss, fixture displacement or loss, the formation of an oroantral fistula, benign paroxysmal positional vertigo, and transient or permanent numbness in the palate [78].
Chronic Postoperative Complications
While implant periapical lesions are infrequent in the maxilla, they can arise in clinical situations in which excessive heat is generated during the drilling process. When the bone is assessed as hard, a longer time gap (at least one minute) between drilling stages is advised. Additionally, utilizing chilled saline instead of the standard room-temperature saline solution can be beneficial [78].
Conclusions
In conclusion, maxillary sinus augmentation is an effective preprosthetic method for enhancing the edentulous posterior maxilla. A thorough presurgical evaluation of sinus anatomy significantly lowers the likelihood of complications. Using growth factors and stem cells is a promising technique to improve graft maturation time, although additional clinical research is required to fully assess their advantages.
FIGURE 1: CBCT showing both the diameter and course of the alveolar-antral artery.
FIGURE 2: A panoramic image showing reference lines drawn and perpendicular distances measured from the crest of the bone to the maxillary sinus floor.
2023-12-04T17:43:34.516Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "77ada3b231f546e26b44f4ff34342c1d0fd4d0c1", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/review_article/pdf/208202/20231128-9741-181nled.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3fcaaff65014d4b798b4306b3ad48387854bb69b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248496801
pes2o/s2orc
v3-fos-license
Local permutation polynomials and the action of e-Klenian groups Permutation polynomials of finite fields have many applications in Coding Theory, Cryptography and Combinatorics. In the first part of this paper we present a new family of local permutation polynomials based on a class of symmetric subgroups without fixed points, the so called e-Klenian groups. In the second part we use the fact that bivariate local permutation polynomials define Latin Squares, to discuss several constructions of Mutually Orthogonal Latin Squares (MOLS) and, in particular, we provide a new family of MOLS on size a prime power. Introduction Let q be a power of prime p, F q be the finite field with q elements and F n q denote the cartesian product of n copies of F q , for any integer n ≥ 1. Also let us use the notation x = (x 1 , . . . , x n ) and x i = (x 1 , . . . , x i−1 , x i+1 , . . . x n ). The ring of polynomials in n variables over F q will be denoted by F q [x]. It is well known that any map from F n q to F q can be uniquely represented as f ∈ F q [x] such that deg x i (f ) < q for all i = 1, . . . , n, where deg x i (f ) is the degree of f as a polynomial in the variable x i with coefficients in the polynomial ring F q [x i ], see [5]. Throughout this paper, we identify all functions F n q → F q with such polynomials, and every polynomial, will be of degree deg x i (f ) < q, unless otherwise specified. We say that a polynomial f ∈ F q [x] is a permutation polynomial if the equation f (x) = a has q n−1 solutions in F n q for each a ∈ F q . A classification of permutation polynomials in F q [x] of degree at most two is given in [10], see also [5] for several properties and results and the particular case n = 1. A polynomial f ∈ F q [x] is called a local permutation polynomial (or LPP) if for each i, 1 ≤ i ≤ n, the polynomial f (a 1 , . . . , a i−1 , x i , a i+1 , a n ) is a permutation polynomial in F q [x i ], for all choices of a i ∈ F n−1 q . Clearly any LPP is a permutation polynomial. The opposite is not true in general. We can see that by simply considering the permutation polynomial f (x) = x q−1 1 + x 2 , which is not an LPP since f (x 1 , a 2 , . . . , a n ) takes only the two values a 2 and a 2 + 1. The author of [8] and [9] gives necessary and sufficient conditions for polynomials in two and three variables to be local permutations polynomials over a prime field F p . These conditions are expressed in terms of the coefficients of the polynomial. A recent result about degree bounds for n local permutation polynomials defining a permutation of F n q is presented in [1]. One of the main contribution in the first part of this paper is a general construction of a family of local permutation polynomials based on a class of symmetric subgroups without fixed points, the so called e-Klenian groups. In the second part of the paper we are interested in Latin Squares, namely t × t matrices with entries from a set T of size t such that each element of T occurs exactly once in every row and every column of the matrix. It is known that every Latin square can be represented by an LPP, f (x, y) ∈ F q [x, y], (see Lemma 24) and the relevance of this representation for the study of Latin squares (also cubes) are described in [8] and [9]. Latin squares occur in many structures such as group multiplication tables and Cayley tables. To be precise Latin squares are referred to as the multiplication tables of an algebraic structure called a quasigroup. 
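Because these definitions only involve finitely many evaluations, they can be checked exhaustively for small q. The following sketch (plain Python over a prime field, written for this exposition rather than taken from any reference) confirms the introductory example: f(x_1, x_2) = x_1^{q-1} + x_2 over F_5 is a permutation polynomial but not a local permutation polynomial.

```python
from itertools import product

q = 5  # a prime, so F_q is just the integers modulo q

def f(x1, x2):
    return (pow(x1, q - 1, q) + x2) % q

# Permutation polynomial: every value a is attained at exactly q^{n-1} points of F_q^2.
counts = {a: 0 for a in range(q)}
for x1, x2 in product(range(q), repeat=2):
    counts[f(x1, x2)] += 1
print(all(c == q for c in counts.values()))        # True: f is a permutation polynomial

# Local permutation polynomial: every one-variable section must be a bijection of F_q.
def is_lpp(f):
    for a in range(q):
        if len({f(x, a) for x in range(q)}) != q:  # x -> f(x, a)
            return False
        if len({f(a, y) for y in range(q)}) != q:  # y -> f(a, y)
            return False
    return True

print(is_lpp(f))   # False: f(x1, a) only takes the two values a and a + 1
```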
Two Latin squares L 1 and L 2 of order t are orthogonal if by superimposing them one obtains all ordered pairs (t i , t j ) ∈ T 2 , (i, j = 1, . . . , t), and mutually orthogonal latin squares (MOLS) are sets of Latin squares that are pairwise orthogonal. The construction of MOLS is a notoriously difficult combinatorial problem and it is one of the most studied research topics in design theory [7]. This interest is also due to the numerous applications that MOLS have in other fields such as cryptography [12], coding theory and many others, see [3,6,13]. We focus on Latin squares of prime p and prime power q = p r order. The goal of the second part of this paper is providing a big family of MOLS based on the local permutation polynomials introduced in the previous part. The remainder of the paper is structured as follows. We start with some general properties and preliminary results on local permutation polynomials in Section 2. Due to the one to one map between Latin squares and local permutation polynomials Section 3 is consecrated to polynomials only with two variables and we provide new families of such local permutation polynomials, the so called e−Klenian polynomials. In Section 4 we show general constructions of MOLS and, in particular, one based on e−Klenian polynomials. We conclude with Section 5, which makes some final comments and poses open questions. Elementary properties and families of local permutations polynomials Our first observation in this section will be related with the degree of local permutation polynomials. For two variables, it is shown in [2] that the degree of a LPP in F q [x 1 , x 2 ] is bounded above by 2(q − 2). The next result gives a natural generalization of this bound to several variables. Proposition 1. Let n ≥ 2 be an integer. Any local permutation polynomial f ∈ F q [x] is linear if q = 2 and has degree at most n(q − 2) otherwise. Proof. It is straightforward if q = 2, so let assume q > 2 and deg x i (f ) < q. We will prove that deg x i (f ) < q − 1 for every variable x i for i = 1, . . . , n, and for that, clearly it is enough to prove it for i = 1, the rest being analogous. Then, we write the polynomial Suppose that M q−1 is a nonzero polynomial, then there exists (a 2 , . . . , a n ) ∈ F n−1 q such that 0 = M q−1 (a 2 , . . . , a n ) ∈ F q , but then f (x 1 , a 2 , . . . , a n ) ∈ F q [x 1 ] is a univariate permutation polynomial of degree q − 1, which is a contradiction, since there is no permutation polynomial of F q of degree a divisor of q − 1, see [5]. Note that, apart from the trivial case n = 1, for q = 2 any permutation polynomial is also a LPP, since as we have seen they are linear. One of the main goals in the theory is to find new families of local permutation polynomials. The next two results can be used to construct some of them. It is known that, If at least one of g and h is a permutation polynomial over F q , then f is a permutation polynomial over F q , and the inverse is also true when q is prime, see [11]. However for LPP we have the inverse for any q, not necessarily prime. Then f is an LPP if and only if g and h are local permutation polynomials. Proof. It is immediate from the fact that any polynomial g is a permutation polynomial if and only if g + a is also permutation polynomial, for any constant a ∈ F q . The following provide another way to construct local permutation polynomials. Theorem 3. Let f ∈ F q [x] be a (local) permutation polynomial. For any permutation polynomial Proof. 
Both of them are trivial consequence of the fact that composition of univariate permutation polynomial is again a permutation polynomial. The previous results can be used to find local permutation polynomials with the maximum degree allowed by Proposition 1, and hence extending the result in paper [2] where the authors proved that there are local permutation polynomials in F q [x, y] of sharp degree 2q − 4 for q > 3. For instance, since g(x) = x 3 is a permutation polynomial in F 5 [x] and, hence, also an LPP since n = 1, and h(x, y, z) = x 3 + y 3 + z 3 is an LPP by Theorem 2, we have that is a LPP in F 5 [x, y, z] by Theorem 3, and has degree 9 = 3(5 − 2). In fact the previous idea can be generalized for more general q, n. We can prove the following theorem Theorem 4. Let q = p prime and let 1 ≤ n < p an integer such that gcd(n, p − 1) = 1. There exist an LPP in F p [x] of degree n(p − 2). Proof. Note that f (x) = x n and g(x) = x p−2 are permutation polynomials in F p , since gcd(p − 1, n) = gcd(p − 1, p − 2) = 1, see [5]. Now by Theorems 2 and 3, h(x) = (g(x 1 ) + · · · + g(x n )) n is an LPP. So to prove the theorem it is enough to prove that the degree is n(p − 2). Note that this is equivalent to prove that there is a nonzero monomial of degree n(p − 2). Now let us call y i = x p−2 i and S n = y 1 + · · · + y n . Then h(x) = S n n is a form of degree n, so all its monomials are of the form Ay e 1 1 . . . y en n , for e 1 + · · · + e n = n, so the only monomials divisible by y 1 . . . y n are of the form Ay 1 . . . y n for some A ∈ F p . Since S n n = (y 1 + · · · + y n ) . . . (y 1 + · · · + y n ), the monomial y 1 . . . y n will appear only when selecting one distinct variable from each factor. Now, we have n different factors to choose y 1 , n − 1 to choose y 2 and so on, until it remains one factor to choose y n , so in particular the monomial y 1 . . . y n appears n! times, which is non zero, since p ∤ n!. Hence h(x) has the non zero monomial n!x p−2 For the case p = 3, n = 2, we know there is no LPP of sharp degree since we know that all the local permutation polynomials in F 3 [x, y] are linear. For q > 3 and n = 2 , following the same line of reasoning we get a new simpler proof of the result in [2]. For that we need the following lemma which gives the polynomial describing any permutation in F q as the composition of transpositions and cycles of maximal length. The following result is partially cover in [5]. x k permutes 1 and 0, and leave fixed any other element in F q . In general for is a permutation polynomial representing the transposition (ab) On the other hand, if α is a primitive element in F * q then the polynomial is a permutation polynomial representing a cycle of length q. The proof is straightforward. Now we are in a position to prove the following theorem. Theorem 6. For any q > 3 a power of prime q = p s there exist an LPP in F q [x, y] of degree 2(q − 2). Proof. The case F 4 is given by the example p(x, y) = ux 2 y 2 + (u + 1) x 2 y + (u + 1) xy 2 + xy + y 2 + ux + 1, where u 2 + u + 1 = 0. So suppose q ≥ 5, odd. Consider the polynomial in F q [x, y] given by It is an LPP since it is the composition of an LPP and a permutation polynomial by Theorem 2 and Lemma 5. Expanding it we have For any other j = 1, j(q − 2) ≡ q − 2 (mod q − 1), meanwhile for j = 1 and any other k we have that (k − j)(q − 2) ≡ q − 2 (mod q − 1) and hence M is the only monomial of degree 2(q − 2). Now suppose q ≥ 8 a power of 2, and let q 2 = q−2 2 . 
Consider Again jq 2 ≡ q − 2 (mod q − 1) only if j = 2 and on the other hand we have Bivariate local permutation Polynomials Local permutation polynomials in two variables F q [x, y] correspond to Latin squares of order q. This section provides new families of local permutation polynomials in F q [x, y]. Permutation polynomial tuples Let Σ q be the permutation group with q elements and F q = {c 0 , . . . , c q−1 } the field with q = p r elements. Given a permutation polynomial f ∈ F q [x, y], then for each c i ∈ F q , i = 0, . . . , q − 1, we define the set (1) Since f is a permutation polynomial, it follows that Also, if we consider an LPP, then we see that, for each 0 ≤ i ≤ q − 1, there exist a permutation β i ∈ Σ q such that, So, the above study allows to describe local permutation polynomials as q-tuples of permutations: There is a bijective map between the set of local permutation polynomials f ∈ F q [x, y], and the set of q-tuples β f = (β 0 , . . . , β q−1 ) such that β i ∈ Σ q , (i = 0, . . . , q − 1) and for i = j, β −1 i β j has no fixed points. Proof. We have already seen how to associate a q−tuple of permutation to a given LPP. For the other direction, note that given a q−tuple (β 0 , . . . , β q−1 ) with β i ∈ Σ q , i = 0, . . . , q − 1, and no fixed points as defined above, we can construct the set A i as in equation (2). Then Lagrange Interpolation algorithm would return the polynomial, completing the proof. Remark 8. Note that the q-tuple can be similarly defined acting on the first variable as Let us illustrate the above result by an example: are the product of four transpositions: The example has been created with SageMath, and it can also be used to verify that indeed, for i = j then β −1 i β j has no fixed points. Remark 10. Another interesting fact is that given an LPP f , its associated partition A i of F 2 q , and any σ ∈ Σ q the sets A σ(i) for i = 0, . . . , q − 1 form a new partition of F 2 q , and consequently it provides a new LPP g(f (x, y)), where g(z) ∈ F q [z] is the permutation polynomial associated to the permutation σ, see also . From Lemma 7 we can translate the study of local permutation polynomials to the study of tuples (β 0 , . . . , β q−1 ) ∈ Σ q q , such that β −1 i β j has no fixed point, for i = j. This suggests the following definition: Definition 11. We say that (β 0 , . . . , β q−1 ) ∈ Σ q q is a permutation polynomial tuple if it satisfies that for i = j, β −1 i β j has no fixed point. From a permutation polynomial tuple we have q! local permutation polynomials, just by permuting its elements, see Remark 10. In fact, from one permutation polynomial tuple we can construct many other local permutation polynomials as is shown in the next result: q be a permutation polynomial tuple and let σ, δ ∈ Σ q , then σΩδ = (σβ 0 δ, . . . , σβ q−1 δ) ∈ Σ q q is also a permutation polynomial tuple. The Proposition 12 motivates the following concept: Definition 13. Two permutation polynomial tuples Ω and Γ are equivalent if there exit σ, δ ∈ Σ n such that σΩδ = Γ. Similarly, we say that two local permutation polynomials f and g are equivalent if the corresponding permutation polynomial tuples β f and β g are equivalent. It is straightforward to check that the above is an equivalence relation defined in the set of local permutation polynomials. Observe that every class has a representative containing the identity. If needed we will use this representative. We will see later that in F 2 and F 3 there is only one equivalence relation class, and two in F 4 . 
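Lemma 7 is easy to make concrete for a small example. The sketch below (plain Python over F_5, an illustration rather than the SageMath code mentioned above) extracts the tuple (beta_0, ..., beta_{q-1}) from a simple LPP by solving f(x, y) = c_i for y, and then verifies the defining condition that beta_i^{-1} beta_j has no fixed point whenever i and j differ.

```python
q = 5

def f(x, y):                       # any LPP over F_5 works here; take a simple linear one
    return (x + 2 * y) % q

def beta(i):
    """beta_i sends x to the unique y with f(x, y) = i (the tuple of Lemma 7)."""
    out = []
    for x in range(q):
        ys = [y for y in range(q) if f(x, y) == i]
        assert len(ys) == 1        # exactly one solution because f is an LPP
        out.append(ys[0])
    return tuple(out)

betas = [beta(i) for i in range(q)]

def compose(s, t):                 # (s o t)(x) = s(t(x))
    return tuple(s[t[x]] for x in range(q))

def inverse(s):
    inv = [0] * q
    for x, y in enumerate(s):
        inv[y] = x
    return tuple(inv)

# Defining property of a permutation polynomial tuple:
# beta_i^{-1} beta_j is fixed-point free for i != j.
print(all(compose(inverse(betas[i]), betas[j])[x] != x
          for i in range(q) for j in range(q) if i != j
          for x in range(q)))      # True
```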
Permutation Group Polynomial A significant permutation polynomial tuple is given by a permutation subgroup of Σ q . Definition 14. We say that an LPP Note that a subgroup of Σ q is a permutation polynomial tuple if and only if it has no fixed points, i.e, it is a subgroup such that, apart from the identity, none of its elements has fixed points. Clearly, if C is a cycle of maximum length q, then the cycle subgroup < C > generated by C is a group without fixed points. Next, we describe another family of such subgroups. We will denote C to be a cycle of length |C|. Sometimes, we will use a subindex in the cycle if we need to order cycles. Lemma 15. Let q = p r , G ⊂ Σ q be a nontrivial subgroup without fixed points, and α ∈ G. Then there is an 0 < e ≤ r such that for t = p e and k = p r−e we have α = C 1 · · · C k where |C i | = t for all i = 1, . . . k. Proof. Suppose α = C 1 · · · C k is the representation of α as product of disjoint cycles, and suppose |C 1 | = t 1 < t 2 = |C 2 |. Then α t 1 ∈ G, is not the identity but fixes all the elements in C 1 . Hence, all the cycles have the same length, say, t. Now by Lagrange theorem there exits 0 < e ≤ r such that t = p e . Finally, we remark that k × t = p r , since each element of F q should appear in that representation. In order to find subgroups without fixed points we will use the following technical result. Note that by Lemma 15 the permutations will be products of cycles of the same lenght. Then for any 0 ≤ a ≤ l − 1 and 0 ≤ b ≤ t − 1, β b α a has no fixed points and Proof. We write the elements of F q as c j+il for some 0 ≤ j ≤ l − 1 and 0 ≤ i ≤ t − 1. Then This proves the first claim since (j + a) ≡ j (mod l) unless a = 0, and in that case (i + b) (mod t) ≡ i (mod t) unless b = 0. Moreover α a β b (c j+il ) = α a (c j+(i+b) (mod t)l ) = c (j+a) (mod l)+(i+b) (mod t)l , which proves commutativity. With the above notations and definitions, let C α be the matrix of t rows C i,α , (i = 0, . . . , t − 1) and l columns; let C β be the matrix of l rows C j,β , (j = 0, . . . , l − 1) and t columns; Notice that C α is the transpose matrix of C β . Corollary 17. Let α, β be as in the previous Lemma 16. Then the set defined by is a subgroup of F q without fixed points and order |G| = q. Proof. We have already seen that it is a group without fixed points, so the only thing to see is that |G| = q, which follows since clearly α a β b are all The previous study suggests the following definition: Definition 18. We will call an e-Klenian subgroup to any group of the form given in the Corollary 17. Also we say that a polynomial f ∈ F q [x, y] is an e-Klenian polynomial if f is a permutation group polynomial and the associated group G β f is an e−Klenian subgroup. On the other hand we have the following group polynomials, not e-Klenians. The first is associated with the tuple given by the non-abelian group of order 8 H 1 =< α, β > generated by each of them being the product of four disjoint cycles of length 2. It is straightforward to check that the subgroup H 1 has no fixed points. In this case the local permutation group polynomial associated to H has degree 12 and 46 monomials. Now, we consider another non-abelian group of order 8 without fixed points H 2 =< α, β > generated by permutations which are the product of two disjoint cycles of length 4: α = (0, u, u 2 , u 3 )(u 4 , u 5 , u 6 , u 7 ), β = (0, u 4 , u 2 , u 6 )(u, u 5 , u 3 , u 7 ) In this case the local permutation group polynomial associated with H 2 has degree 10 and 42 monomials. 
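The construction of Lemma 16 and Corollary 17 can be reproduced directly on indices. The sketch below (plain Python, with q = 8 and e = 1 so that l = 2 and t = 4; an illustration, not code from the paper) builds the e-Klenian group G = {alpha^a beta^b} as permutations of {0, ..., q-1} and checks that it has order q and that no non-identity element has a fixed point.

```python
from itertools import product

p, r, e = 2, 3, 1
q = p ** r
l, t = p ** e, p ** (r - e)      # l * t = q; an element of F_q is indexed as j + i*l

def alpha(x):                    # shifts j -> j + 1 (mod l) inside its block
    j, i = x % l, x // l
    return (j + 1) % l + i * l

def beta(x):                     # shifts the block index i -> i + 1 (mod t)
    j, i = x % l, x // l
    return j + ((i + 1) % t) * l

def apply_n(g, n, x):
    for _ in range(n):
        x = g(x)
    return x

# G = { alpha^a beta^b : 0 <= a < l, 0 <= b < t } written as permutations of {0, ..., q-1}
G = {tuple(apply_n(alpha, a, apply_n(beta, b, x)) for x in range(q))
     for a, b in product(range(l), range(t))}

identity = tuple(range(q))
print(len(G) == q)                                    # True: |G| = q (Corollary 17)
print(all(perm[x] != x for perm in G if perm != identity
          for x in range(q)))                         # True: no fixed points off the identity
```

The same check works for any q = p^r and 0 < e <= r by changing p, r and e above.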
Not only distinguish e-klenians polynomials, but only count them all is non trivial. We have not seen in the literature significant results on this finite group problem. On the other hand, this problem has a straightforward solution when we restrict to e = 0. Indeed the number of cycles of maximal lenght in Σ q is (q − 1)!, and a subgroup generated by a cycle of length q contains exactly ϕ(q) generators, the prime powers of the cycle, so the number of 0-Klenian groups of Σ q is (q−1)! ϕ(q) . Now, for each group, we need to order its elements to get the partitions associated to the polynomial, so the total number of 0 klenian polynomials in F q for q = p r is Let us not that e = 0 is the only case appearing when we restrict to prime fields F p , since any permutation group polynomial of F p [x, y] should be a cycle subgroup of order p. In fact, since any two cycle subgroups of Σ q are conjugated, all e-Klenian polynomials in F p [x, y] are equivalent. We can generalize a bit this result to the following: Then, h is equivalent to an e−Klenian polynomial if and only if for any is an e-Klenian group for l = p e and t = p r−e . Proof. Suppose f is equivalent to an e-Klenian group G = {α i β j : 0 ≤ i ≤ l − 1, 0 ≤ j ≤ t − 1} as in Corollary 17. Then for some permutations σ, γ we have is also an e-Klenian group for e = p l since conjugates of cycles are cycles of the same length. Now, note that in particular as wanted. Corollary 20. There are exactly (q − 1)!N local permutation polynomials equivalent to e-Klenian polynomials over F q , where N is the number of e−Klenian polynomials. In particular if q = p is prime, we have exactly p!(p − 1)!(p − 2)! local permutations polynomials equivalent to a 0-Klenian polynomial. Proof. Every polynomial equivalent to an e-Klenian polynomial is of the form µG where µ ∈ Σ q and G is the q-tuple of an e-Klenian polynomial. Now, the only way of getting two equal polynomials would be if we have µ 1 G 1 = µ 2 G 2 and hence, G 1 = µ −1 1 µ 2 G 2 but then since G 2 contains the identity, µ −1 1 µ 2 ∈ G 1 and, since G 1 is a group its inverse is also in G 1 so we get that G 2 = µ −1 2 µ 1 G 1 = G 1 so for each G, µG gives new polynomials unless µ ∈ G. Since we have q! permutations, we get (q − 1)!N equivalent polynomials, as wanted. The proof of the second assert follows from Equation 3. Local permutation polynomials in In this subsection we show that all local permutation polynomials over the F 2 , F 3 and F 4 are described by e-Klenian polynomials. The finite field F 2 In this case the degree is q − 1 = 1, and hence the only local permutation polynomials over F 2 = {0, 1} are x + y and x + y + 1, which correspond to the only permutation polynomial set Ω = {(I d , β)} ⊂ Σ 2 , where β = (0, 1) is the only cycle of length 2. The two polynomials appear from the two permutations of the two elements of Ω. The finite field F 3 It is known that the number of local permutation polynomials over the field F 3 = {0, 1, 2} is 12, see [8] and, by Corollary 20 we see that they are alll equivalento to e-Klenian polynomials. In fact, we have one 0-Klenian subgroup of Σ 3 generated by the cycle β = (0, 1, 2), giving six 0-Klenian polynomials by Equation 3, and another 6 equivalent to them. The finite field F 4 It is known that the number of Latin squares of order 4 are 576, so we have the same number of local permutation polynomials of F 4 . We will use the following description F 4 = {0, u, u 2 , u 3 } = {0, u, u + 1, 1} such that u 2 + u + 1 = 0. 
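The count of 0-Klenian subgroups is small enough to verify by exhaustive search. The sketch below (plain Python, an illustration only) lists all q-cycles of Sigma_5 and the cyclic subgroups they generate, recovering (q-1)! = 24 cycles and (q-1)!/phi(q) = 6 subgroups.

```python
from itertools import permutations

q = 5

def is_q_cycle(perm):
    # perm is a q-cycle exactly when the orbit of 0 has length q
    x, seen = 0, set()
    while x not in seen:
        seen.add(x)
        x = perm[x]
    return len(seen) == q

def generated_subgroup(perm):
    identity = tuple(range(q))
    elems, current = [], identity
    while True:
        elems.append(current)
        current = tuple(perm[current[x]] for x in range(q))  # compose with perm
        if current == identity:
            break
    return frozenset(elems)

q_cycles = [s for s in permutations(range(q)) if is_q_cycle(s)]
subgroups = {generated_subgroup(s) for s in q_cycles}

print(len(q_cycles))    # 24 = (q - 1)!
print(len(subgroups))   # 6  = (q - 1)!/phi(q), since phi(5) = 4
```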
In total there are 4 e-Klenian subgroup . With e = 0, there are three cycle groups of order 4 generated by β i , for i = 1, 2, 3, giving 24 1-Klenian polynomials, and again by Corollay 20 144 local permutation polynomials equivalent to them, giving a total of 576. The finite field F 5 These constructions do not complete the list in other fields of the cardinality bigger than 4. In F 5 , the number of e-Klenian subgroups is 6, giving 720 0-Klenian polynomials by Equation 3, and producing 17280 local permutation polynomials equivalent to them by Corollary 20. On the other hand, it is known that the number of Latin squares of order 5 are 161280. The next example shows an LPP of degree 6 that it is not one obtained by e-Klenian polynomials. Example 21. We will construct a polynomial over F 5 non equivalent to a 0−Klenian polynomial. We need a 5-tuple {β 0 , . . . , β 4 } so that β −1 j β i has no fixed points for any 0 ≤ i < j ≤ 4. So, we first select β 0 ∈ Σ 5 at random. Now, we will need to find α 1 , . . . , α 4 with no fixed points, and consider In order to find an appropriate tuple for an LPP we need α i to verify another condition, namely α i α −1 j to be with no fixed points. Observe that this is similar to the condition on the β's but now 1 ≤ i < j ≤ 4. So, we continue this process and, next, we select at random α 1 ∈ Σ 5 any permutation with no fixed points, and try to find γ 2 , γ 3 , γ 4 without fixed points so that has no fixed points. This will give and then the needed tuple by (4). We start with any permutation β 0 , for example β 0 = (0, 1), and now since the roles of β's and γ's is similar we take Example 22. It is well known that there are two isotopy class of latin squares of size 5, one of them is equivalent to 0-Klenian polynomial and the other one is not, see http://users.cecs.anu.edu.au/bdm/data/latin.html, so we consider the LPP in F 5 [x, y] in the non equivalent class given by f = 2x 3 y 3 + 2x 3 y 2 + 3x 2 y 3 + 2x 3 y + 2xy 3 + x 2 y + 2xy 2 + 2xy + x + y. Orthogonal system of polynomials and Mutually Orthogonal Latin Squares Let us recall the Latin square's definition. In this paper we only consider Latin squares of order a prime power. Definition 23. A latin square of order q is a q × q matrix L with entries from F q such that each element of F q occurs exactly once in every row and every column of L. See [4] for several properties and applications of Latin squares. Further relevance of the use of local permutation polynomials for the study of Latin squares or cubes are described in [8] and [9]. By indexing the cells of L by F 2 q , we have the following known result: Lemma 24. There is a bijective map between Latin squares of order q and local permutation polynomials of F q [x, y]. Proof. Indeed, given a Latin square L over F q with entries a i,j ∈ F q , we consider the Lagrange interpolation polynomial with values f (c i , c j ) = a i,j . Note that, dividing by x q − x and y q − y we can assume deg x (f ) < q and deg y (f ) < q. The converse is clear. We now introduce the orthogonality property of Latin squares: Definition 25. Two Latin squares L 1 and L 2 of order q are called orthogonal Latin squares if for all distinct pairs of coordinates (i 1 , j 1 ), (i 2 , j 2 ) ∈ N 2 . Equivalently, two Latin squares of the same size (order) are said to be orthogonal if, when superimposed, each position has a different pair of ordered entries. In terms of polynomials, the following classical definition appears in [10]: has q n−m solutions in F n q for each (a 1 , . . . 
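The explicit polynomial of Example 22 can be checked by brute force. The sketch below (plain Python over F_5; verifying non-equivalence to a 0-Klenian polynomial is a separate computation not attempted here) confirms that every row and column section of f is a bijection, so f is indeed an LPP.

```python
q = 5

def f(x, y):
    # the polynomial of Example 22, evaluated modulo 5
    return (2*x**3*y**3 + 2*x**3*y**2 + 3*x**2*y**3 + 2*x**3*y + 2*x*y**3
            + x**2*y + 2*x*y**2 + 2*x*y + x + y) % q

rows_ok = all(len({f(x, a) for x in range(q)}) == q for a in range(q))
cols_ok = all(len({f(a, y) for y in range(q)}) == q for a in range(q))
print(rows_ok and cols_ok)    # True: f is a local permutation polynomial
```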
, a m ) ∈ F m q . In the special case m = 1, a permutation polynomial alone forms an orthogonal system. On the other hand, if m = n this means that the orthogonal system f 1 , . . . , f n induces a permutation of F n q . These permutations are completely classified in [10] for the special case when the orthogonal system contains polynomials of degree at most two. See also [5] for further properties and results about those interesting systems. An immediate consequence of Definition 26 and Lemma 24 is the following: Corollary 27. Two Latin squares L 1 and L 2 are orthogonal if and only if the associated polynomials form an orthogonal system. The main goal in this part of the paper is constructing families of orthogonal Latin squares. This brings us to the next definition: Definition 28. Given a permutation polynomial f ∈ F q [x, y] we say that g is a companion of f if (f, g) : F 2 q → F 2 q defines a permutation, that is, f, g is an orthogonal system. Obviously any companion must be a permutation polynomial. The following result counts the number of companions: Theorem 29. A permutation polynomial f has exactly (q!)^q companions. Proof. We consider the partition of F 2 q given in Equation (1). Now, consider a q-tuple {σ 1 , . . . , σ q } ⊂ Σ q , and define the polynomial g such that g(a i,j , b i,j ) = σ i (c j ), j = 0, . . . , q − 1. Now, every pair (c i , c k ) ∈ F 2 q can be determined uniquely as (c i , σ i (c j )) and, hence, the equation (f, g) = (c i , c k ) has exactly one solution for each pair (i, k) ∈ [1, . . . , q] 2 . Hence, each selection of a q-tuple gives a different g; in particular we have q! ways of choosing each σ i , and in total there are (q!)^q companions. On the other hand, if g is a companion, g(A i ) = F q and clearly there is a bijection h i : F q → A i , so there is a q-tuple of permutations σ i = g • h i associated to g. The problem is more interesting when we consider local permutation polynomials, that is, Latin squares. Question 30. Is it true that any LPP has a companion which is also an LPP? The answer in general is no. For example, for q = 2, the only local permutation polynomials are x + y and x + y + 1, and these obviously do not form an orthogonal system of polynomials. For q = 4 we find, after some computations with SageMath, that only 144 of the total of 576 local permutation polynomials that exist in F 4 have LPP companions, and each of them has exactly 48 companions. In general we have several ways to find orthogonal systems. First, if we restrict to the linear case we have the following theorem. Theorem 31. For q ≥ 3, every linear LPP has companions which are also linear LPPs. Proof. Let f (x, y) = ax + by + c be an LPP. Observe that any linear polynomial of this form with ab ≠ 0 is indeed an LPP, trivially. Now consider g = ux + vy + w so that av − bu ≠ 0. Then (f, g) are companions since any linear system with nonzero determinant has a unique solution. Observe that, in general, permutation polynomials have many companions. We can take for example v = (c+1)b, u = ca for any c ∈ F q , c ≠ 0, −1. The same example serves to see that different polynomials can share the same companion. Also, given an orthogonal system, we can construct new ones with the following simple result. Proposition 32. If f (x, y), g(x, y) is an orthogonal system, then the polynomials af (x, y) + bg(x, y), cf (x, y) + dg(x, y) also form an orthogonal system for a, b, c, d ∈ F q such that ad − bc ≠ 0. Proof.
For any pair (c i , c j ) ∈ F 2 q , the system of equations af (x, y) + bg(x, y) = c i , cf (x, y) + dg(x, y) = c j has a unique solution, obtained by inverting the 2 × 2 matrix A with rows (a, b) and (c, d). Another family of orthogonal systems is provided by separated-variable polynomials: Proposition 33. If f, g, h 1 , h 2 ∈ F q [x] are permutation polynomials, then f (ah 1 (x) + bh 2 (y)), g(ch 1 (x) + dh 2 (y)) is an orthogonal system for a, b, c, d ∈ F q such that ad − bc ≠ 0. Proof. For any pair (c i , c j ) ∈ F 2 q , the system of equations f (ah 1 (x) + bh 2 (y)) = c i , g(ch 1 (x) + dh 2 (y)) = c j has a unique solution, again obtained by inverting the matrix A with rows (a, b) and (c, d). The following are well-known results, see [4]. Theorem 35. Let N(n) be the size of the largest collection of MOLS of order n. Then we have • If q is a prime power, then N(q) = q − 1. As a trivial consequence of Proposition 32 and Proposition 33 we have two different complete sets of MOLS: Theorem 36. With the above notations and definitions: • If f (x, y) is a local permutation polynomial and g(x, y) is any LPP companion of f (x, y), then the set {f (x, y) + ag(x, y), a ∈ F * q } is a complete set of MOLS. • If f (x), h(y) are permutation polynomials, then the set {f (x) + ah(y), a ∈ F * q } is a complete set of MOLS. First we see that g is an LPP. We start by proving that for any c k , c m ∈ F q , there exists a y ∈ F q such that g(c k , y) = c m . As in the definitions before, let Then y = α a+u β b+v (c k ) = c (a+2u) (mod l)+(b+2v) (mod t)l verifies the condition, i.e. (c k , y) ∈ B m , by definition. Now, we want to prove that g is also a permutation polynomial in the first variable, in other words that given c k , c m ∈ F q as before, there exists x such that g(x, c k ) = c m . In particular, we need to find i, j such that c k = α a+i β b+j (c i+jl ). Indeed, in this case x = c i+jl is the solution needed, since by definition (x, c k ) ∈ B m . But this is only possible if a + 2i ≡ u (mod l) and b + 2j ≡ v (mod t), or i = (u − a)/2 (mod l), j = (v − b)/2 (mod t). Finally we need to see that (f, g) is an orthogonal system or, in other words, that for any c m , c k ∈ F q as before the system f (x, y) = c m , g(x, y) = c k has exactly one solution. Now, we take the set A m = {(c i+jl , α a β b (c i+jl )), 0 ≤ i ≤ l − 1, 0 ≤ j ≤ t − 1}, and we need to check whether, for some 0 But then i + a (mod l) = u + 2i (mod l) and b + j (mod t) = v + 2j (mod t), or i = a − u (mod l), j = b − v (mod t) is the unique solution, so indeed (f, g) are companions (observe that if k = l then 0 is simply q). Conclusions and open problems Contrary to the many papers and results on permutation polynomials in one variable, there are few for local permutation polynomials in several variables. We have presented some new ideas, concepts and results in the study of this kind of polynomial. In particular, in Theorem 6 we have elegantly shortened the proof of [2] and generalised it, obtaining in Theorem 1 the bound n(q − 2) for the degree of a local permutation polynomial f ∈ F q [x] and, in Theorem 4, a sharp bound n(p − 2) for polynomials defined over a prime finite field F p if gcd(n, p − 1) = 1. It would be interesting to investigate for which prime finite fields F p the condition gcd(n, p − 1) = 1 could be avoided, or more generally, for arbitrary finite fields F q . We think that better results are to be expected. We have translated the study of local permutation polynomials to the study of permutation polynomial sets (see Lemma 7 and Proposition 12). We believe this relationship opens a wide line of research and deserves to be investigated in much greater depth.
Clearly, a significant family of local permutation polynomials are the so-called local permutation group polynomials, see Definition 14. We have described here a small subfamily, the so-called e-Klenian polynomials. Giving other rigorous subclasses of such permutation group polynomials is a challenging open problem as well. Among other things, this would provide lower bounds on the number of local permutation polynomials and, hence, of Latin squares. Recall that the precise number of Latin squares is an open problem of considerable interest in the mathematical community working in the area.
2022-05-03T06:47:28.824Z
2022-04-30T00:00:00.000
{ "year": 2022, "sha1": "c9e689f4ed0cc21a894341898303c0ca9ca323ee", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c9e689f4ed0cc21a894341898303c0ca9ca323ee", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
18187491
pes2o/s2orc
v3-fos-license
The Relationship between Urbanization, Economic Growth and Energy Consumption in China: an Econometric Perspective Analysis As the largest developing country in the world, with rapid economic growth, China has witnessed fast-paced urbanization development over the past three decades. In fact, urbanization has been shown to promote economic growth and improve the livelihood of people, but it can also increase energy consumption and further generate energy crisis. Therefore, a better understanding of the relationship between urbanization, economic growth and energy consumption is important for China's future sustainable development. This paper empirically investigates the long-term equilibrium relationships, temporal dynamic relationships and causal relationships between urbanization, economic growth and energy consumption in China. Econometric models are utilized taking the period 1980–2012 into consideration. Cointegration tests indicate that the variables are found to be of I(1) and cointegrated. Further, vector error-correction model (VECM) indicates that when the short-term fluctuations deviate from the long-term equilibrium, the current changes of energy consumption could eliminate 9.74% non-equilibrium error of the last period, putting back the situation to the equilibrium state through a reverse adjustment. Impulse response analysis intuitively portrays the destabilized changes of the variables in response to some external shocks. However, the impact of energy consumption shock on urbanization and the impact of urbanization on economic growth seem to be rather marginal. Moreover, Granger causality results reveal that there is a bi-directional Granger causal relationship between energy consumption and economic growth, and unidirectional causality running from urbanization to energy consumption and economic growth to urbanization. The findings have important implications for Chinese policymakers that on the path towards a sustainable society, the effects of urbanization and economic growth on energy consumption must be taken into consideration. 
Introduction Since the implementation of its "Reform and Opening-Up" policy in the late 1970s, China has witnessed, and is still witnessing, fast-paced urban development [1,2].Over the past three decades, China's urbanization has risen from 17.92% in 1978 to 52.57% in 2012, an average annual growth rate of 1.02% [3].On the one hand, rapid urbanization has been shown to promote economic development and improve people's living standards [4]; on the other hand, it can also contribute to the increase of energy consumption and consequently generate energy crises [5][6][7][8].As a scarce natural resource, fossil energy has begun to set more limits to urbanization process and economic growth, especially in the context of fossil energy crisis [2].Furthermore, the tremendous increase of energy consumption may accelerate global warming and climate change, which are considered two of the major issues facing our planet [9][10][11][12].As the largest developing country in the world, with rapid economic growth, China has witnessed fast-paced urbanization development over the past three decades.This rapid growth of the Chinese urbanization and economy has, however, been achieved by huge consumption in energy resources.According to the scientific report, China is now the largest energy consumer.Therefore, the Chinese government's 12th Five-Year Plan (2011-2015) calls for a 16% reduction in energy intensity (energy consumption per unit of GDP) [13].Under the background of a new round of urbanization and economic development, the issue of energy consumption will become increasingly prominent, and could probably become the bottleneck of urbanization and economic development.Thus, considering the challenges of curbing fossil energy use while maintaining development, it is necessary to investigate the relationship between urbanization, economic growth and energy consumption for developing energy conservation and emission reduction policy [14].In addition, in order to further determine the direction of the causal relationship between urbanization, economic growth and energy consumption that occurs in China's development process, recent research that contain time series data are necessary. In recent years, a body of existing literature has estimated the relationship between urbanization, economic growth and energy consumption with various methods.The empirical results, however, are mixed.Many studies discovered that there was a correlated relationship between urbanization, economic growth and energy consumption.For example, studies taken by Jones [15,16] in 59 developing countries found that urbanization was an important factor affecting energy consumption.Similarly, Dahl et al. [17] found that urbanization and industrialization had positive effects on energy consumption. Using a fixed effect analysis, Parikh et al. [18] also found that there was a positive relation between urbanization and energy consumption.The results were supported by studies undertaken in relation to the United States by Parshall et al. [19]; and in OECD countries by Salim and Shafiei [20].However, taking Australia, Brazil, Denmark and Japan as study areas, Lenzen et al. 
[21] found that the impact of urbanization on energy consumption varied across countries, even in the same period.In addition, Liddle [22] found that urbanization was important to and correlated with economic growth.However, the impact of urbanization on economic growth varied across regions (countries) based on their level of income and development.Liddle and Messinis [23] further found that urbanization and economic growth either co-evolved in low-income and high-income countries, or else the two processes were decoupled for middle-income and Latin American countries.Ghosh and Kanjilal [24] found that there was a unidirectional causality running from energy consumption to economic activity and economic activity to urbanization in India.Liddle and Lung [25] also found that there was a long-run Granger causality running from electricity consumption to urbanization using various panels.Taking the new EU member as an example, Kasman and Duman [26] found that there was a short-run unidirectional panel causality running from GDP to energy consumption and urbanization to GDP.However, Poumanyvong and Kaneko [27] found that urbanization decreases energy use in the low-income group, while it increases energy use in the middle-and high-income groups.In addition, taking Tunisia as an example, Shahbaz et al. [28] demonstrated that there was a long-term causal relationship between urbanization and energy consumption.Similar results were obtained in seven regions by Al-mulali et al. [6], in MENA countries by Al-mulali et al. [29], in United Arab Emirates by Shahbaz et al. [30].From the above analysis, we learned that most studies focused on the analysis of the relationship between two variables.Little attention has, however, been paid to the estimation of the relationship between three or more variables.Moreover, studies are limited in regarding urbanization as a shift factor when estimating the interactive relationships between variables.This deficiency in contemporary research motivates the present study, which aims to explore the relationship between urbanization, economic growth and energy consumption. As with the models used in previous studies, cointegration and Granger causality tests have been widely used in exploring the relationship between urbanization, economic growth and energy consumption [31][32][33][34][35]. Little attention has, however, been paid to the utilization of VECM and impulse response analysis.For example, whilst Liu [5,31] found that there was a unidirectional causality running from urbanization to the total energy consumption, they ignored the changes of the variables shocked by external environment (impulse analysis between variables).Similarly, from the perspective of asymmetric adjustment, Liu and Xie [36] provided evidence that there was a non-linear causal relationship between the energy intensity and urbanization.However, due to the lack of impulse analysis, they did not portray the destabilized changes of the variables in response to some external shocks.Similar deficiency also existed in studies taken by Jones [15,16], Liddle [25], Shahbaz and Lean [28], and Du et al. 
[33].Studies like those listed above essentially measured a limited number of aspects in order to reflect the complex relationship between urbanization, economic growth and energy consumption, neglecting the comprehensive and systematic analysis (VECM and impulse analysis).Although these previous studies have certainly enriched our understanding of the relationship between urbanization, economic growth and energy consumption, they have, as a result, failed to provide adequate and explicit evidence in relation to how urbanization and economic growth in fact affects energy consumption. In order to deal with this deficiency, this paper first pre-analyzed the locally important time series data from 1980 to 2012 in China, it then subsequently attempts to re-investigate the long-term equilibrium relationships, temporal dynamic relationships and causal relationships between urbanization, economic growth and energy consumption using econometric analysis.First of all, three types of unit root tests are used to examine the stationarity of urbanization, economic growth and energy consumption.If the variables are stationary, cointegration test and vector error-correction model (VECM) are used to examine the long-term equilibrium relationship between urbanization, energy consumption and energy consumption.Based on vector autoregressive (VAR) model, impulse response analysis is utilized to depict the dynamic changes of the variables shocked by external environment.Finally, the causal relationship between urbanization, economic growth and energy consumption will be investigated by Granger causality test. The rest of this study is organized under three main sections as follows.Section 2 focuses on methods and data, presenting the data pre-processing, the estimation procedure of the econometric models and the data used within the study.Results and discussion are set out in Section 3, and the conclusions and policy implications of the study are summarized in Section 4. 
Data Source and Pre-Analysis Annual data for energy consumption were obtained from the China Energy Statistical Yearbooks.Annual data for urban population, the total population and GDP were taken from the online version of the China Statistical Yearbooks.Urbanization level represents the share of urban population to total population.Figure 1 plots the evolution paths of urbanization, economic growth and energy consumption covering the years 1980-2012.From Figure 1, we find that, China's urbanization level was 19.39% in 1980; however, it soared to 52.57% in 2012, with an average increase of 1.02%.As indicated in Figure 1, urbanization level remained stable increase before 1996 (the average annual growth rate is 0.64%), but increased dramatically and continuously after 1996 (the average annual growth rate is 1.38).Over the past three decades, China has generated a spectacular economic development with an annual growth rate at 9.9%.China's gross domestic product (GDP) increased from 454.6 billion Yuan in 1980 to 51,894.2 billion Yuan in 2012, with the result that China is now one of the largest economies in the world.However, rapid urbanization and economic development increased energy consumption in the correspondingly period.Specifically, China's energy consumption hiked from 602.75 million tons in 1980 to 3617.32 million tons in 2012.Before 2000, China's energy consumption kept a steady growth; it then increased dramatically entering the new millennium.Figure 2a-c plot the correlative relationships between urbanization (the independent variable) and energy consumption (the dependent variable), urbanization (the independent variable) and economic growth (the dependent variable), and energy consumption (the independent variable) and economic growth (the dependent variable) respectively.From Figure 2, we find the three variables have strongly correlated links (high R 2 ). Figure 3 displays a scatter plot and distribution overlay of urbanization, economic growth and energy consumption data in the form of box chart with the bottom and top of the box representing the 25th and 75th centiles.From Figure 3, we find that urbanization rate is highly concentrated at 30%, and mainly dispersed from 25% to 40%.Energy consumption is mainly distributed between 980 million tons and 2000 million tons, and is concentrated at 1400 million tons (Figure 3).GDP is distributed from 1027.5 to 18,493.7 billion Yuan, with the most concentrated GDP at 8967.7 billion Yuan. Conceptual Framework From the pre-analysis, we found that China's urbanization, economic growth and energy consumption share the same convergence trend, indicating that with rapid urbanization and economic growth, energy consumption increased dramatically.In the context of global change, global warming and energy crisis caused by excessive energy consumption now represent a serious threat to human health and the environment.Therefore, it is of great significance to re-investigate the relationship between urbanization, economic growth and energy consumption, and formulate the sustainable development model for promoting the new-type and healthy urbanization theoretically and empirically.To achieve this goal, an estimation procedure will be designed to explore the relationship between urbanization, economic growth and energy consumption (Figure 4). 
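As a quick sanity check on the descriptive figures quoted above, the 1980 and 2012 endpoint values reported in the text (19.39% and 52.57% urbanization; 602.75 and 3,617.32 million tons of energy consumption) can be turned into average annual changes with a few lines of arithmetic. The snippet below is purely illustrative and uses only the numbers cited in this section.

```python
# Average annual changes implied by the 1980 and 2012 endpoint values quoted above.
years = 2012 - 1980                          # 32 years

urban_1980, urban_2012 = 19.39, 52.57        # urbanization level, percent of total population
energy_1980, energy_2012 = 602.75, 3617.32   # energy consumption, million tons (as reported)

# Urbanization: average rise in percentage points per year
# (close to the roughly 1% average annual increase cited in the text).
print((urban_2012 - urban_1980) / years)                 # ~1.04 percentage points per year

# Energy consumption: compound annual growth rate over the same period.
print((energy_2012 / energy_1980) ** (1 / years) - 1)    # ~0.058, i.e. about 5.8% per year
```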
The unit root tests, namely, ADF, DF-GLS and the PP test will be utilized in this study to examine whether variables are stationary at levels or at the first difference.If the variables are stationary at the first difference, cointegration test and VECM model will be used to the long-term equilibrium relationships.Impulse response analysis based VAR model will be further to portray the dynamic changes of the variables.If the variables are cointegrated, the Granger causality test will be utilized to the casual relationship between the variables. Econometric Methodology Since the main goal of this study is to explore the relationship between urbanization, economic growth and energy consumption, an econometric model will be designed.Before conducting series tests, the natural logarithm of the variables should be used to eliminate the effects of heteroscedasticity in the time series data [2,37].The econometric model is specified as follows: where URBAN represents urbanization level, EC denotes energy consumption, GDP represents gross domestic product, t represents time, α is the slope coefficient, and ε is the residual errors. According to Liddle's [22] study, urbanization, which is constrained to be between 0 and 1, cannot technically be integrated of I (1).Thus, we transform urbanization (URBAN, the share of urban population) to TURBAN according to Equation (2).After applying this logistic transformation, the variable is unbounded above and below.We will get the final equation form after inserting Equation (2) into Equation (1).The specific formula of the transformation is as follows: Stationary test is an important issue in time series analysis.Stationary test is necessary before conducting regression due to its ability to avoid spurious regression [38].In general, graphic observation methods and statistical tests are widely used to estimate the stationarity of the variables.However, the latter one always shows higher power [39].Therefore, we will introduce three types of unit root tests, namely, ADF [40], DF-GLS [41] and the PP [42] test to examine the stationarity of urbanization and energy consumption. Cointegration Test If the variables are stationary at the first difference, cointegration analysis, the method proposed by Engle and Granger [43] will be used to examine the long-term relationship between the variables.The essence of cointegration is that the linear combination of variables is stationary [39].Cointegration tests also require that all variables be integrated of the same order [35].If series xt and series yt have a long-term equilibrium relationship, we can use the following formula to conduct the cointegration test, where μt is residual term, t denotes time.Dynamic ordinary least square (DOLS) significance test will be used to calculate the non-equilibrium error (et), If et is a stable series, we can conclude that series yt and xt are cointegrated; otherwise, there does not exist cointegration relationship between series yt and xt. Vector Error Correction Model Although differenced processing can be used to make the series stationary after the ith difference, they always neglect the important information hidden in the original variables [38].Therefore, in order to deal with this deficiency, vector error correction model (VECM) is established to eliminate the errors. 
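The testing sequence just described (logistic transformation of the bounded urbanization share, log transformation of the other series, unit root tests, then an Engle–Granger style cointegration check) can be sketched with standard Python tooling. The snippet below is schematic only: the file name and column names are placeholders, and statsmodels' adfuller and coint functions stand in for the ADF and Engle–Granger steps (the DF-GLS and PP tests used in the paper are not shown).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

# Placeholder: annual series for 1980-2012 with columns URBAN (share in [0, 1]), GDP, EC.
df = pd.read_csv("china_1980_2012.csv", index_col="year")

# Logistic transform of the bounded urbanization share, natural logs of GDP and EC.
df["TURBAN"] = np.log(df["URBAN"] / (1.0 - df["URBAN"]))
df["lnGDP"] = np.log(df["GDP"])
df["lnEC"] = np.log(df["EC"])

# ADF unit root tests at levels and at first differences.
for col in ["TURBAN", "lnGDP", "lnEC"]:
    stat, pvalue, *_ = adfuller(df[col].dropna(), autolag="AIC")
    dstat, dpvalue, *_ = adfuller(df[col].diff().dropna(), autolag="AIC")
    print(col, "level p =", round(pvalue, 3), "| first difference p =", round(dpvalue, 3))

# Engle-Granger style cointegration test: lnEC against TURBAN and lnGDP.
t_stat, p_value, _ = coint(df["lnEC"], df[["TURBAN", "lnGDP"]])
print("Engle-Granger cointegration p-value:", round(p_value, 3))
```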
Suppose series y and x have an equilibrium relationship. However, many observed variables are always in the neighborhood of the equilibrium point, not exactly at the equilibrium point. Thus, the short-term relationships between variables are commonly estimated. So the distributed lag form yt = β0 + β1xt + β2xt−1 + δyt−1 + μt (Equation (6)) should be considered here to investigate the long-term relationship. From Equation (6), we can find that the change of yt depends not only on the change of xt, but also on the values of the last period, xt−1 and yt−1. Considering the non-stationarity, the DOLS test cannot be used to perform the regression. Thus, Equation (6) can be rewritten in error-correction form as Δyt = β1Δxt − λ(yt−1 − α0 − α1xt−1) + μt, where λ = 1 − δ, α0 = β0/(1 − δ), α1 = (β1 + β2)/(1 − δ). Impulse Response Analysis The vector autoregressive (VAR) model, an improved form of the univariate autoregressive (AR) model, extends the AR model to contain more than one variable by regarding exogenous variables as the lagged values of endogenous variables. VAR models are widely used in multivariate time series analysis [38]. The VAR(p) model is specified as yt = A1yt−1 + … + Apyt−p + Bxt + εt, where yt is the vector of endogenous variables, xt is the vector of exogenous variables, p is the lag order, A1, …, Ap and B are the coefficient matrices, and εt is the error vector. Actually, the coefficients obtained in classical models can only partly reflect the dynamic relationship, not the comprehensive relationship. However, the VAR model focuses on the whole process by which one variable influences another. Impulse response analysis can capture and portray this dynamic change. Impulse response analysis based on the VAR model is widely utilized to depict the dynamic relationship between variables. Granger Causality Test Granger causality tests have been widely utilized to examine the causal relationship between variables. If there is a long-term relationship between two or more variables, Granger causality tests can detect the direction of the causal relationship (unidirectional or bi-directional) [39]. The test regressions are specified as in [39]. The null hypothesis is βi = 0; under the null hypothesis, x does not Granger-cause y; if the null hypothesis is rejected, it can be said that x Granger-causes y. Similarly, we can examine whether βj = 0 to determine whether y Granger-causes x.
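A compact illustration of the VAR-based machinery just described (impulse responses plus pairwise Granger causality tests) is sketched below, again with statsmodels. It reuses the placeholder DataFrame from the previous sketch; the maximum lag of 4 and the variable ordering are placeholders rather than the specification actually estimated in the paper.

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

# Assume df holds TURBAN, lnGDP and lnEC; work with the (stationary) first differences.
data = df[["TURBAN", "lnGDP", "lnEC"]].diff().dropna()

# Fit a VAR and let an information criterion pick the lag order (capped at 4 here).
var_res = VAR(data).fit(maxlags=4, ic="aic")
print(var_res.summary())

# Impulse responses over a 10-period horizon (orthogonalized shocks when plotted).
irf = var_res.irf(10)
irf.plot(orth=True)
plt.show()

# Pairwise Granger causality: grangercausalitytests expects two columns ordered
# [effect, cause], so the first call asks whether TURBAN Granger-causes lnEC.
grangercausalitytests(data[["lnEC", "TURBAN"]], maxlag=4)
grangercausalitytests(data[["TURBAN", "lnEC"]], maxlag=4)
```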
Results of Cointegration Test Since all the variables are stationary at the first difference, we proceed to perform the cointegration test.First, the residual term et should be calculated using DOLS; then, it proceeds to examine the stationarity of et using ADF test.From Table 2, we find that et is stationary, indicating an cointegrated relationship between urbanization, economic growth and energy consumption.Thus, the estimated regression equation is: The DLOS test results show that urbanization and economic growth have a long-term relationship with energy consumption.Specifically, 1% increase in urbanization will increase energy consumption by 0.5427% and 1% increase in economic growth will increase energy consumption by 0.5041%.From the results, we can find that both urbanization and economic growth have positive effects on the increase of energy consumption.Along with the rapid urbanization process and economic growth, energy consumption increases accordingly.Therefore, it is not the most feasible method to seek rapid economic growth at the cost of sacrificing the environment in future.Economic growth should be derived from optimizing industrial structure and improving energy efficiency that consume less energy.In dealing with urbanization progress, the Chinese government still faces the dual challenge of reducing environmental pressure (especially reducing energy consumption) while continuing to foster economic development. Vector Error Correction Analysis If the variables are found to be cointegrated, the vector error correction model will be further used.The model was chosen because it has several advantages.First, it is capable of eliminating the spurious regression using the differenced method.Second, it is more efficient to capture the implicit information of the original variables.In order to enhance the accuracy of the model, error-correction term (ECM) will be regarded as the equilibrium error to make up for deficiencies of long-term static model using the short-term dynamic model.The specific regression model is as follows: From Equation ( 13), we can find that energy consumption has short-term fluctuations.Energy consumption will deviate from the equilibrium state when impacted by itself or external changes.However, error-correction term (ECM) can precisely explain the fluctuations and the adjustment.Specifically, ECMt−1 denotes that under the impacts of control variables, when short-term fluctuations deviate from the long-term equilibrium, changes of energy consumption in t period can eliminate the non-equilibrium error of the t − 1 period by 9.74% and make a reverse adjustment to bring the non-equilibrium state back to equilibrium state.Meanwhile, the changes of the last energy consumption will also lead to the changes of the current energy consumption, with a long-term elasticity coefficient of 0.467.From above analysis, we find that even though the relationship between urbanization, economic growth and energy consumption will deviate from the equilibrium state temporarily after being affected by uncertainties, an equilibrium relationship will manifest in the long run. 
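Two back-of-the-envelope implications of the reported coefficients may help with interpretation. The arithmetic below simply reuses the 0.5427 and 0.5041 elasticities and the 9.74% error-correction speed quoted above; the half-life calculation is the standard formula for a constant adjustment rate and is added here only for illustration.

```python
import math

elasticity_urban, elasticity_gdp = 0.5427, 0.5041
adjustment_per_year = 0.0974   # share of last period's disequilibrium removed each year

# Example: a 5% rise in GDP with urbanization unchanged implies roughly a
# 0.5041 * 5 = 2.52% rise in energy consumption along the long-run relationship.
print(elasticity_gdp * 5)

# Speed of adjustment: with 9.74% of the gap closed each year, the half-life of a
# deviation from the long-run equilibrium is ln(0.5) / ln(1 - 0.0974), about 6.8 years.
print(math.log(0.5) / math.log(1 - adjustment_per_year))
```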
Impulse Response Analysis Impulse response analysis is widely used to examine how the variables can be destabilized by shocks that arise with other variables.Specifically, impulse response analysis can depict the trajectory of the impact of one standard deviation shock from random disturbance term to the endogenous variable.To obtain additional insight into how the volatility of urbanization, economic growth and energy consumption extent to other variables, we preform impulse response analysis.Figure 5 presents the results from the impulse response analysis. As shown in Figure 5, a positive one SD shock to energy consumption leads to a decrease in China's urbanization in the first six lag lengths and a slight increase in the last four lag lengths.This indicates that reduction of energy consumption will limit the development of urbanization.However, the impact of energy consumption shock on urbanization is not significant.As expected, a positive one SD shock to urbanization leads an increase in China's energy consumption after the third lag length.In fact, energy consumption has a rapid increase after that lag length, indicating that urbanization shocks have a lagged positive impact on energy consumption.As indicated in Figure 6, a positive one SD shock to energy consumption leads to an immediate increase in China's GDP.This indicates that the impact of the energy consumption shock on economic growth seems to be rather significant.In other words, economic growth has a large dependence on energy consumption.We also find that the impact of economic growth on energy consumption seems to be large.This indicates that with rapid economic growth, energy consumption will increase accordingly.As depicted in Figure 6, economic growth is correlated with urbanization.With the decrease of GDP, urbanization has a decline trend in the same periods.Urbanization shows a lagged positive response to a positive one SD shock of economic growth.Urbanization has an immediate effect on economic growth.However, the impact of urbanization on economic growth seems to be rather marginal. Granger Causality Test Prior to the Granger causality test, the stationarity of the VAR model should be tested.From the Figure 6, we find that all roots are less than 1 and lie inside the unit circle.This indicates that VAR model is stationary.As shown in Table 2, there is a long-term equilibrium relationship between urbanization, economic growth and energy consumption.However, the causal relationships between these variables are unclear.Therefore, the Granger causality test based on VAR model is used.Since the Granger test results are sensitive to the lag length of the variables, an important preliminary step in work the test is to select the lag length of the variables.Four different lag lengths are selected considering the length of the time series.The results of the Granger causality tests are displayed in Table 3. 
From Table 3, we find that when the null hypothesis is "TURBAN does not cause EC", the minimum p of the tests is 0.1671 which is larger than 0.1; thus, we cannot reject the null hypothesis.This indicates that energy consumption does not Granger-cause urbanization.According to the same criterion, we find that there is a bi-directional causal relationship between energy consumption and economic growth, and unidirectional causality running from urbanization to energy consumption and economic growth to urbanization.Therefore, Granger causality tests suggest that a bi-directional Granger causal relationship exists between energy consumption and economic growth, while a one-way Granger causal relationship exists from urbanization to energy consumption and economic growth to urbanization. Conclusions and Policy Implications In response to global warming, energy saving strategies have been formulated from social and economic perspectives [44].Increasing attention has been given to the effects on energy consumption caused by urbanization and economic growth.However, quantifying the impacts systematically remains relatively unexplored.As the largest energy consumer, China is facing great pressure to reduce its energy consumption for the mitigation of global warming [45].According to scientific research, urbanization can promote economic growth and improve the living standards, but it can also increase energy consumption [5][6][7], and in turn, generate energy crises [7].Therefore, in order to realize the sustainable development of urbanization, economic growth and energy consumption in China, this paper reinvestigates the long-term equilibrium relationships, temporal dynamic relationships and causal relationships between urbanization, economic growth and energy consumption, based on the time series data set covering the period of 1980 to 2012 in China.Unit root tests, E-G cointegration test, vector error-correction model, impulse response analysis and the Granger causality tests base on VAR model are all utilized. 
The results of unit root tests indicate that the variables are non-stationary at levels.However, the variables are stationary at the first difference rejecting the null hypothesis.Cointegration test further demonstrates that urbanization and energy consumption are cointegrated.This indicates that there is a long-term equilibrium relationship between urbanization, economic growth and energy consumption: specifically, 1% increase in urbanization will increase energy consumption by 0.5427% and 1% increase in economic growth will increase energy consumption by 0.5041%.As such, urbanization and economic growth have a positive impact on the increase of energy consumption.Further, VECM indicates that when the short-term fluctuations deviate from the long-term equilibrium, the current changes of energy consumption could eliminate 9.74% non-equilibrium error of the last period, putting back the situation to the equilibrium state through a reverse adjustment.Meanwhile, the changes of the last energy consumption will also lead to the changes of the current energy consumption, with an elasticity coefficient of 0.467.Impulse response analysis intuitively portrays the destabilized changes of the variables in response to some external shocks.However, the impact of energy consumption shock on urbanization and the impact of urbanization on economic growth seem to be rather marginal.Moreover, Granger causality results reveal that there is a bi-directional Granger causal relationship between energy consumption and economic growth, and a unidirectional Granger causal relationship from urbanization to energy consumption and economic growth to urbanization. The above findings thus contribute to the literature and suggest meaningful theoretical and policy implications.Over the past decades, China's economy has increased with an average annual growth rate of 9% [46].This rapid growth of Chinese economy has however been achieved by huge energy consumption [42].At present, China is in a period of rapid urbanization, when both industry production and daily life increase the energy consumption.Given the higher growth rate of energy consumption, Chinese policy makers are now paying great attention to the link between urbanization, economic growth and energy consumption.Therefore, the following question is critical for Chinese government: how can China realize the future sustainable development and curb energy use while maintain urbanization development?Due to energy use mainly comes from the process of economic growth and urbanization development, sacrificing economic growth maybe is the most feasible method to address the energy use issue.However, since steady and fast economic growth is always an important goal of Chinese government, direct strategy to curb energy use may lead to many negative impacts, such as unemployment.Therefore, some alternative measures include optimizing industrial structures, energy restructuring, improving energy efficiency and developing low-carbon technology are necessary for Chinese decision makers at central or local levels to address energy security and sustainable economic growth and urbanization development. 
From a methodological perspective, this paper underscores the promising aspects of employing econometric models such as unit root models, E-G cointegration tests, vector error-correction model, impulse response analysis, and the Granger causality test in understanding the nexus between urbanization, economic growth and energy consumption.The results of the econometric models are capable of better understanding the causal relationship in China over the period studied.We believe that this analysis process is relevant not only to specific countries such as China and that in fact this analysis method constitutes a critical tool for building a more comprehensive understanding of the complex relationship between urbanization, economic growth and energy use both considering the curbing the energy consumption and maintaining urbanization development in any country or region. Figure 1 . Figure 1.The changing trend of urbanization, economic growth and energy consumption in 1980-2012. Figure 2 . Figure 2. The fitting curve of urbanization, economic growth and energy consumption in China from 1980 to 2012. Figure 3 . Figure 3. Box chart of urbanization, economic growth and energy consumption with scatter plot and distribution overlay. Figure 4 . Figure 4. Analysis framework of estimation procedure of urbanization, economic growth and energy consumption. Figure 5 . Figure 5.The impulse response curves of urbanization and energy consumption based on VAR model.Note: the solid lines represent impulse response values, the upper and lower dashed lines in each graph denote the 95% confidence interval. Figure 6 . Figure 6.Results of VAR stability condition check. Table 1 . Unit root test results. Table 2 . Unit root test of residual series et. Table 3 . Results of Granger causality tests.
2015-09-18T23:22:04.000Z
2015-05-07T00:00:00.000
{ "year": 2015, "sha1": "74f3f764e01b046f9a6c885ff797291eb089f608", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/7/5/5609/pdf?version=1430997657", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "74f3f764e01b046f9a6c885ff797291eb089f608", "s2fieldsofstudy": [ "Economics", "Environmental Science" ], "extfieldsofstudy": [ "Economics" ] }
110483499
pes2o/s2orc
v3-fos-license
Scattering Analysis of a Millimeter-Wave Scalar Network Analyzer This paper presents the results of a scattering analysis of a millimeter-wave scalar network analyzer system. The results clearly indicate the way in which the individual system components contribute to calibration and measurement error. Procedures which minimize the calibration error for waveguide measurement systems are described, and the residual measurement uncertainty is quantified in a way which establishes the tightest possible bound on the measurement error. I. INTRODUCTION OVER THE PAST several years, there has been considerable progress in the development of millimeter-wave components and systems. The development activity in the millimeter-wave bands has resulted in a demand for measurement systems. At microwave frequencies, both scalar and vector network analyzer systems have been available for some time. These network measurement systems are commercially available from several sources and have reached an advanced level of sophistication with regard to accuracy and automation. They are coaxial based and their performance is generally well understood. At millimeter-wave frequencies, the situation is far less satisfactory. Until recently, an individual with the need to make millimeter-wave network measurements faced the task of creating his own measurement system. Now, scalar millimeter-wave analyzer systems are available commercially from at least one source, so progress has been made with regard to hardware availability. However, millimeter-wave measurement systems are normally waveguide-based and it is difficult to determine the performance of these systems through reference to the existing literature on microwave systems. The best source of information on the performance of microwave scalar network analyzer systems appears to be the literature available from the various manufacturers (see [1], for example). Such literature, however, tends to be slanted toward the use of particular equipment and emphasizes the use of coaxial components. Although many of the measurement system performance principles are independent of whether the hardware is coax or waveguide, it was found that the performance of a millimeter-wave scalar network analyzer could not be satisfactorily explained using results as they appear in the existing literature. Manuscript received June 6, 1983; revised September 6, 1983. This work was supported in part by the Naval Postgraduate School Foundation Research Program. The author is with the Department of Electrical Engineering, Naval Postgraduate School, Monterey, CA 93943. The work described in this paper was motivated by the need to answer questions which arose during the development of an automated 60-90-GHz waveguide-based scalar network analyzer system. The questions related to system calibration and measurement uncertainties and their relationship to the characteristics of the individual components used to construct the system. Hence, the analyzer system was modeled as a multiport network and its response was determined through analysis using S-parameters. II. A. System Description A scalar millimeter-wave network analyzer consists of a signal source, directional couplers and detectors to sample incident and scattered waves, and a receiver to process the detector signals and display the results. If automated, the system will also have a computer which is interfaced with the signal source and receiver via a control bus. A typical system diagram is shown in Fig. 1. 
The objective is to use the measurement system to determine the insertion loss IL and return loss RL of a device under test (DUT). With the DUT in the forward direction (port A driven), the return loss at port A and the insertion loss from port A to port B are related to the scattering coefficients of the DUT by RLA = -10log10 lS~uT12 (la) which the magnitudes of the scattering coefficients of the DUT may be determined. A more detailed diagram of the measurement system couplers is shown in Fig. 2. The three couplers will be referred to as the R, A, and B couplers since they provide samples of the incident (reference) signal, the signal scattered from port A of the DUT, and the signal scattered from port B of the DUT, respectively. The square-law detectors at coupler ports 3,4, and 6 provide output signals directly proportional to the RF-signal power scattered to these three ports. The return loss is determined from the ratio VA/ V~, while insertion loss is found from the ratio v'~\ V~. In an ideal system, these ratios would provide the desired quantities IL and RL directly. In practice, however, the results are corrupted by component imperfections. This makes it necessa~first to calibrate the system and then to accept some uncertainty when a measurement is made. The analysis which follows will identify the errors introduced by system component imperfections. It further indicates how calibration uncertainty may be eliminated and how measurement uncertainty may be quantified. B. Return-Loss Measurement Ana&sis Return loss is given by (la) and (lc), which may be rewritten in the form RL~= -10loglOP;/P: where P; is the power scattered from port k of the DUT, and P; is the power incident on port k of the DUT. Samples of the incident and scattered waves are coupled to ports R and A, where they are applied to the square-law detectors which produce output voltages V~and VA, respectively. We are interested in the ratio of these voltages which may be expressed as (VA/VJR) = const (G~,l/G~,l) where GTq, = power delivered to port q power available from source p " As shown in Appendices A and B, the ratio of detector voltages may be expressed in terms of the scattering coeffi- cients of the reflectometer bridge as where a 2 is a constant, and I'iD is the input reflection coefficient of the DUT and where it has been assumed that lS22rm] <<1. Further, it should be recognized that IS,II = 1 and that [Sdl /SA2 I <<1 will be approximately equal to the directivity of the A coupler. However, the coupler directivity will always be an upper bound for ISA1/SQ21. Before making an insertion-loss measurement, the system must be calibrated so that the O-dB return-loss reference level is known. Equation (4) which means that the correct RL reference level may be precisely located. We may also calculated (vA/vR)':: -(vA\vR)'$ mm 'n = IS'*+ 8 (7b) 2(v JvR)% avg and this will be useful in evaluating the residual uncertainty when a measurement is made. &_ I is the equivalent of source mismatch, and it is determined from (7b) with uncertainty no greater than the A coupler directivity (see (6)). The previous results have been derived assuming that a perfect sliding short is used to calibrate the system. If the short is Iossy, then its reflection coefficient will have a magnitude less than unit y. The return-loss reference level in this case will be in error by an amount equal to the loss in decibels. For example, if the sliding short produces VSWR = 20, then Irl = 0.905 and the reference level will be 0.86 dB too low. 
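The worked example at the end of the previous paragraph is easy to reproduce numerically. In the sketch below, the reflection coefficient magnitude of the lossy sliding short is recovered from its VSWR via |Γ| = (S − 1)/(S + 1), and the corresponding shift of the 0-dB return-loss reference level follows from the return-loss definition RL = −10·log10|Γ|² = −20·log10|Γ|; the result matches the VSWR = 20 case quoted above to within rounding.

```python
import math

def gamma_from_vswr(vswr):
    # Reflection coefficient magnitude corresponding to a given VSWR.
    return (vswr - 1.0) / (vswr + 1.0)

def return_loss_db(gamma_mag):
    # Return loss in dB for a reflection of magnitude |Gamma| (power ratio |Gamma|^2).
    return -20.0 * math.log10(gamma_mag)

vswr = 20.0
gamma = gamma_from_vswr(vswr)
print(round(gamma, 3))                   # 0.905
print(round(return_loss_db(gamma), 2))   # about 0.87 dB: how far the reference level sits too low
```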
All subsequent measurements referenced to this level would be in error by the same amount. Since waveguide losses increase dramatically in the millimeter-wave bands, this source of error should not be neglected. Now suppose that a DUT is connected to port 2 of the A coupler. In this case, we have no control ,over the phase of the reflection from the input port of the DUT and we obtain @h'vRk% = &+&r +~s22r: This may be written as The constants Cl and C2 lie in the interval [-1, 1] and depend upon the phases of the directivity and equivalent source mismatch error signal components relative to the signal reflected from the input of the DUT. Clearly, directivity and equivalent source mismatch error cause an uncertainty in the measurement of the DUT input-reflection coefficient. This uncertain y will vary with frequency and is dependent upon II',*I as well. As shown by (7b), 1S22 I may be found with small uncertainty at each measurement frequency during calibration. lS41/S42 I is not generally known as a function of frequency but is bounded from above by the coupler directivity D, which is specified by the manufacturer. Thus, we may express the detector voltage ratios in the form where the worst case uncertainty AI'in is given by The calibration and measurement data acquisition and the computation of measurement uncertainty as described above may be accomplished easily with an automated measurement system. During calibration, it is necessary to move a sliding short through a distance of at least one half a guide wavelength N, so that the phase of the reflected signal varies through a full 360 degrees, An appropriate calibration algorithm would be one which searches for and stores the maximum and minimum values of (VA/ VR) at each desired frequency as the short is moved a distance X/2 at the lowest frequency in perhaps 10 steps. After acquiring the DUT reflection data, an undistorted graph of return loss versus frequency with error bars may be gener: (9) where SI is the isolator VSWR (maximum), and SC is the coupler VSWR (maximum). Thus, measurement uncertainty may be minimized by using an isolator and the A coupler with the lowest possible upper bound on VSWR. There are two remaining observations which are worthy of comment. The first relates to the reflection coefficients r~~and r~q of the R and A coupler detectors. Although these reflection coefficients enter into the determination of the gains G~,, and G~,,, the final result is independent of detector VSWR. At any fixed frequency, the effects of detector VSWR are the same during both calibration and measurement and thus disappear through cancellation of the factor a which appears in both (4) where I'~is the reflection coefficient of the load terminating the DUT. To evaluate the return loss (see (l)), lSl~uT I is required. Equation (10) shows that Iri~I = lS~uTl only if 11'~1= O. Therefore, the best possible load should be placed on port B of the DUT when measuring lri~I at port A, and vice versa. If the DUT is terminated in the B coupler so that return-loss and insertion-loss data may be simultaneously acquired and displayed, then the B coupler VSWR will cause additional uncertain y in ISl~uT 1. Therefore, to achieve the lowest uncertainty, the unexcited port of the DUT should be terminated in a waveguide matched load. Such a load has a VSWR, which is significantly lower than that of a directional coupler. 
Additionally, if a sliding load is used, the error due to load reflection may be averaged out in the same way that the equivalent source mismatch error is averaged out during the return-loss calibration procedure (see (5), (7)). C. Insertion-Loss Measurement Analysis Insertion loss is given by (lb) where Pq-is the power scattered from port q of the DUT, and P: is the power incident on port k of the DUT. All ports are terminated in the load impedance 20, except port k which is driven by a source with impedance 2.. For this measurement, the network is terminated in the B coupler and samples of the incident and scattered waves are coupled to ports R and B, respectively. The square-law detectors at these ports produce output voltages V~and V~. The ratio of these voltages is given by Using (15), (17), and (18), we find that the worst case uncertainty is +0.32 dB without isolators, while with isolators it is reduced to +0.21 dB. If the A coupler is removed from the system, then the equivalent source mismatch is reduced to Ir,'! <0.2. The worst case uncertainty in the location of the O-dB reference level for insertion-loss mea-surements is correspondingly reduced to +0.15 dB, assuming an isolator is used ahead of the B coupler detector. , The uncertainty may be bounded more tightly if during the insertion-loss calibration run the return loss of the B coupler is measured. This will establish the value of lr~l. During the return-loss calibration run, the value of ISzzI is found. Thus, lr(l < I$z I+ Clr~d I when the A coupler coupler is in the system (C is the power coupling factor). If the A coupler is removed from the system Ir;ls (sl-l)/(sl+l). For either situation, the calibration uncertainty% 3,,0(1+Iwo is reduced since Ir: I is known from direct measurement at each frequency of interest. Now suppose that a DUT is placed between the A and B couplers. We then obtain III. IQSULTS The analytical results presented in the previous sections have been verified experimentally using an automated measurement system covering the 60-90-GHz band. The major components of the measurement system are a solid-state A. Fixed Short The return loss of a fixed waveguide short is of interest because the correct value of the return loss is known to be precisely O dB. It may thus be used to check the performance of the measurement system. The center curve in Fig. 3 shows the measured return loss for a WR (12) waveguide short. Notice that the return loss oscillates about the correct value of O dB as the frequency is varied. This oscillation is caused by the interference between the signal reflected from the short and the error signal component due to equivalent source mismatch. This represents a worst case situation since the reflection coefficient of the short is lrl = 1. For a load of unknown return loss, it is this error which introduces uncertainty into the measurement. The upper and lower curves in Fig. 3 bound the measurement uncertainty. The correct value of return loss, O dB in this case, should always be between these two curves. It can be seen that this is generally the case, although there are several points where the upper bound dips a few tenths of a decibel below the O-dB level. This small error is consistent with our use of 10 positions of the sliding short for, calibration. The error results from the failure of the calibration algorithm to determine ISZ2I precisely. The error may be reduced by using more positions of the sliding short. Also evident in Fig. 
3 is the variation of the uncer- tainty with frequency. Here, the uncertainty is less near the edges of the band than it is at the center. Thus, the uncertainty near the edges of the band has been reduced considerably relative to the bound computed using the worst case equivalent source VSWR. B. Detector Mount A second example of a return-loss measurement is shown in Fig. 4 which presents the data obtained for a detector. The measured return loss is in the range 20-40 dB over the frequency band 60-70 GHz. At this level, the source mismatch is less important than the A coupler directivity error. Since the A coupler directivity was >40 dB (D < 0.01) in our system, there is considerable uncertainty if the measured return loss is in the vicinity of 40 dB. This can be seen clearly in Fig. 3. C. Through Section The insertion loss of a through section is of interest because the correct value of the insertion loss is known to be O dB. It may therefore be used to check measurement system performance in the same manner as with the short. The measured return loss of a through section is shown in Fig. 5 along with the bounds on uncertainty. The measured insertion loss is within~0.3 dB of the correct value (O dB) over the 60-90-GHz frequency range shown in the figure. The correct value of insertion loss also lies within the computed range of uncertainty delineated by the curves above and below the curve of measured insertion loss except at 61 GHz. At this frequency, a drop in the measured insertion loss has pulled the upper bound on the uncertainty below the O-dB level to -0.1 dB. This anomaly is believed due to a small change in the output power level of the source between calibration and measurement at that frequency. Overall, insertion loss uncertainty is seen to be considerably less than was the case for return-loss measurements. This is in agreement with the results predicted by the model. D. Calibrated Attenuator As a last example, Fig. 6 shows the measured insertion loss of a WR (12) calibrated variable attenuator over the 60-90-GHz band. This attenuator was supplied from the manufacturer with a calibration curve at 75 GHz, and the micrometer was set accordingly for 10 dB of attenuation. The measured insertion loss varied~2.5 dB over the frequency band, but was indeed measured to be 9.74 +0.3 dB at 75 GHz. CONCLUSIONS This paper has presented a scattering analysis of a waveguide-based millimeter-wave scalar network analyzer system. The results of this analysis clearly indicate the relationship between system component specifications and the performance of the entire measurement system. These results may be summarized as follows. 1) The use of a (perfect) waveguide sliding short permits the correct O-dB return-loss reference level to be found precisely. Losses in the short will cause an error equal to the decibel value of the losses in the short. 2) The use of a sliding short permits the equivalent source mismatch ISZ2I to be determined. 3 If 1ow-VSWR isolators are placed ahead of high-VSWR detectors, the system insertion-loss measurement uncertainty will be reduced. The uncertainty may be reduced further if the A coupler is removed from the system when insertion loss is measured. 11) lr~l and lr~l may be reduced by using E-H tuners at spot frequencies to achieve higher accuracy. Mechanical tuners cannot be used in an automatic system, however, since retuning is necessary at each frequency. 
12) The use of unnecessary components (such as waveguide switches) should be avoided since they will degrade system performance. 13) The use of a computer to control instruments and graph results is very desirable. Distortion due to source leveling and detector flatness can be removed and error limits can be computed and displayed. The methods that are proposed here for determining measurement uncertainty result in the tightest possible bounds on the error. Calibration and measurement data are used to achieve this. Simple use of component specifications alone would result in considerably looser bounds on error. The model discussed in this paper does not account for instrumentation errors. Errors of this type may occur due to the following: 1) signal source harmonics; 2) changes in signal source frequency or output power level between calibration and measurement; 3) non-square-law operation of detectors; 4) nonlinear amplification of detected signals. These are errors that will depend upon the specific hardware implementation of the measurement system, but they should not be overlooked, particularly since millimeter-wave hardware is not yet mature. Overall, the results presented here should bring the important features of measurement system response more clearly into view. The analysis should therefore be useful to those individuals concerned with scalar measurement of millimeter-wave network scattering coefficients. APPENDIX A RETURN-LOSS ANALYSIS With reference to Fig. 2, the R coupler, A coupler, and isolator will be considered as a 4-port network. The behavior of this network may be determined from the network scattering equations. It will be assumed that S14 = S12 = S34 = S32 = 0, since these coefficients produce terms which are small in comparison to those retained. Likewise, S43 = 0 and will be neglected.
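To give a feel for where uncertainty figures such as the ±0.32 dB and ±0.21 dB quoted above come from, the following is a minimal Python sketch of the common first-order mismatch-ripple estimate. It is not the paper's equations (15)-(18); the two-term interference model, the function name, and the example reflection coefficients are illustrative assumptions only.

import math

def ripple_bounds_db(gamma_source, gamma_dut):
    """Worst-case measurement ripple (in dB) caused by re-reflection between
    an equivalent source mismatch and the reflection presented by the DUT,
    using the common first-order interference model 1 +/- |Gs|*|Gdut|."""
    product = abs(gamma_source) * abs(gamma_dut)
    upper = 20.0 * math.log10(1.0 + product)  # signals add in phase
    lower = 20.0 * math.log10(1.0 - product)  # signals cancel
    return upper, lower

# Illustrative only: a short (|G| = 1) seen through an equivalent source
# mismatch of 0.2 ripples by roughly +1.6 dB / -1.9 dB about the true value.
print(ripple_bounds_db(0.2, 1.0))

With numbers like these it is easy to see why reducing the equivalent source mismatch, or placing low-VSWR isolators ahead of the detectors, tightens the measurement bounds so markedly.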
2019-02-07T21:37:23.387Z
1984-02-01T00:00:00.000
{ "year": 1984, "sha1": "2aea09aff2a20c37f340c6be85df5cb8069b9ced", "oa_license": "CC0", "oa_url": "https://zenodo.org/record/1281722/files/article.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2aea09aff2a20c37f340c6be85df5cb8069b9ced", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
12535838
pes2o/s2orc
v3-fos-license
A renewed Medication Adherence Alliance call to action: harnessing momentum to address medication nonadherence in the United States The problem Nonadherence to prescription medications is a common and costly problem with multiple contributing factors, spanning the dimensions of individual behavior change, psychology, medicine, and health policy, among others. Addressing the problem of medication nonadherence requires strategic input from key experts in a number of fields. Meeting of experts The Medication Adherence Alliance is a group of key experts, predominately from the US, in the field of medication nonadherence. Members include representatives from consumer advocacy groups, community health providers, nonprofit groups, the academic community, decision-making government officials, and industry. In 2015, the Medication Adherence Alliance convened to review the current landscape of medication adherence. The group then established three working groups that will develop recommendations for shifting toward solutions-oriented science. Commentary of expert opinion From the perspective of the Medication Adherence Alliance, the objective of this commentary is to describe changes in the US landscape of medication adherence, framing the evolving field in the context of a recent think tank meeting of experts in the field of medication adherence. Introduction According to the US Institute of Medicine, medications are the most common medical intervention, and their potential to both help and harm is vast. 1 Timely and continuous use of prescription medicines is key to effective disease management, particularly for chronic conditions. Yet, nonadherence to prescription medications remains a serious, common, and costly problem. [2][3][4] Over half of American adults are nonadherent, leading to annual avoidable health care costs ranging in the hundreds of billions of dollars. 5 While many interventions and programs have demonstrated improvements in specific settings, [6][7][8] there is no singular effective solution to improve medication nonadherence, as there is no one outstanding explanation for nonadherent behavior. The objective of this commentary is to describe changes in the landscape of medication adherence, framing the evolving field in the context of a recent think tank meeting of predominately US experts in the field of medication adherence. In 2011, a group of experts and stakeholder representatives met to consider the current state of medication nonadherence and provide practical recommendations for shifting toward solutions-oriented science. 9 The group coined themselves the "Medication Adherence Alliance" (or "Alliance"). The overarching goal of the Alliance is to assemble a body of medication adherence experts to comment and provide feedback on current and proposed legislation and policies potentially affecting medication adherence. As an outcome of this initial meeting, the Alliance developed a call to action that summarized three basic assumptions of the think tank participants. 9 In brief, first, the Alliance expected that proven strategies should be identified and integrated in a tailored, multicomponent manner. Second, it was presumed that medication adherence is a shared objective in which providers, payers, patients, and the health care system at large share roles and responsibilities.
These participants included multiple members of the health care team throughout the continuum of care, individuals who provide patient social support (eg, family), community liaisons, health care payers and providers, and policy makers. Third, it was believed that while quality indicators for adherence-related outcomes and emerging standards for health information technology would spawn national-level change, the Alliance might proactively improve adherence through strategic, participatory action at local levels in their own health care environments. 9 Nearly 4 years later, the think tank reconvened (2015) for a second 2-day session. The goal of the meeting was to consider if, and how, the US landscape of medication adherence had evolved since the initial call to action. The second think tank was also structured around three key concepts. First, how had the problem of medication adherence evolved in the previous 4 years? For example, how had financial incentives shaped medication adherence priorities? Second, were we closer to identifying successful intervention strategies? For example, how could we leverage information technology, existing data sets, and interoperability standards to improve and automate medication adherence measurement? And finally, could we develop specific recommendations to transform clinical care? What should our revised priorities be in order to advance scientific knowledge, develop evidence-based policies, and ultimately provide improved patient outcomes? Throughout this updated commentary, we address each of these three key concepts. We also discuss perceived future directions for the Alliance and, more broadly, the field of medication adherence. How has medication adherence evolved? Since the initial think tank meeting (2011), there has been heightened awareness of medication adherence as a problem on both the national 2-4 and international stage. 2,10,11 There is also increasing attention on raising awareness about the problem of poor adherence by educating and engaging patients, their families, and caregivers. 5 Despite much attention and the development of numerous interventions aimed at improving medication adherence, 6-8 adherence generally remains low and is a continuing public health concern. As a result, there are continued consequences with regard to morbidity/mortality burden and health care costs. There are, however, new possible solutions to improve medication adherence including 1) policy-based interventions (eg, incentive reform), 2) emerging technologies, and 3) patient-level interventions (Table 1). Although the effects may vary by disease burden, therapeutic area, or other factors, 12 it is widely believed that improving medication adherence will result in improved individual- and population-level health and reduced health care spending. 13,14 As a result, the Centers for Medicare and Medicaid Services developed a five-star rating system. Under this system, facilities are rated with between one (poorest quality) and five (best quality) stars, indicating their care quality. 15 The rating is based on a number of factors such as health inspections, staffing, and quality measures. Quality measures are grouped in four categories, measures of: 1) operational excellence; 2) Medicare Part D (eg, Medicare prescription drug coverage); 3) clinical quality measures; and 4) operational measures.
Medication adherence is a key component of the Medicare Part D quality measure, indicating that improving adherence is critical to ensure a Centers for Medicare and Medicaid Services five-star rating. 16 This policy shift to include a focus on medication adherence is timely. Patients insured by Medicare supplements take an average of approximately seven unique medications, resulting in substantial out-of-pocket cost burden which may increase nonadherence. 17 In fact, cost-saving strategies are widely used by patients insured by Medicare Part D. 17 The hope is that revisions to Medicare Part D will swing health care systems' attention toward continual medication adherence measurement, thus better coordinating with individual patients to address the unique factors contributing to their nonadherence. If this approach is successful, improving adherence may also reduce overall health care costs. As one example, among Medicare beneficiaries with chronic heart failure and diabetes, researchers determined that higher levels of medication adherence were associated with decreased costs in Medicare spending, generally savings beyond drug costs. 13,14 There is renewed interest in policy-based interventions. A switch in incentive reforms could make medication adherence improvement more imperative. One example of a policy-based intervention is incentive reform in the form of value-based insurance designs. "A value-based insurance design encourages the use of services when the clinical benefits exceed the cost and likewise discourage the use of services when the benefits do not justify the cost". 18 In the context of medication adherence, this often translates to reducing or eliminating patients' out-of-pocket costs for certain efficacious medications, with the goal of improving their adherence. The notion is that the medical cost savings, via improved clinical outcomes and decreased utilization of medical services, should be sufficiently high to outweigh the drug costs and that of reduced copayments. Blue Cross Blue Shield of North Carolina implemented a value-based insurance design in 2008, for example, which resulted in improved medication adherence and modestly decreased hospitalizations. 19,20 Despite these positive outcomes, the investment in program cost exceeded its health care savings. 20 These disappointing results notwithstanding, other similar programs have shown promise. [21][22][23] There is evidence that value-based medication adherence programs can be efficacious. As one such example, researchers from Harvard University conducted a randomized clinical trial among patients discharged after myocardial infarction. Patients were randomized at the level of their health care insurer to either usual prescription coverage or full prescription coverage. 24 The authors reported that adherence ranged from approximately 36% to 49% in the usual prescription coverage group and were 4-6 percentage points higher in the full-coverage group. 24 While this is a significant improvement, it may not be practical or sustainable to provide full prescription coverage for an entire population. While full-coverage programs and value-based insurance designs remain promising, more research is needed to understand how appropriately designed programs can generate cost savings while maintaining patient access. A widespread intervention approach for appropriate medication use is medication therapy management (MTM), which is reimbursable under Medicare Part D. 
MTM involves a comprehensive set of services including medication review, follow-up, and care coordination. While this is a resourceintensive intervention, a recent report developed by Agency for Health Research and Quality (AHRQ) suggests that the evidence base supporting MTM is low. 25 One aspect of MTM is medication synchronization. Medication synchronization is the process of aligning a patient's medication refills such that they are all due on the same day. The timing of the refills must be done thoughtfully; for example, for many patients, it will be critical that the refill date is scheduled to coincide with their paycheck or social security check. Appropriate medication synchronization has been demonstrated to improve medication adherence among people with chronic conditions. 26,27 In addition to the evolution of the problem of medication nonadherence, potential solutions for monitoring and measuring adherence are also developing. From both a patient and health care system perspective, there is increased adoption of Internet (eHealth) and mobile health (mHealth) technologies. The potential for monitoring and measuring adherence will enable innovative technologies and will likely further support evidence-based care, patient self-monitoring and 1192 Zullig et al medication-taking reminders, such as pill-monitoring technologies (including digital pills), mobile health (mHealth) technologies (eg, text messaging, interactive voice response, smartphone applications), and online resources and social media, among others. 28 From the US health care system side, capabilities in electronic health record software will enable automated detection of primary medication nonadherence or discontinuation using pharmacy fill and claims data. These data can be automatically transmitted back to the prescriber as additional information for managing patients' appropriate use of medication, serving to facilitate further coordination among the health care continuum. A broad array of more affordable front-end technology (eg, that the user directly interacts with) is emerging to directly collect and record patient adherence data. This front-end technology pulls into focus the importance of accurate and predictive data analysis, effective patient communication, and provider counseling opportunities to address the more complicated issues of implementing interventions in "real-world" settings. These technological solutions are important tools, but they should be used within the framework of setting a patient-centered agenda, broadening the scope of adherence interventions to include aspects of medication management that may be beyond the traditional scope of adherence research. 29 Technology may also shift the way we define adherence. In the policy arena, a dichotomous measure of adherence is often used, but a continuous measure of adherence with consideration for patterns of use may be more meaningful for clinical efforts. As the US landscape of medication adherence evolves, it is critical that we identify existing, successful intervention strategies to improve adherence. Are we closer to identifying successful intervention strategies? Recent systematic reviews have evaluated the impact of specific intervention approaches on improving medication adherence. [30][31][32][33] Conn et al identified that the largest effect sizes were among studies that used electronic medicationmonitoring systems. 
Other approaches that improved adherence included prompting patients to take their medications (ie, reminders) and linking medication taking with existing habits or behaviors. 30 There is increasing evidence for the effectiveness of text messaging as a modality to improve medication adherence. When designing interventions delivered via text messages, investigators should consider the content (standardized vs tailored), interaction (one-way vs two-way), timing (along with medication dose or meal), and dose (daily, weekly, or monthly). 34 However, for patients with a history of medication adherence problems, face-to-face interventions may still work best. 30 While there have been numerous successful interventions to improve medication adherence, 6-8 relatively few are implemented and reported beyond the academic research setting; instead, they tend to be isolated to a specific clinical condition or setting and are often only tested in the context of a clinical trial, rather than in a real-world efficacy scenario. Industry stakeholders often develop potentially more practical interventions, but they are often not reported with sufficient detail to enable reproducibility in other health care settings. As an example, CVS Health recently announced a medication synchronization program called ScriptSync™ that is currently available in all CVS retail pharmacy locations and will soon be available via mailed prescription services. 35 CVS has a research branch (the CVS Health Research Institute) that often publishes and disseminates information about what works well in their system; however, many industry stakeholders do not take this additional step. Interventions must also consider cost efficiency and longterm sustainability, value of the intervention to each user, practical factors necessary for implementation in real-world clinical context, and challenges required for dissemination and use on disparate platforms. 8,36 As an example, with the advent of specialty medications (ie, biologic agents for the treatment of cancer), some requiring high out-of-pocket cost, support for appropriate medication use often becomes embedded in a support infrastructure provided by the drug manufacturer. A sustainable intervention might consider who will incur the cost of supporting a program and ensure buy-in from key stakeholders. Furthermore, adherence interventions tend to be evaluated as part of more complex multidimensional programs, and more information is needed regarding the value of each individual component in order to facilitate optimal care management design. Thus, future intervention designs must emphasize practicality and scalability in designing and testing plans for implementation and dissemination. Can we develop specific recommendations to transform clinical care? The primary outcome of the second Alliance think tank reboot was the development of three work groups. Each work group is focused on an area that the Alliance asserts is critical to improving medication adherence and is currently understudied: 1) a "living" laboratory, 2) medication adherence 1193 Renewed Medication Adherence Alliance call to action measurement workgroup, and 3) electronic health record workgroup. These workgroups are composed of multidisciplinary teams united for a finite period of time and a specific goal. 
The living laboratory provides a forum for linking research and industry that has developed innovative products and interventions targeting improved adherence, with health care systems partners who will provide a real clinic environment in which to pilot test their product or intervention. For example, as an initial living laboratory exercise, the Alliance plans to implement Meducation ® by Polyglot Systems, Inc. 37 in select Premier Health System sites. Meducation provides customized visualization of medication schedules available in numerous languages that are specially designed to accommodate patients with low health literacy. 37 The measurement workgroup is charged with synthesizing and summarizing available methods of measuring adherence (eg, pill cap-monitoring technology, pharmacy refill data, self-report) and providing guidance on when and how each potential source of measurement can be most appropriately used. For example, a clinical trial might be best served with objective pill cap-monitoring data, whereas self-report might be perfectly suitable for providing routine clinical care. The electronic health record workgroup is developing recommendations articulating ways in which electronic health records and supportive systems (eg, pharmacy refill records) can be optimized to support the measurement and improvement of medication adherence. While the Alliance affirms that addressing these three areas (providing a living laboratory, evaluating adherence measurement tools, and integrating electronic records) have the potential to advance clinical care, the Alliance members also recognize the need for better integration with international partners for a broad perspective. One example of an international partner is the European Society for Patient Adherence, COMpliance and Persistence (ESPACOMP). 11 ESPACOMP is a predominately European "nonprofit association established to promote the science concerned with the quantitative assessment of what patients do with medicines they have been prescribed". 11 The group organizes annual symposiums reaching clinical, research, and industry partners focused on medication adherence from multiple countries and continents. The 2015 ESPACOMP annual meeting included abstract submissions from more than 20 countries. 38 Collaborating with colleagues in other geographic and clinical contexts should be a central priority for designing creative solutions to improve medication adherence, learning from each other, and improving our success as a global community. Conclusion While medication nonadherence remains a problem, it is not an insurmountable one. With increased attention, advancements in technology, and new tools for measurement, there are new opportunities for engaging patients and stakeholders to improve medication adherence like never before. The importance of a multidisciplinary approach to treating and training medication adherence cannot be overstated and is being used to promote adherence. 39 This multidisciplinary approach aligns with the increasing use of primary care medical homes. Any one of these areas represents an area ripe for rigorous research and in-depth review. Also of central importance, but not a focus of the Alliance, is providing educational opportunities and tools for health care professionals. The Alliance is primarily focused on adherence efforts within the US; however, there are many successful adherence interventions occurring in other contexts. 
We assert that the products resulting from the Alliance will be useful to a myriad of stakeholders to foster engagement, collaboration, and implementation of effective programs. A critical premise lies in the belief that improved medication adherence provides better patient outcomes and value to the health care system. To be successful, we must harness this momentum wisely, engaging stakeholders, collaborating with international partners, and designing solutions with implementation and sustainability at the forefront.
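As a concrete illustration of the refill-based, continuous adherence measures discussed in the measurement sections above (in contrast to a simple dichotomous adherent/nonadherent label), the sketch below computes a proportion-of-days-covered style metric from pharmacy fill data. It is a hypothetical toy example, not the Alliance's or CMS's specification: the function name, the example fill dates, and the cutoff shown in the comment are illustrative assumptions only.

from datetime import date

def proportion_of_days_covered(fills, period_start, period_end):
    """Continuous adherence measure from pharmacy fill records.
    `fills` is a list of (fill_date, days_supply); the result is the fraction
    of days in the observation window on which medication was on hand."""
    window_days = (period_end - period_start).days + 1
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date.toordinal() + offset
            if period_start.toordinal() <= day <= period_end.toordinal():
                covered.add(day)
    return len(covered) / window_days

# Hypothetical refill history: three 30-day fills with a late second refill.
fills = [(date(2016, 1, 1), 30), (date(2016, 2, 15), 30), (date(2016, 3, 20), 30)]
pdc = proportion_of_days_covered(fills, date(2016, 1, 1), date(2016, 3, 31))
print(round(pdc, 2))  # ~0.79, below the commonly used 0.80 "adherent" cutoff

Overlapping supplies are counted only once here; real implementations must also decide how to handle early refills, hospital stays, and multiple concurrent medications, which is exactly the kind of guidance the measurement workgroup is intended to provide.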
2018-04-03T01:37:42.923Z
2016-07-07T00:00:00.000
{ "year": 2016, "sha1": "0fddc36c761bd1f29a7a6100c310d4587af1a604", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=31258", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee0b6025a4193720b2b0c0022bb83370fdb2656d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
404371
pes2o/s2orc
v3-fos-license
Quantized dual graded graphs We study quantized dual graded graphs, which are graphs equipped with linear operators satisfying the relation DU - qUD = rI. We construct examples based upon: the Fibonacci poset, permutations, standard Young tableau, and plane binary trees. Introduction Fomin's dual graded graphs [Fom] and Stanley's differential posets [Sta] are constructions developed to understand and generalize the enumerative consequences of the Robinson-Schensted algorithm. The key relation in these constructions is DU − UD = rI, where U, D are up-down operators acting on the graphs or posets 1 . In this article we develop some of the basic theory of quantized dual graded graphs, which are equipped with up-down operators U, D satisfying the q-Weyl relation DU − qUD = rI. One of the motivations for the current work was the signed differential posets developed in [Lam], which correspond to the relation DU + UD = rI. Thus quantized dual graded graphs specialize to usual dual graded graphs at q = 1, and to signed differential posets (or their dual graded graph equivalent) at q = −1. The central enumerative identity in the subject developed by Fomin and Stanley is ∑_{λ⊢n} (f^λ)^2 = n!, where the sum is over partitions of n, and f^λ is the number of standard Young tableaux of shape λ. The corresponding analogue (Theorem 4) for a quantized dual graded graph (Γ, Γ′) reads ∑_{v: h(v)=n} f^v_Γ(q) f^v_{Γ′}(q) = r^n [n]_q! (1), where the sum is over vertices of height n, the polynomials f^v_Γ(q) and f^v_{Γ′}(q) are weighted enumerations of paths in Γ and Γ′, and [n]_q! is the q-analogue of n!. We explicitly construct examples of quantized dual graded graphs and interpret (1). These examples are based on various combinatorial objects: the Fibonacci poset, permutations, standard Young tableaux, and plane binary trees. Unfortunately, we have been unable to quantize Young's lattice. More examples will be given in joint work [BLL] with Bergeron and Li, where in some cases a representation theoretic explanation for the identities DU − qUD = I and (1) will be given. Date: August 3, 2008. T.L. was partially supported by NSF grants DMS-0600677 and DMS-0652641. 1 Fomin [Fom] also considered more general relations of the form DU = f(UD). Quantized dual graded graphs Let Γ and Γ′ be graded graphs on a common vertex set V; each edge (v, w) carries a weight m(v, w) (respectively m′(v, w)), which is a non-zero polynomial in q with nonnegative coefficients. We shall assume that Γ is locally finite, so that for each v, there are finitely many edges entering and leaving. Because each edge has a weight, we shall assume that there are no multiple edges. Let C(q)[V] be the C(q)-vector space of formal linear combinations of the vertex set V. A linear operator on C(q)[V] is continuous if it is compatible with arbitrary linear combinations. Define continuous linear operators U_Γ, D_{Γ′}: C(q)[V] → C(q)[V] by setting U_Γ v = ∑_{(v,w)} m(v, w) w and D_{Γ′} w = ∑_{(v,w)} m′(v, w) v on vertices, and extending by linearity and continuity. We define a pairing (., .): C(q)[V] × C(q)[V] → C(q) for which the vertices form an orthonormal basis. Then U and D are adjoint with respect to this pairing. The pair (Γ, Γ′) is a quantized dual graded graph if D_{Γ′} U_Γ − q U_Γ D_{Γ′} = rI (2) for some fixed r. In the sequel, we will often write U and D for U_Γ and D_{Γ′}. When q = 1, we obtain the dual graded graphs of [Fom], which are equipped with the relation DU − UD = rI. We should note that Fomin also considered the more general relation DU = f(UD) for arbitrary functions f; however, he did not focus on (2) where q is a parameter. If (Γ(q), Γ′(q)) are a pair of quantized dual graded graphs then we say that (Γ(q), Γ′(q)) is a quantization of (Γ(1), Γ′(1)). The basic example of a dual graded graph is Young's lattice of partitions, ordered by containment; see [Fom, Sta]. The following is the basic problem for quantized dual graded graphs. Problem 1.
Find a quantization of Young's lattice. In [LS], we constructed dual graded graphs from the strong (Bruhat) and weak orders of the Weyl group of a Kac-Moody algebra. The dual graded graphs constructed this way include Young's lattice, and closely related graphs such as the shifted Young's lattice. Problem 2. Find a quantization of Kac-Moody dual graded graphs. Remark 1. Equation (2) specializes to DU +U D = I when q = −1 and r = 1. Graphs satisfying this relation were studied in [Lam]. More specifically, in [Lam] we studied only such graphs, called signed differential posets, which arose from labeled posets. The examples constructed in the present paper can also be specialized at q = −1, giving what would be called "signed dual graded graphs". The main example in [Lam] was the construction of a signed differential poset structure on Young's lattice. Since we have been unable to quantize Young's lattice, we have stopped short of explicitly writing the examples in the current article using the notation in [Lam]. q-derivatives and enumeration on quantized dual graded graphs Let f (t) = n≥0 a n t n ∈ C[[t]] be a formal power series in one variable. Define the q-derivative as follows: [n] q a n t n−1 . Here [n] q = 1 + q + . . . + q n−1 denotes the q-analogue of n. We also set [n] q ! := [n] q [n − 1] q · · · [2] q [1] q . Let U, D be formal, non-commuting variables satisfying the relation DU − qU D = r. We assume that U and D commute with the variable q. The following Lemma explains the relationship between the relation DU − qU D = r and q-derivatives. Proof. By linearity and continuity it suffices to prove the statement for f (U ) = U n . For n = 0, the formula is trivially true. The inductive step follows from the calculation We now suppose (Γ, Γ ′ ) is a qDGG with a unique minimum (source) ∅, which we assume has height h(∅) = 0. Let us denote the weight generating The following is an analogue of [Fom,Corollary 1.5.4]; see also [Sta]. Proof. By Lemma 3 we have from which the result follows by induction. More generally, let f (∅ → v → w) denote the weight generating function of paths beginning at ∅, going up to v in Γ, then going down to w in Γ ′ . For Other path generating function problems can be solved by studying the "normal ordering problem" for the relation DU − qU D = r, that is, the problem of rewriting a word in the letters U and D as a linear combination of terms U i D j . We shall not pursue this direction here, but see for example [Var]. q-reflection ) be a pair of graded graphs with height function taking values in [0, n], and such that (2) holds for some fixed r, when applied to all vertices v such that h(v) < n. We call such a pair a partial qDGG of height n. We will construct a partial qDGG (Γ n+1 , Γ ′ n+1 ) of height n + 1, and such that they agree with (Γ n , Γ ′ n ) up to height n. Let us write The height n + 1 vertices of (both) Γ n+1 and Γ ′ n+1 will be given by the set This edge has weight m ′ (v, w ′ ) := m(w, v). We omit the proof of the following, which is the same as the corresponding result for differential posets [Sta] or signed differential posets [Lam]. The quantized Fibonacci poset Let (Γ, Γ ′ ) be a qDGG. If the edge sets of Γ and of Γ ′ are identical and in addition every edge weight m(v, w) (and m ′ (v, w)) of Γ (and Γ ′ ) is a single power q i then we call (Γ, Γ ′ ) a quantized differential poset. For then, Γ(1) would be a differential poset in the sense of Stanley [Sta]. Remark 2. 
We could insist that Γ = Γ ′ as graded graphs, but then in the construction of a quantization of the Fibonacci differential posets we would need to use half powers of q. The vertex set V of (Fib (r) , Fib ′ (r) ) consists of words w in the letters 1 1 , 1 2 , . . . , 1 r , 2 with height function given by summing the letters in the word (all the 1's have the same value). In the notation of the q-reflection algorithm, the vertices v 1 , . . . , v r are obtained from v by prepending 1 1 , 1 2 , . . . , 1 r respectively; the vertices w ′ are obtained from w by prepending the letter 2. The edges (v, w) are of one of the two forms: (1) v is obtained from w by removing the first 1 (one of the letters 1 1 , 1 2 , . . . , 1 r ); (2) v is obtained from w by changing a 2 to one of the 1's, such that all letters to the left of this 2 is also a 2. In either case, let s(v, w) denote the number of letters preceding the letter which is changed or removed to go from w to v. The edges m(v, w) of form (1) have edge weight m(v, w) = m ′ (v, w)q s(v,w) in both Fib (r) and Fib ′ (r) . The edges of form (2) have edge weight m(v, w) = q s(v,w)+1 in Fib (r) , and edge weight m ′ (v, w) = q s(v,w) in Fib ′ (r) . For the rest of this section, we will restrict ourselves to r = 1, and write 1 instead of 1 1 . We now describe the weight of a path from ∅ to a word w in Fib = Fib (1) or Fib ′ = Fib ′ (1) . Given a word w ∈ Fib one has a snakeshape ( [Fom]) consisting of a series of columns of height one or two. For example, for w = 21121 we have the shape . Given such a snakeshape λ, following Fomin [Fom] we say that a Young-Fibonacci-tableau of shape λ is a bijective filling of λ with the numbers {1, 2 . . . , n} so that: (1) In any height two column the lower number is smaller. (2) To the right of a height two column containing the numbers a and b none of the numbers in [a, b] occur. Fomin [Fom] described a bijection between Young-Fibonacci-tableau T of shape λ = λ(w) and paths from ∅ to w in Fib (or Fib ′ ). For example, the tableau 3 5 2 7 6 4 1 corresponds to the path ∅ → 1 → 11 → 21 → 211 → 221 → 2121 → 21121. Lemma 6. Under this bijection the weight of path is equal to wt(T ) in Fib, and equal to wt ′ (T ) in Fib ′ . Proof. This is straightforward, using the description of the bijection on [Fom,p.394]. Thus we have f w Fib = T wt(T ) and f w Fib ′ = T wt ′ (T ) where the sum is over Young-Fibonacci tableau with shape λ(w). It is not clear whether there is a simple way to write the identity that results from Theorem 4. The qDGG on permutations Let V = ⊔ n≥0 S n be the disjoint union of all permutations equipped with the height function h(w) = n if w ∈ S n . Define a graded graph Perm with vertex set V and edge set E consisting of edges (v, w) whenever v ∈ S n−1 is obtained from w ∈ S n by deleting the letter n; define m(v, w) := q n−s , where 1 ≤ s ≤ n is the position of the letter n in w. Define Perm ′ with the same vertex set and edges (v, w) whenever v ∈ S n−1 is obtained from w ∈ S n by deleting the first letter, followed by reducing all letters greater than the deleted letter by one; define m(v, w) := 1 always. For example, in Perm there is an edge from 4123 to 41523 with weight q 3 . In Perm ′ there is an edge from 1423 to 41523 with weight 1. The following result is a straightforward verification of the definitions. Let inv(w) denote the number of inversions of a permutation w. For the pair (Perm, Perm ′ ), we have f w Perm = q inv(w) and f w Perm ′ = 1. 
Thus Theorem 4 expresses the identity (see [EC1]) The qDGG on tableaux Let Y n denote the set of standard Young tableau P of size n with any shape (see [EC2]). We assume the reader is familiar with tableaux, and with Schensted insertion. Let V = ∪ i≥0 Y i with the obvious height function. Define Tab to be the graded graph with vertex set V , and edges (P, P ′ ) ∈ Y n × Y n+1 whenever there is some k ∈ {1, 2, . . . , n + 1} so that P ′ is obtained from P by first increasing the numbers greater than or equal to k inside P by 1, and then Schensted inserting k; declare m(P, P ′ ) = q n+1−k . Define Tab ′ to be the graded graph with vertex set V and edges (P, P ′ ) ∈ Y n × Y n+1 whenever P ′ is obtained from P by removing n; declare m(P, P ′ ) = 1. The following result is straightforward. Fix a standard Young tableau P ∈ Y n . There is a bijection from the set of paths p from the empty tableau ∅ to P in Tab, to the set of standard Young tableau of shape equal to the shape of T . The bijection is obtained by taking the sequence of shapes encountered along p, or equivalently, by taking the recording tableau of the sequence of Schensted insertions given by p. The following Lemma is immediate. Lemma 9. Suppose p is a path from ∅ to P , corresponding to a standard Young tableau Q. Then the weight of p in Tab is equal to q inv(w(P,Q)) , where w(P, Q) ⇔ (P, Q) under the Robinson-Schensted bijection. It follows that Theorem 4 applied to Proposition 8 gives (3), with the terms labeled by permutations w on the left hand side grouped according to the insertion tableau of w. The qDGG on plane binary trees A plane binary tree is a tree T embedded into the plane which has three kinds of vertices: (a) a unique root node r which has exactly 1 child, (b) a number of internal nodes with two children, and (c) a number of leaves with no children. The leaves are numbered {0, 1, . . . , n} from left to right, where n is the number of internal nodes. Let T n denote the set of plane binary trees with n internal nodes. By definition, T 0 consists of the tree ∅, which has a root r, no internal nodes, and a single leaf 0. We now describe a number of combinatorial operations on plane binary trees; see [AS] for further details. Given two plane binary trees T 1 ∈ T p and T 2 ∈ T q we can graft a new plane binary tree T 1 ∨ T 2 ∈ T p+q+1 by placing T 1 to the left of T 2 in the plane, identifying the two root nodes r 1 and r 2 to form a new internal node, and attaching a new root to this internal node: Given a tree T ∈ T p and a position i ∈ {0, 1, . . . , p} indexing a leaf v ∈ T we can splice T at v to obtain two trees T 1 ∈ T i and T 2 ∈ T p − i as follows: draw the unique path P from v to the root r. Then the edges of T weakly to the left of P form the tree T 1 , while the edges of T weakly to the right of P form the tree T 2 . Note that every internal node of T is "given" to either T 1 or T 2 . The following tree has been spliced at the * -ed leaf: We write SG(T, i) = T 1 ∨ T 2 to denote the composition of splicing and grafting. Given a non-empty tree T ∈ T p , we can obtain another tree T * ∈ T p−1 from T by removing the leftmost (or 0) leaf v and erasing the node w which is joined to v: Define a graded graph Tree with vertex set V , and edges (T, T ′ ) whenever T ′ = SG(T, i) for some i; declare that m(T ′ , T ) := q i . Define a graded graph Tree ′ with vertex set V , and edges (T * , T ) for every T = ∅; declare that m(T * , T ) := 1. Proof. Let T ∈ T p . Let T ′ = SG(T, i), where i ∈ {1, 2, . . . , p}. 
Then it is not difficult to see that (T ′ ) * = SG(T * , i − 1). This cancels out all the terms in (D Tree ′ U Tree − qU Tree D Tree ′ )T except for the one corresponding to SG(T, 0) * = T which has coefficient q 0 = 1. To describe the identity of Theorem 4 explicitly, let us define a linear extension of T ∈ T p to be a bijective labeling e : T → {1, 2, . . . , p} of the internal nodes of T with {1, 2, . . . , p}, so that children are labeled with numbers bigger than those of their ancestors. Let E(T ) denote the set of linear extensions of T . Also, let us say that an internal node v is to the left (resp. to the right) of an internal node w if v belongs to the left (resp. right) branch and w belongs to the right (resp. left) branch of their closest (youngest) common ancestor. If e is a linear extension of T ∈ T p , then we may define a permutation w e ∈ S p by reading the labels of the internal nodes from left to right. It is well known (see for example [LR]) that as T varies over T p and e varies over E(T ) we obtain every w ∈ S p exactly once in this way. For example, the following are the three linear extensions of the same tree: q inv(we) and f T Tree ′ = 1. Proof. The claim for Tree ′ is clear. For Tree, we will describe a bijection between E(T ) and paths from ∅ to T . Let e ′ be a linear extension of T ′ and suppose that T = T 1 ∨T 2 is obtained from grafting a splice of T ′ . We may treat T 1 and T 2 as subtrees of T ′ , and in particular restrict e ′ to T 1 and T 2 . Thus we may define a labeling e (depending on e ′ , T ′ , T 1 , and T 2 ) of T by declaring it to be equal to e ′ + 1 on T 1 ∪ T 2 , and equal to 1 on the new internal node present in T but absent in T ′ . It is straight forward to see that e ∈ E(T ). Conversely, given e ∈ E(T ), it is easy to recover e ′ and T ′ by comparing the labels along the leftmost branch of T 2 with the labels along the rightmost branch of T 1 . Recursively applying this procedure we obtain the desired bijection between E(T ) and paths from ∅ to T . Finally, the number of new inversions created in each step of this procedure is equal to the number of internal nodes of T 1 , which in turn is the exponent of q in m(T ′ , T ). This completes the proof. Thus Theorem 2 for (Tree, Tree ′ ) amounts to grouping together the terms of the left hand side of (3) into Catalan number many terms.
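The identity that Theorem 4 yields for the pair (Perm, Perm′), namely the q-analogue ∑_{w∈S_n} q^{inv(w)} = [n]_q! of the count |S_n| = n!, is easy to check by machine. The following is a small verification sketch; the use of sympy and the helper names are our own choices, not part of the paper.

from itertools import permutations
import sympy as sp

q = sp.symbols('q')

def inv(w):
    """Number of inversions of the permutation w (given as a tuple)."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def q_factorial(n):
    """[n]_q! = (1)(1 + q)(1 + q + q^2) ... (1 + q + ... + q^(n-1))."""
    result = sp.Integer(1)
    for k in range(1, n + 1):
        result *= sum(q**i for i in range(k))
    return result

for n in range(1, 6):
    lhs = sum(q**inv(w) for w in permutations(range(1, n + 1)))
    assert sp.expand(lhs - q_factorial(n)) == 0

print(sp.expand(q_factorial(4)))  # q**6 + 3*q**5 + 5*q**4 + 6*q**3 + 5*q**2 + 3*q + 1

Grouping the same left-hand side by the recording tableau, or by the underlying plane binary tree as in the last two sections, reproduces the refinements described above.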
2008-08-03T19:38:30.000Z
2008-08-03T00:00:00.000
{ "year": 2010, "sha1": "6696607cd2b324a980fbbbfe2711902bad515916", "oa_license": null, "oa_url": "https://www.combinatorics.org/ojs/index.php/eljc/article/download/v17i1r88/pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "bf0dbe27bbf57cc5f0ac354da24cf7f9e7f8aa6a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
225941718
pes2o/s2orc
v3-fos-license
Social Media Regulation: Models and Proposals The article deals with the topical issue of social media regulation. It is based on the libertarian theory of economic freedom because, in our understanding, it allows the elaboration of a future-oriented, human-rights-based regulatory approach. This approach is premised on both freedom of speech and the right to private initiative protection in the contemporary media environment. In the analysis, the recently structured Facebook and Instagram Oversight Board for Content Decisions is also discussed. The article presents arguments for the establishment of an internal body (arbitration) that can practically resolve disputes among participants and between participants and any social media platform on a regular basis. Such a body can also support the effective application of the media codes of conduct without governmental involvement and may strengthen self-regulation of platforms. Introduction The issue of how to regulate social media platforms, including social networks, is gaining momentum among stakeholders. It is not an exaggeration to state that sometimes the arguments in favour of the regulatory option turn into regulatory obsession based on the claim that social platforms have a dramatic impact upon our lives and the lives of future generations. In these efforts, some specialists discern attempts to impose "overregulation" on social media without solid guarantees for freedom of expression and freedom of enterprise. No doubt the impact of social networks is paramount today, but just such an idea was also considered about the impact of broadcasting during the last century, provoking similar discussions. However, one cannot be sure how the media landscape will evolve in the upcoming years and how or whether at all social media giants will maintain their powerful positions. Our purpose here is not to make a review of the opinions concerning Internet intermediaries' regulation but to build on some ideas and suggest a practical solution for good social media regulation that does not affect freedom of expression and freedom of private undertaking. The OECD Observer emphasizes "it is one thing to have regulation, it is quite another to have good regulation". In Principles of Good Regulation, the Organisation for Economic Co-operation and Development (OECD, 2002) suggested key principles of good regulation, among which the following question is of relevance: "Is government action justified?" The answer should be that government intervention is based on explicit evidence that government action is "justified, given the nature of the issue, the likely benefits and costs of action (based on a realistic assessment of government effectiveness), and alternative mechanisms for addressing the issue" (OECD, 2002). Another variation of good regulation, which has become popular in the digital society, is smart regulation. The term is usually associated with the smartness of digital technology, but in the EU smart regulation efforts aim simply at reducing the regulatory burdens in EU legislation. The 10 point plan for EU smart regulation suggested by the UK back in 2012 and supported by twelve other member states drew attention specifically to alternatives to EU-regulation (UK, 2012). In the same vein, a new OECD (2019) report "Better Regulation Practices Across the EU" says that regulatory policy must not only be "responsive to a changing environment, but also proactively shape this environment.
It is also important to engage citizens and all stakeholders in the development of laws". Fresh bottom-up proposals are welcome in this process. In our article, we shall treat good regulation as interchangeable with smart regulation. In our view, good regulation is a well thought out and effective model of regulation, non-intrusive and unbiased, which can reconcile different interests and requirements. Ideally, we should consider that better regulation practices enhance both the citizens' interests, allowing full implementation of their rights, and businesses' interests. With respect to social media platforms, their regulation is indispensable to good regulation on the Internet and presupposes independence and minimal governmental involvement. Social Media Regulation: A Brief Overview of Recent Sources Recently, various ideas regarding Internet intermediaries' regulation have been thrown into the public space, expanding the debate between more liberal and more conservative minds. Before any discussion about regulation may take place, it is necessary to clarify the nature of social platforms: to what extent they are media or not, whether these platforms perform a media function, and whether they should be regulated as media or a totally new approach is needed. The complexity of platforms prompts that there can be many regulatory challenges, which require a proper response. According to the Council of Europe's background report on media freedom, regulation and trust issued on the eve of the ministerial conference on the media in Nicosia, Cyprus, platforms and information providers are reconstituting the nature of what "media" are, but are not necessarily respecting established standards of media accountability, transparency and independence. The development of self-regulatory standards often takes place in terms of a loose negotiation between politicians and Internet intermediaries. (Council of Europe, 2020) By and large, these new forms of regulation are risky since they may undermine media freedom and democratic values. They can be non-transparent, arbitrary and lacking the stability of the legal guarantees for the media. Thus, they can put at stake the independence of social platforms and free expression in particular. Measures that are tailored according to ad hoc conditions and inclinations can change the free nature of the whole Internet or amount to a shift from a "neutral Internet" which acts as a mere conduit of information, to a hybrid Internet which is developing new approaches to curating, filtering, shaping, and in general gatekeeping Internet content in ways analogous to mass media. (Council of Europe, 2020) The conclusion of the report is that the design of co-regulatory frameworks needs to be kept completely independent from executive control, from capture and from conflict of interest. In line with the human rights standards and the Committee of Ministers' recommendation on a new notion of media, if intermediaries are defined as "media", not only the responsibilities, but some of the privileges that comprise a part of that status should be available to them (Council of Europe CM Recommendation [2011] on a new notion of media). Similarly, a report, dedicated to the changing paradigm of intermediary liability, claims that "as these platforms (social platforms-B.Z, V.D.) grew, it became increasingly difficult for them to self-regulate the large volume of content flowing through their pipelines" (Sflc.in, 2019, p. 1).
The document deals with intermediary liability practices in India, where in 2018, the Draft Information Technology (Intermediaries Guidelines [Amendment] Rules) ("Draft Rules") was proposed by the government to fight "fake news", terrorist content and obscene content, among others. The new rules placed more stringent obligations on intermediaries to pro-actively monitor content uploaded on their platforms and enable traceability to determine the originator of information. However, these attempts raise hard questions concerning predominantly the acceptable limits on freedom of speech on the Internet. In order to formulate appropriate answers, it should be recalled that in 2017, in a "Joint declaration on freedom of expression and 'Fake News', disinformation and propaganda", the United Nations Special Rapporteur on Freedom of Opinion and Expression, David Kaye (2017) stated that "general prohibitions on the dissemination of information based on vague and ambiguous ideas, including 'false news' or 'non-objective information', are incompatible with international standards for restrictions on freedom of expression, and should be abolished" (p. 3). The UK House of Commons, Digital, Culture, Media and Sports Committee came to analogous conclusions in its final report on disinformation and fake news. Alongside human rights protection, deputies recommended expansion of digital literacy and greater transparency of social media companies instead of imposing new stricter rules on them (UK Parliament, 2019). The more involved and granular the policing becomes, the more it will look like censorship, "which is what it will inevitably become", states Reynolds, one of the opponents of social media regulation. He voices his concern that "to police content of social media speech beyond a very basic level of blocking viruses and the like is a bad idea" (Reynolds, 2019, p. 63). According to Reynolds, it is better to police collusion among platforms, i.e., to apply antitrust scrutiny. As the pressure for regulation will inevitably soar, it is better to regulate in a way that preserves free speech and does not additionally empower tech oligarchs. This inference, however, does not mean that governments should meddle in social media business at all costs. Considering the future of the Internet, another report tackles the cross-border legal challenges online and argues that, from a general perspective, it is not easy to formulate concrete legal actions on the net (Internet and Jurisdiction, 2019). The authors make the admonition that "the regulatory environment online is characterized by potentially competing or conflicting policies and court decisions in the absence of clear-cut standards. The resulting complexity may be detrimental on numerous levels and creates 'high levels of legal uncertainty in cyberspace'" (Internet and Jurisdiction, 2019, p. 48). Free speech can easily fall victim to such uncertainty. The facilitation of freedom of expression, including cross-border expression, is the cornerstone of the liberal and borderless Internet. In the same vein, the UN has also emphasized that the right to freedom of expression on the net is an issue of increasing importance (United Nations, General Assembly, Human Rights Council, 2016). In order to analyze the problems from various perspectives, the Global Status Report bases its findings on a comprehensive survey among its members.
When asked what, if any, negative consequences they foresee if cross-border legal challenges on the Internet are not properly addressed, 59% of the interviewed experts pointed to the risks of the potential restrictions on expression stemming from badly or belatedly addressed cross-border legal issues. "This was one of the strongest concerns among the stakeholders," the authors point out. The report reminds of the huge volume of user-generated content that intermediaries have to deal with on a daily basis. The situation is unique and that is why the role of the Internet intermediaries must be approached "with fresh eyes, free from preconceived notions based on comparisons with the roles of offline intermediaries". Policy-makers and the public should have reasonable expectations for media platforms' activities and, more precisely, accept that they should not abide "by all laws in the world" (Internet and Jurisdiction, 2019, p. 61). All these valuable observations drawn from different sources serve as proof that regulation on the net, and especially social media regulation, represents one of the many intertwined problems generated by digitization. Apparently, efficient solutions related to Internet governance and working jurisdictional decisions can create the necessary safe and free environment that will allow regulation on the net to operate as good regulation and nurture efficient social media regulation. However, concrete regulatory approaches should balance various rights and be workable at the same time. Against this background, as a conceptual basis of our paper we shall use the libertarian theory of economic freedom because, in our understanding, it permits a future-oriented, human-rights-based and innovation-encouraging regulation to be created. To search for a suitable regulatory model, we get inspiration from the publications of the renowned Cato Institute, which has published a series of articles discussing intermediaries' liability from a libertarian perspective. What is important about such an approach is that it makes it possible for policy-makers to elaborate frameworks that protect both freedom of expression and freedom of enterprise online. Further in our discussion, we shall pick some of the points in the article "Why the government should not regulate content moderation of social media" by John Samples (2019), since we consider these insights to be of a more universal nature, able to serve as a basis of comparison between US and European regulatory approaches. These points, in our view, not only create good ground to reconcile various rights when elaborating regulation but also correspond well with Hayek's theory of "spontaneous orders" (the grown or self-generating order), of which the Internet is an example. Hayek describes such order as one in which we "would have less power over the details" than "we would of one which we produce by arrangement" (Hayek, 2013, pp. 1300-1307). Respectively, the regulatory mechanisms that operate vis-a-vis social platforms should take into account their peculiarities and, at the same time, be adequate to the specific nature of the Internet. The Libertarian Approach to Social Media: Basic Premises Tom Standage, deputy editor of The Economist, thinks two features of social media stand out: the shared social environment established on social media and the sense of membership in a distributed community, in contrast to publishing.
In addition, he underlines the undisputable fact that social media represent an economic institution that has "to generate revenue beyond the costs of providing the service" (Samples, 2019). However, each group of people involved in social media communication (users, consumers, advertisers, and managers) is related to speech, and their relationships create "the forum in which speech happens"; that is why concerns about speech on social media are central to any regulatory effort that should be undertaken. That is also the reason why journalists and media leaders represent the most prominent group of those consulted about the Facebook Oversight Board for Content Decisions, a team of 40 experts who will review Facebook's most challenging decisions to allow or remove content from Facebook or Instagram. One of the regulatory options for social media companies is that similarity to publishers may prompt policy-makers to hold them liable for defamation, but that is not the case in the US due to Section 230 of the Communications Decency Act (CDA), which explicitly exempts social media platforms from liability by stating that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". The aim of the Congress was to encourage unfettered expression online, to further economic interests on the Internet and to promote the protection of minors by making interactive computer services and their users self-police the Internet for obscenity and other offensive materials. Valuable for the discussion here is to clarify the stand the US Supreme Court has taken towards private forums of speech over the years. The case-law has supported the independence of these forums to take their own decisions. "The history of public values and social media suggests a strong presumption against government regulation. The federal government must refrain from abridging the freedom of speech, a constraint that strongly protects a virtual space comprising speech" (Samples, 2019). The government has also generally refrained from forcing owners of private property to abide by the First Amendment. The conclusion rooted in American law and practice is that "those who seek more public control over social media should offer strong arguments to overcome this presumption of private governance". Other arguments supporting the principle of free, private initiative can also be put forward. One of the relevant questions is whether big tech companies enjoy a monopoly position due to the network effects they exploit. Although a few tech companies dominate some markets, that does not mean these firms are leaders for good and can never be displaced. David S. Evans and Richard Schmalensee (2017) warned that "the simple networks effect story leads to naïve armchair theories that industries with network effects are destined to be monopolies protected by insurmountable barriers to entry". According to the authors, the flaw here is that such theories, concentrated on successful firms at a point in time, observe their benefits from networking and conclude they would be market leaders forever. However, competition authorities should scrutinize online platforms when there is evidence they break the rules and harm consumers, notwithstanding the network effect benefits they get. More thoughts in favour of competition can be shared at this point.
Giedrojc (2017), who has explored competition and social order thoroughly, underlines that competition makes an especially valuable contribution to social change. "Even the best government regulation is insufficient for upholding liberal order. Competition is the ultimate solvent of power and indispensable dimension of open society" (Giedrojc, 2017, p. 13). There is no creation other than the Internet, with all its ensuing phenomena such as networks and platforms, that better symbolizes open society and its forward-looking paradigm. Having this in mind, it is not certain that governmental regulation of platforms will produce more competition in the online marketplace of ideas. Regulation may simply protect both social media owners and government officials from competition and back the status quo. When commenting on the emergence of broadcasting in the last century, economist Thomas Hazlett (2018) pointed to the fact that the FCC carefully planned the structure and form of television service but also severely limited the number of competing stations, which resulted in the soaring value of the licenses. Hazlett (2018) expands on this issue in his book "The political spectrum: the tumultuous liberation of wireless technology, from Herbert Hoover to the smartphone". He also quotes an expert who claims that "the effect of this policy has been to create a system of powerful vested interests, which continue to stand in the path of reform and changes". In our opinion, nobody wishes such a system to be perpetuated on social media today, because it would stifle diversity, impede development and innovation and generally undermine liberty on the Internet. Terrorism, disinformation, and hateful speech can be seen as strong grounds for governmental regulation of social media. In times of crisis, especially, hierarchies and states are responsible for the security and stability of society. However, American courts have consistently refused to hold social media platforms liable for terrorist acts. In Fields v. Twitter (Fields v. Twitter, Inc., 2018 WL 626800 [9th Cir. Jan. 31, 2018]) and similar cases, plaintiffs failed to demonstrate that ISIS's use of Twitter played an instrumental role in the attacks against them. Though such services cannot be seen as uniquely instrumental in the realization of terrorist plans, any standard of liability that might implicate Twitter in terrorist attacks could prove overbroad (and inconsistent with the First Amendment or with any legal standard of certainty) and also encompass other services that are frequently used by terrorists. On the other hand, it is not uncommon for social media to serve the public interest and to provide opportunities for counter-speech and intelligence gathering. Sometimes, state security services could ask social media platforms to refrain from removing terrorist accounts, as they provide valuable information concerning the aims, priorities, and locations of terrorist actors. Therefore, social intermediaries should not be painted only in black. There can be two other potentially compelling reasons for government action: preventing the harms caused by "fake news" and by "hate speech". The terms may prove vague, and their use may lead to legal confusion. The term "fake news" has come onto the public agenda relatively recently and different definitions have been created, including variations such as mis-, dis-, and mal-information, with their respective consequences.
The EC has also elaborated a definition of fake news, but it is not mandatory for the EU member states. In United States v. Alvarez, the court refused to recognize a general exception to the First Amendment for false speech: "The Court has never endorsed the categorical rule the Government advances: that false statements receive no First Amendment protection" (United States v. Alvarez, 567 U.S. 709 [2012]). In Europe, as a rule, the scales tip towards more regulation and additional requirements for social media platforms, including the threat of huge fines being imposed. The broad framework formulated by the Council of Europe in the recommendation on Internet intermediaries provides that laws, regulations and policies applicable to Internet intermediaries, regardless of their objective or scope of application, including commercial and non-commercial activities, should effectively safeguard human rights and fundamental freedoms, as enshrined in the European Convention on Human Rights, and should maintain adequate guarantees against arbitrary application in practice. In 2018, the European Commission issued a recommendation on measures to effectively tackle illegal content online. The recommendation demands greater responsibility for content governance on the part of platforms. As regards illegal expression, the implementation of the agreed Code of Conduct against illegal hate speech online between the EC and tech giants (Facebook, Twitter, YouTube, and Microsoft) has not fully produced the expected results. Though the fifth round of assessment in June 2020 reported overall positive outcomes, platforms are still lacking in transparency and are not providing users with adequate feedback on the issue of hate speech removals. Concerning fake news, the Commission suggests a set of measures but still considers that self-regulation can contribute to policy responses, provided it is effectively implemented and monitored. Actions such as the censoring of critical, satirical, dissenting, or shocking speech should strictly respect freedom of expression and include safeguards that prevent their misuse. The actions should also be in line with the Commission's commitment to an open, safe, and reliable Internet (European Commission, 2018). The long-term EU intentions in this field aim at adopting a larger document, the Digital Services Act, to update the rules about online liability and define platforms' responsibilities vis-a-vis content. This means that legal regulation will prevail over self-regulation. The European efforts to fight illegal hate speech are also the object of criticism. The main argument is that there is no single universally accepted definition of hate speech. According to some experts, it is debatable whether the competent EU bodies and national authorities should impose censorship and public control, as long as "the EU's broad concept of 'hate speech' covers many forms of expression which are varied and complex: Therefore, the approaches must also be appropriately differentiated" (Pana, 2018). In 2018, the European Commission proposed a new EU law requiring platforms to take down any terrorism-related content within an hour of a notice being issued. The proposal additionally forces platforms to use a filter to ensure such content is not re-uploaded. Should they fail in either of these duties, governments are allowed to fine companies up to 4% of their global annual revenue.
For a company like Facebook, that could mean fines of as much as $680 million (around €600 million). This is widely proclaimed as a necessary measure, though again it is not without its opponents. Critics say that the instrument relies on an overly expansive definition of terrorist content, that an upload filter could be used by governments to censor their citizens, and that removing extremist content could prevent non-governmental organizations from being able to document human rights crimes in zones of conflict and tension (Porter, 2019). In our view, such governmental initiatives are always met with suspicion by more libertarian-oriented persons and groups. The risks of censorship behind them, as well as the elusiveness of the terms, will always provoke protests from human rights activists around the world, who fear that laws regulating hateful and false expression could be abused to silence public debate and crack down on the opposition in authoritarian states. Therefore, the best option to preserve freedom of expression on the Internet is to encourage social platforms to put consistent effort into self-regulation. The first attempt in this direction comes from Facebook. The company has designed a new regulatory mechanism, spending two years (2018-2020) collecting and discussing input from stakeholders and experts globally.

The Facebook Oversight Mechanism

In May 2020, Facebook announced the first members of the Oversight Board; the new structure is going to decide on the most difficult and significant cases regarding content. Behind the proposal is Mark Zuckerberg's (2020) idea that "online content should be regulated with a system somewhere between the existing rules used for the telecoms and media industries". When the Facebook chief executive officer (CEO) shared his thoughts about a new system for content governance and enforcement for the first time, he referred to the worldwide impact Facebook exerts and the responsibility the platform has: "Facebook should not make so many important decisions about free expression and safety on our own". The Board, which is central to the overall oversight mechanism, will be free to choose and to consider cases referred to it by Facebook and users' appeals following the existing appeals process. It will entertain cases in which Facebook has decided to leave up or remove content from Facebook or Instagram according to its Community Standards. To avoid conflicts of interest, current or former Facebook employees and government officials will not be able to serve as Board members, among other disqualifications. At first glance, the new Facebook model of regulation, presenting a triad of relationships between the company, a board and a trust supporting the board and appointing board members, may resemble the structure of a public service media (PSM). However, Facebook is not a medium in the true sense of the word and it differs from any other media system. There were, however, statements that the platform performs a public function, but this function is not explicitly the PSM function that we know from the legacy media era. We have already stressed in this article that the nature of every social platform, together with the functions it discharges, needs clarification, in particular to what extent (if at all) such platforms are media. In addition, a public function of a social media platform would be something new and its characteristics should be well studied and determined.
By and large, we view the new Facebook controlling mechanism as an encouraging attempt to establish a more responsible self-regulation of the platform. A more detailed review of its operation and results can be furnished when sufficient practice is accumulated. At the beginning, some details seem problematic and this has to be taken into account while observing the next steps of the Board.

• The Board is called a "Facebook High Court" by M. Zuckerberg, but it hardly bears the features of such a body. For instance, there is no explicit requirement for lawyers to be members of the deciding panel. Conditions for Board members are very broad and are the following: "For the board to be successful, all potential members should embody certain principles, such as a commitment to the board as an institution. In addition, we are seeking candidates who are experienced at deliberating thoughtfully and collegially as open-minded contributors on a team, skilled at making and explaining decisions based on a set of policies, and familiar with matters relating to digital content and governance, including free expression, civic discourse, equality, safety, privacy and technology. Facebook will extend a limited number of offers to candidates to serve on the Oversight Board as co-chairs. If and when those members accept the role, they will then work together with us to select, interview and make offers to candidates to fill the remaining board positions, over time. All members, including the co-chairs, will be formally appointed by the trustees." Though not expressly required, there are still a few lawyers elected to sit on the Board, but if it is expected to perform as a High Court, it is not certain the legal expertise will be sufficient. In fact, the Charter does not provide for the procedural guarantees of the Board's activities. Possibly, they will be outlined in greater detail in the Board guidelines. However, at this stage it is not clear what type of body has been structured: an adjudicating one or a policy commission.

• The Board will select cases to look at, and this type of adjudication resembles private adjudication, similar to arbitration. It seems to be flexible and speedy. Under the arbitration system, however, disputing parties are allowed to choose the arbitrators. Each party chooses an arbiter and they both choose the third one, who acts as a chair of the deciding body. There is no possibility of this type under the new FB regulatory scheme and, in fact, it lacks the advantages of the arbitration procedure.

• The Board will formulate policy advice to the company. In order to escape the shortcomings of sheer declarations, it should be openly stated that such policy proposals will be based on constant and consistent court practice. It is doubtful whether non-binding recommendations can be effective tools for guiding FB's content management and for presenting models of measures that could be used by other platforms and bodies.

• The trust supporting the Board will be funded by Facebook. Under this condition, it is not certain that the whole mechanism will be financially and operationally independent. In arbitration disputes, parties pay an arbitration fee and the procedure is carried out in their interest. The problem with costs is not well settled within the new FB mechanism. From a financial perspective, and in order to guarantee independence, at least relative independence, of the adjudicatory body, parties involved in the procedure may pay fees instead of leaving the overall funding to be secured by FB.
• The first portion of board members will be appointed by Facebook and they will then propose future members; such an approach towards Board membership can also compromise its independence, since FB is powerful enough to impose its nominations from the very beginning.

Despite the pitfalls and unpredictable outcomes of the new regulatory approach undertaken by Facebook, we think that this is a positive move towards strengthening self-regulation of the platform as well as of social platforms at large, which may follow this example or modify it according to their needs. It is another story that through the Oversight Board FB pursues to a greater extent the protection of the company's public image and not so much the protection of the rights of its users. The Board's case law is expected to be a valuable contribution to the theory and practice of media ethics and to the creation of more accountable social media. The public relies on the FB Board to entrench the principles of human rights and the rule of law and to promote high-quality content. Time will show whether such hopes have sufficient grounds.

Establishment of an Arbitration Mechanism at Social Media Platforms

We now come to the crux of our work: to propose an internal body for social media that can practically resolve disputes among participants, and between participants and the social media platform, on a continuous basis. Social media serve as organizations that provide a space for the creation and exchange of information among a huge number of users and perform as intermediaries or organizers of an information forum. They cannot be held responsible for the content of the information created and exchanged by third persons; however, since they facilitate debate, they should take steps to properly settle disputes related to the debate. A possible solution for them can be the establishment of an arbitration mechanism (tribunal) for resolving disputes through its institutionalization by the social media themselves. Such arbitration should be included in the terms and conditions offered to users. The arbitration mechanism will not be in contradiction with other bodies like the FB Oversight Board, for instance, since the latter will treat the most important cases only, while the former will operate routinely. Inspiration for this idea can be found in the United Nations Commission on International Trade Law (UNCITRAL) Model Law on International Commercial Arbitration (1985), with amendments as adopted in 2006. The purpose of the Model Law is to entrench modern, fair, and harmonized rules on commercial transactions and to promote the best commercial practices worldwide. The law is designed to assist states in modernizing their laws on arbitral procedure. It reflects universal consensus on key aspects of international arbitration practice, having been accepted by states of all regions and systems (UNCITRAL, 1985). According to eminent Professor Roy Goode (2004), "arbitration is a form of dispute resolution in which the parties agree to submit their differences to a third party or a tribunal for binding decisions" (p. 1162). We have to distinguish the roles of interested parties in this process. Within the sovereignty of states, in order to protect citizens, the obligation to defend national security and counter terrorism lies with the states. In such cases, governments can adopt special laws protecting high public interests based on internationally recognized principles.
States can also adopt multilateral conventions supported by enforcement mechanisms (as in the case of legislation on money laundering, cybercrime, drug trafficking, trafficking in human beings, etc.). The elaboration of these pieces of legislation and conventions should be transparent, based on shared human rights values, and include the efforts of various stakeholders. Outside these legitimate interests, it is not justified for states to impose burdensome administrative requirements on structures like platforms, to curb the freedom of private entities or to meddle in business. Regulatory measures have to abide by the proportionality test, the first part of which represents the principle of minimal impairment of the right or liberty. The attempts of a number of nation-states to impose controlling, even censoring, functions on social platforms generate problems related both to the right to freedom of expression and to the right to free initiative. On the one hand, government interference can suppress certain types of speech and have a chilling effect on expression in general, or affect the economic independence of companies. Yet, on the other hand, there can be controversies between the participants in the information forum, as well as between the participants and the social media concerning content, and accordingly claims for the removal of harmful and offensive content, in which states should not step in. The setting up of arbitration mechanisms at social platforms can be related to the specific features of social platforms and the Internet environment they operate in. The establishment of special dispute resolution bodies is not a novelty for organizations authorized to tackle Internet matters. For instance, the Internet Corporation for Assigned Names and Numbers (ICANN) helps coordinate the Internet Assigned Numbers Authority (IANA) functions, which are key technical services critical to the continued operations of the Internet's underlying address book, the Domain Name System (DNS). The body pursues the Uniform Domain-Name Dispute-Resolution Policy, which provides for "agreement, court action, or arbitration before a registrar will cancel, suspend, or transfer a domain name". Expedited administrative proceedings can be initiated by the right holder through filing a complaint with an approved dispute-resolution service provider (ICANN, n.d.).

The Model of Stock Exchange Arbitration Mechanism

Arbitration tribunals, being institutionalized units of private, non-governmental adjudication, are inherent in self-governing and self-regulating business organizations such as regulated markets for securities and other financial instruments. The most typical representative of these markets is the stock exchange. A stock exchange represents a club organization based on the membership of securities traders. The stock exchange creates and enforces rules that regulate both the membership and the trade. Disputes shall be settled by special arbitrators organized at the stock exchange arbitration tribunal (court). The membership of the club is contractual and it is mandatory for any member to accept and abide by the so-called "arbitration clause". The clause requires any dispute regarding financial instruments trading and club membership to be decided by the listed arbitrators chosen by the parties accordingly. The arbitrators included in the public list are persons of high professional and moral standing.
The stock exchange itself is not responsible for the arbitration decisions, since it is often involved in the disputes. The costs of the arbitration decisions (awards) shall be borne by the parties to the dispute. It is also a principle that the dispute settlement rules are created by the stock exchange itself. Social Media and the Arbitration Model Social media is a business and club-like organization (see the opinion of Tom Standage on p. 5) and its rules are binding for the participants in the information forum. In this sense, it can be viewed as an institution similar to a stock exchange. This similarity allows the transposition of the arbitration model to social media and the setting up of such unit at social media platforms. Exchange underpins the operation of both entities (in the one case, it is about exchange of information and ideas, while, in the other, it is about exchange of special goods, such as securities and financial instruments) and their organization is rooted in the principle of membership of participants (terms and conditions acceptance). In the context of this similarity, the specific features of the stock market and of social media cannot be an obstacle to the establishment of an arbitration tribunal at the social media platforms. Arbitration is initially a mechanism for adjudication of commercial disputes but at the stock exchange traders represent many non-commercial persons. The users of social media services also comprise numerous non-commercial persons. In our view, there is no fundamental impediment to using this method by non-traders, if there is a contractual agreement for its implementation. The terms and conditions can bind users of their services through the incorporation of an arbitration clause. By the arbitration procedure disputes about the content of the information on social platforms could be resolved in an impartial and professional manner by unbiased and professional arbitrators selected by the participants themselves. These arbitrators should be recognized media lawyers and professionals with high personal integrity. The arbitration process for resolving disputes is significantly faster and cheaper than litigation. We shall quote again Professor Goode (2004) who stressed that due to its "consensual nature the arbitration mechanism avoids unnecessary delay or expense" (pp. 1174-1175). Arbitration cases are in principle one-instance cases and in exceptional and rare instances only a court can challenge the arbitration awards. Renowned Professors Loss and Seligman (1995) draw attention to the fact that under US securities' legislation courts have limited power to review arbitration awards (at the stock exchanges-B.Z., V.D.) on such grounds as an award being made in "manifest disregard of the law", or its being "completely irrational", or "arbitrary and capricious". A court can also void an arbitration agreement if it finds that there was fraud in the inducement of the arbitration clause itself. (p. 1139) Therefore, the court is not completely isolated in the process of adjudication but can interfere to protect parties' interests in exceptional cases when the arbitration threatens the stability of the legal order. The arbitration settlement of disputes is an opportunity the mediating function of social media to be consolidated. Arbitration will also liberate the platforms from the tasks of censors and controllers of content imposed by legislation in some countries. 
The adoption of an arbitration clause may restore public trust in social media and strengthen their capability to self-regulate. The recognition of this method by the nation states on whose territories the social media operate may be accomplished either by the adoption of appropriate legislation or by concluding multilateral international treaties. The logic of creating and implementing such a model requires, as a first step, an arbitration unit to be established in the nation states where social media operate. The arbitration institutionalization depends on the creation of a representative office in the territory of each state in which arbitration units can be set up.

Conclusion

The proposition of an arbitration model for settling disputes at social media platforms comprises an approach that assures a wide space for self-regulation of social media. It can better safeguard both freedom of expression and free business initiative. At the same time, this model is also a form of media protection against unjustified and arbitrary state regulatory interventionism, which may easily jeopardize freedom of expression and economic freedom. Social media moderation can prove to be more effective in conflicting cases than increases in government power. From a libertarian perspective, Samples (2019) shared his suspicion that when government imposes regulation on social media, "government officials may attempt directly or obliquely to compel tech companies to suppress disfavored speech", which may result in "public-private censorship" (Samples, 2019). The 2017 report of the United Nations Educational, Scientific, and Cultural Organization (UNESCO), "Fostering Freedom Online: The Role of Internet Intermediaries", whose aim was to shed light on how Internet intermediaries both foster and restrict freedom of expression across a range of jurisdictions, circumstances, technologies, and business models, came to similar conclusions. Three case studies are included in the text as an illustration of how an Internet user's freedom of expression hinges on the interplay between a company's policies and practices, government policy, and geopolitics. These inferences are in harmony with Hayek's vision that it is not that freedom is an impracticable ideal, but that "we have tried it the wrong way" (Hayek, 2013, pp. 489-496). Contemplating these issues, solutions that generate minimal risks for human rights online should be sought. Hayek also reminds us to preserve "what is truly valuable in democracy". Having all these arguments in mind, one should recall that the UN Guiding Principles on Business and Human Rights (2011) require that "business enterprises should establish or participate in effective operational-level grievance mechanisms for individuals and communities who may be adversely impacted". These mechanisms should be people-centered, easy to implement and should generate mutual trust. In addition, it is worth remembering the advice of the European Court of Human Rights (ECHR) that
Indoxyl Sulfate Affects Glial Function Increasing Oxidative Stress and Neuroinflammation in Chronic Kidney Disease: Interaction between Astrocytes and Microglia

Indoxyl sulfate (IS) is a protein-bound uremic toxin resulting from the metabolism of dietary tryptophan which accumulates in patients with impaired renal function, such as chronic kidney disease (CKD). IS is a well-known nephrovascular toxin but little is known about its effects on central nervous system (CNS) cells. Considering the growing interest in the field of CNS comorbidities in CKD, we studied the effect of IS on CNS cells. IS (15–60 μM) treatment in C6 astrocyte cells increased reactive oxygen species release and decreased nuclear factor (erythroid-derived 2)-like 2 (Nrf2) activation, and heme oxygenase-1 (HO-1) and NAD(P)H dehydrogenase quinone 1 expression. Moreover, IS increased Aryl hydrocarbon Receptor (AhR) and Nuclear Factor-kB (NF-kB) activation in these cells. Similar observations were made in primary mouse astrocytes and mixed glial cells. Inducible nitric oxide synthase and cyclooxygenase-2 (COX-2) expression, tumor necrosis factor-α and interleukin-6 release and nitrotyrosine formation were increased by IS (15–60 μM) in primary mouse astrocytes and mixed glial cells. IS increased AhR and NF-kB nuclear translocation and reduced Nrf2 translocation and HO-1 expression in primary glial cells. In addition, IS induced cell death in neurons in a dose-dependent fashion. Injection of IS (800 mg/kg, i.p.) into mice induced histological changes and increased COX-2 expression and nitrotyrosine formation in the brain tissue. Taken together, our results show a significant contribution of IS in generating a neurotoxic environment, and it could also have a potential role in neurodegeneration. IS could also be considered a potential therapeutic target for CKD-associated neurodegenerative complications.

INTRODUCTION

Neurodegenerative diseases have become a growing health burden and, in our aging population, are often linked to other comorbidities. Oxidative stress and neuroinflammation contribute to the pathogenesis of neuronal degeneration (Guo et al., 2002) and can cause cell membrane damage from lipid peroxidation, changes in protein structure and function due to protein oxidation, and structural DNA damage, hallmarks of several neurodegenerative diseases (Adams and Odunze, 1991; Smith et al., 1994; Petersén et al., 1999; Frank-Cannon et al., 2009). The central nervous system (CNS) is particularly sensitive to oxidative stress, probably because of its high oxygen demand and the presence of polyunsaturated fatty acids and low levels of glutathione (GSH; Richardson et al., 1990; Roger et al., 1997). Increasing reactive oxygen species (ROS) production can exacerbate the expression of inflammatory mediators, as detected in patients with neurodegenerative diseases (Hsieh and Yang, 2013). Chronic kidney disease (CKD) is characterized by a progressive loss of renal function that, in its terminal phase, shows signs and symptoms of uremic syndrome (Vanholder et al., 2001). Patients with CKD have many comorbidities such as immune disorders, with the coexistence of immunodeficiency and immune activation, and neurological complications that largely contribute to the morbidity and mortality of this disease (Buchman et al., 2009; Krishnan and Kiernan, 2009; Marzocco et al., 2010).
CKD is frequently associated with cognitive impairment and, among patients with terminal CKD receiving haemodialysis, more than 85% have cognitive deficits (Krishnan and Kiernan, 2009). Cognitive impairment in CKD is also associated with poorer clinical outcomes (Sehgal et al., 1997; Kimmel et al., 1998; Murray and Knopman, 2010; Radic et al., 2010). Patients with CKD are also at higher risk of cognitive decline and even dementia (Seliger et al., 2004; Wang et al., 2010). Causes of cognitive impairment in CKD are multifactorial and include cerebrovascular disease, renal anemia, secondary hyperparathyroidism, dialysis disequilibrium, and uremic toxin accumulation. Plasma levels of uremic toxins increase as CKD progresses, and they are believed to be the main cause of cognitive impairment (Krishnan and Kiernan, 2009). However, the exact role or mechanism of uremic toxins in cognitive disorders has not been determined yet. One of the most important uremic toxins is indoxyl sulfate (IS), a protein-bound uremic toxin which is not effectively eliminated by dialysis. IS is a nephro-vascular toxin (Niwa, 2010) that causes nephrotoxicity especially on tubular cells, inhibits proliferation of endothelial cells and is an inducer of free radicals (Dou et al., 2007). Moreover, it has been reported that IS enhances the inflammatory response and ROS in LPS-stimulated macrophages (Adesso et al., 2013). Among various uremic toxins, IS is a likely candidate capable of triggering cerebral dysfunction in kidney disease (Watanabe et al., 2014). Therefore, we chose to investigate the effects of IS on glial cells and the impact on neuronal survival, all primary aspects involved in CNS homeostasis.

Abbreviations: AhR, Aryl hydrocarbon Receptor; CKD, chronic kidney disease; CNS, central nervous system; COX-2, cyclooxygenase-2; DCF, 2′,7′-dichlorofluorescein; DPI, diphenyleneiodonium; ELISA, enzyme-linked immunosorbent assay; EP, endogenous peroxidase; H2DCF-DA, 2′,7′-dichlorofluorescin diacetate; H2O2, hydrogen peroxide; HO-1, heme oxygenase-1; HRP, horseradish peroxidase; IL-6, interleukin-6; iNOS, inducible nitric oxide synthase; IS, indoxyl sulfate; LDH, lactate dehydrogenase; NAC, N-acetylcysteine; NF-kB, nuclear factor-kB; NQO1, NAD(P)H dehydrogenase quinone 1; Nrf2, nuclear factor (erythroid-derived 2)-like 2; PBS, phosphate-buffered saline; PDTC, pyrrolidine dithiocarbamate; ROS, reactive oxygen species; TNF-α, tumor necrosis factor-α.

Reagents

All reagents and compounds, unless stated otherwise, were purchased from Sigma Chemicals Company (Sigma, Milan, Italy).

Cell Culture

In Vitro Studies

The C6 glioma cell line was obtained from the American Type Culture Collection (ATCC; Manassas, VA, United States). C6 cells were cultured in DMEM, 10% FBS (mL/L), penicillin/streptomycin (100 units/0.1 mg/mL) and 2 mM L-glutamine at 37 °C in a 5% CO2 atmosphere and passaged at confluence using a solution of 0.025% trypsin and 0.01% EDTA. This cell line was originally derived from rat brain tumors and has oligodendrocytic, astrocytic and neuronal properties (Benda et al., 1968; Parker et al., 1980). C6 cells are widely used as an astrocyte-like cell line (Quincozes-Santos et al., 2009).

Ex Vivo Studies: Primary Astrocytes, Microglia and Neurons

Cultures of mixed glial cells from cortex were prepared from postnatal days 1-2 mouse pups (female C57BL/6 mice; Harlan Laboratories, Udine, Italy).
Mice were housed under specific pathogen-free conditions and fed with standard chow diet at the University of Messina, Department of Chemical, Biological, Pharmaceutical and Environmental Sciences. The animal experiments were performed according protocols following the Italian and European Community Council for Animal Care (DL. 26/2014). Cerebral cortices were excised, meninges, olfactory bulb and thalami removed, and the hemispheres were transferred to petri dishes containing HBSS and were cut into four small pieces. Brains were centrifuged for 1 min at 200-300 g. The supernatant was removed and the pellet was incubated with HBSS/10 mM HEPES buffer, 0.5 mg/ml Papain, 10 µg DNAse solution for 25 min at 37 • C. The extracted cells were centrifuged for 5 min at 200-300 g and the pellet was resuspend in BME medium (10% FBS and 0.5% penicillin/streptomycin). The cell suspension was filtered through a 70-µm cell strainer to remove debris. The extracted cells were suspended in BME medium (10% FBS and 0.5% penicillin/streptomycin) in 75 cm 3 flasks. The medium was changed after 48 h and then twice per week (Gelderblom et al., 2012). After 20 days, in some flasks, to obtain only astrocytes in the culture, microglia were dislodged using an orbital shaker (200 rpm for 1 h, 37 • C). Moreover, in order to further remove residual microglia from the remaining cell monolayers, it was used a 60-min exposure (50 mM) to the lysosomotropic agent Leu-Leu-OMe (<5% microglia, referred to some microglial cells not dethached from the treatments, was deteced by flow cytometry using anti-Iba1 as antibody; Marinelli et al., 2015). Dissociated cell cultures of mouse hippocampus and cortex were established from day 16 C57B/6J mouse embryos, as previously described (Fann et al., 2013). Hippocampal and cortical neurons were plated in 35, 60, or 100-mm diameter polyethylenimine-coated plastic dishes. Primary neurons were maintained in Neurobasal medium containing 25 mM of glucose, B-27 supplement (Invitrogen), 0.001% gentamycin sulfate, 2 mML-glutamine, and 1 mM HEPES (pH 7.2) at in 5% CO 2 atmosphere 37 • C. Approximately 95% of the cells in such cultures were neurons and the remaining cells were astrocytes. Cell Treatment C6 cells and primary astrocytes and mixed glial cell cultures were plated 24 h before the experiments. The cellular medium was then replaced with fresh medium and cells were treated with IS (15-60 µM) for 24 h in all experiments, except for NF-kB and Nrf2 evaluation and AhR activation, where IS was added to cells for 20 min and 1 h, respectively. Primary hippocampal and cortical neuronal cultures were plated for 2 weeks before the experiments. Then the cells were treated with IS (15-60 µM) for 24 h. For the experiments, we considered the list of uremic toxins provided by the European Uremic Toxin Work group (Vanholder et al., 2003) and thus used the IS concentration range found in the cerebrospinal fluid of CKD patients (Hosoya and Tachikawa, 2011). Measurement of ROS Reactive oxygen species production was evaluated by the probe H 2 DCF-DA as previously reported (Pepe et al., 2015). H 2 DCF, in presence of ROS, is rapidly oxidized to the highly fluorescent DCF. C6 (3.0 × 10 5 cells/well) and primary astrocytes and mixed glial cell cultures (1.5 × 10 5 cells/well) were plated into 24-well plates and then IS (15-60 µM) was added. After 24 h cells were collected, washed with PBS and incubated in PBS containing H 2 DCF-DA (10 µM) at 37 • C. 
Cellular fluorescence was evaluated using fluorescence-activated cell sorting analysis (FACSscan; Becton Dickinson) and elaborated with Cell Quest software. In some experiments in C6 cells, either DPI (10 µM), which has frequently been used to inhibit ROS production mediated by flavoenzymes, or NAC (2 mM), a free radical scavenger as well as a major contributor to the maintenance of cellular GSH, was added 1 h before IS. In other experiments in C6 cells, PDTC (200 µM) or CH-223191 (1 µM), a ligand-selective antagonist of the AhR, was added 1 h before IS.

Immunofluorescence Analysis with Confocal Microscopy

For the immunofluorescence assay, C6 cells (3.0 × 10^5/well), primary astrocytes and mixed glial cells (2.0 × 10^5/well) were seeded on coverslips in 12-well plates and treated for 1 h with IS (30 µM). In some experiments with C6 cells, DPI (10 µM) and NAC (2 mM) were added 1 h before IS. In other experiments, CH-223191 (1 µM) was added 1 h before IS to C6 cells. Then cells were fixed with 4% paraformaldehyde in PBS and permeabilized with 0.1% Triton X-100 in PBS. After blocking with BSA and PBS, cells were incubated with rabbit anti-Nrf2 antibody (Santa Cruz Biotechnologies; sc-722; used at dilution 1:250), with mouse anti-AhR antibody (Abcam; ab2769; used at dilution 1:250) or with rabbit anti-p65 antibody (Santa Cruz Biotechnologies; sc-372; used at dilution 1:250). The slides were then washed with PBS three times and fluorescein-conjugated secondary antibody (Immuno Reagents; used at dilution 1:2000) was added for 1 h. DAPI was used for counterstaining of nuclei. Coverslips were finally mounted in mounting medium and fluorescence images were acquired using a laser confocal microscope (Leica TCS SP5) as previously reported (Del Regno et al., 2015).

TNF-α and IL-6 Determination

Tumor necrosis factor-α and IL-6 concentrations in the supernatant of cultured primary astrocytes and mixed glial cells stimulated for 24 h with IS (15-60 µM) were determined by an ELISA assay. For this we used commercially available kits for murine TNF-α and IL-6 (e-Biosciences, San Jose, CA, United States) as previously reported (Marzocco et al., 2015).

Cytotoxicity Assay on Primary Cortical and Hippocampal Neuronal Cultures

The cytotoxic potential of IS (15-60 µM) on primary neuronal cultures after 3 h of treatment was assessed using the Cytotoxicity Detection Kit PLUS (LDH) (Roche) according to the manufacturer's instructions. This assay is based on the evaluation of LDH activity. Three controls are included in the evaluation: the first was the background control (assay medium), the second was the low control (untreated cells), and the last was the high control (maximum LDH release). To determine the experimental absorbance values, the average absorbance values of the samples and controls were calculated and the absorbance value of the background control was subtracted. The percentage of cytotoxicity was determined using the equation: cytotoxicity (%) = (experimental value - low control) / (high control - low control) × 100.
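To make the arithmetic of the LDH-based assay explicit, the following minimal sketch (in Python) works through the background subtraction and the percentage formula just described. It is only an illustration: the function, variable names and absorbance values are hypothetical placeholders and are not taken from this study.

```python
# Minimal sketch of the LDH-based cytotoxicity calculation described above.
# All absorbance values are hypothetical placeholders, not data from this study.

def percent_cytotoxicity(sample_abs, background_abs, low_control_abs, high_control_abs):
    """Return % cytotoxicity after background subtraction, following the
    (experimental - low control) / (high control - low control) x 100 scheme."""
    exp_value = sample_abs - background_abs   # background-corrected treated sample
    low = low_control_abs - background_abs    # spontaneous LDH release (untreated cells)
    high = high_control_abs - background_abs  # maximum LDH release (lysed cells)
    return (exp_value - low) / (high - low) * 100.0


if __name__ == "__main__":
    # Hypothetical triplicate absorbance readings (arbitrary units)
    background = 0.10                       # assay medium only
    low_control = 0.25                      # untreated neurons
    high_control = 1.20                     # lysed neurons (maximum LDH release)
    treated_samples = [0.48, 0.52, 0.50]    # e.g., neurons exposed to IS

    mean_abs = sum(treated_samples) / len(treated_samples)
    print(f"Cytotoxicity: {percent_cytotoxicity(mean_abs, background, low_control, high_control):.1f}%")
```

With these placeholder readings the sketch reports roughly 26% cytotoxicity; in the actual assay the absorbances come directly from the plate reader and the same formula is applied following the manufacturer's instructions.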
IS was dissolved in PBS and injected into mice (800 mg/kg, i.p., given once) (Ichii et al., 2014). After 3 h of treatment, animals were sacrificed and kidneys, brains and serum were collected and stored for the analysis.

IS Serum Evaluation by HPLC

IS levels in mice serum were evaluated according to the methods of Zhu et al. (2011) as previously reported.

Serum Nitrite/Nitrate, TNF-α, and IL-6 Evaluation

Nitrite/nitrate, TNF-α and IL-6 release was evaluated on serum samples of mice treated with IS (800 mg/kg) for 3 h. Serum nitrite/nitrate (NOx) concentration is a marker of NO levels. For the evaluation, serum samples were incubated with FAD (50 µM), NADPH (1 mM), and nitrate reductase (0.1 U/mL). The samples were then incubated with sodium pyruvate (10 mM) and LDH (100 U/mL) for 5 min. The total NOx concentration was measured by the Griess reaction, adding 100 µL of Griess reagent (0.1% naphthylethylenediamine dihydrochloride in H2O and 1% sulfanilamide in 5% conc. H3PO4; 1:1 v/v) to 100 µL of the serum samples, each in triplicate. The optical density at 550 nm (OD550) was measured in a Titertek microplate reader (Dasit, Cornaredo, Milan, Italy) and the NOx concentrations (µM) in the samples were calculated from a standard curve of sodium nitrite (Bianco et al., 2012). TNF-α and IL-6 concentrations in mouse serum were assessed by ELISA (e-Biosciences, San Jose, CA, United States).

Histology and Immunohistochemistry

For the histological examination, kidney and brain from sacrificed mice were immediately incised and fixed in 10% formalin. For the morphological evaluation, paraffin-embedded 4 µm sections were stained with haematoxylin and eosin (H&E). For the immunohistochemistry analysis, 4-µm-thick sections of the brain and kidney tissue were collected on silane-coated glass slides (Bio-Optica, Milan, Italy). Immunohistochemical staining was performed using HRP-conjugated antibodies. Antigen retrieval pretreatments were performed using a HIER citrate buffer pH 6.0 (Bio-Optica, Milan, Italy) for 20 min at 98 °C. EP activity was quenched with 3% H2O2 in methanol and sections were treated with a blocking solution (MACH1, Biocare Medical LLC, Concord, CA, United States) for 30 min each. Slides were then incubated overnight at 4 °C with primary antibody diluted in PBS (0.01 M PBS, pH 7.2). Antigen-antibody binding was detected by an HRP polymer detection kit (MACH1, Biocare Medical LLC, Concord, CA, United States). Antibody deposition was visualized using the DAB chromogen diluted in DAB substrate buffer and the slides were counterstained with haematoxylin. Between all incubation steps, slides were washed two times (5 min each) in PBS. For each tissue section, a negative control was performed using an irrelevant mouse or rabbit Ab.

Data Analysis

Data are presented as mean ± standard error of the mean (SEM), showing the combined data of at least three independent experiments, each in triplicate. Statistical analysis was performed by analysis of variance (ANOVA), and multiple comparisons were made by Bonferroni's test. A P-value lower than 0.05 was considered significant.

IS Enhanced ROS Release in C6 Cells

In order to assess the effect of IS on oxidative stress in C6 cells, we evaluated intracellular ROS production. Our results indicated that IS, at all tested concentrations (15-60 µM), induced a significant and concentration-dependent increase in ROS production (P < 0.001 vs. control; Figure 1A). We also examined ROS production in the presence of DPI (10 µM) and NAC (2 mM). As shown in Figure 1A, DPI and NAC significantly inhibited ROS release induced by IS (P < 0.001 vs. IS alone, Figure 1A).

IS Reduced Nrf2 Nuclear Translocation in C6 Cells

Following its activation, Nrf2 translocates into the nucleus and regulates cell protective gene expression. We labeled Nrf2 with a green fluorescence to track the influence of IS (30 µM) added for 1 h. In the presence of IS, we observed a reduction in Nrf2 nuclear translocation (Figure 1B).
To study the mechanisms of the IS-induced reduction, we also examined Nrf2 nuclear translocation after treatment with IS (15-60 µM) in the presence of DPI and NAC. As shown in Figure 1B, NAC more than DPI increased Nrf2 nuclear translocation compared to IS alone (P < 0.01 vs. IS, Figure 1B).

FIGURE 2 | Effect of IS (30 µM) on p65 nuclear translocation in C6 cells in presence of the antagonists CH-223191, DPI and NAC (A). Nuclear translocation of the NF-kB p65 subunit was detected using immunofluorescence confocal microscopy. Scale bar, 10 µm. Blue and green fluorescences indicate localization of the nucleus (DAPI) and p65, respectively. Analysis was performed by confocal laser scanning microscopy. Effect of IS (15-60 µM) on ROS formation (B), evaluated by means of the probe H2DCF-DA, in C6 cells in presence of the NF-kB inhibitor PDTC. Values are expressed as mean fluorescence intensity (n = 9). ••• denotes P < 0.001 vs. control. *** denotes P < 0.001 and * denotes P < 0.05 vs. IS alone.

IS Reduced HO-1, NQO1, and SOD Expression in C6 Cells

Enzymes dealing with oxygen radicals include HO-1, NQO1, and SOD. In order to assess their expression profile in the presence of IS, we treated C6 cells with IS (15-60 µM). After 24 h, we observed a decrease in HO-1 and NQO1 expression (P < 0.05 vs. control for HO-1, P < 0.01 vs. control for NQO1; Figures 1C,D). A weak inhibition by IS was observed on SOD expression (Figure 1E).

IS Induced AhR Activation in C6 Cells

The Aryl hydrocarbon Receptor (AhR) is the believed binding partner of IS. Therefore, we investigated AhR activation, through a green fluorescent labeling, in the presence of IS (30 µM) and DPI. After 1 h, the nuclear presence of AhR was increased after IS treatment and the IS effect could partially be blocked by DPI (Figure 1F). To evaluate the possible involvement of AhR in the ROS release induced by IS, we analyzed ROS production in the presence of the AhR inhibitor CH-223191 (1 µM). CH-223191 significantly reduced IS-induced ROS production (P < 0.001 vs. IS; Figure 1G).

FIGURE 3 | Effect of IS (30 µM) on Nrf2 nuclear translocation in astrocytes and mixed glial cells (C). Nuclear translocation of Nrf2 was detected using immunofluorescence confocal microscopy. Scale bar, 10 µm. Blue and green fluorescences indicate localization of the nucleus (DAPI) and Nrf2, respectively. Analysis was performed by confocal laser scanning microscopy. Effect of IS (15-60 µM) on HO-1 expression (D) in astrocytes and mixed glial cells. Cellular fluorescence was evaluated using fluorescence-activated cell sorting analysis (FACSscan; Becton Dickinson) and elaborated with Cell Quest software. Values are expressed as mean fluorescence intensity (n = 9). •••, ••, and • denote P < 0.001, P < 0.01, and P < 0.05 vs. control. ** and * denote P < 0.01 and P < 0.05 vs. astrocytes.

IS Induced p65 NF-kB Nuclear Translocation in C6 Cells

Nuclear factor-kB p65 was labeled with a green fluorescence to track the effect of IS (30 µM) on NF-kB activation. p65 nuclear translocation was increased after IS treatment (Figure 2A). The IS-induced p65 NF-kB nuclear translocation was inhibited by DPI and NAC and, to a lesser extent, by CH-223191 (Figure 2A). To evaluate the possible involvement of NF-kB in the ROS release induced by IS, we analyzed ROS production in the presence of the NF-kB inhibitor PDTC (200 µM; Figure 2B). PDTC significantly reduced IS-induced ROS production (P < 0.05 vs. IS alone; Figure 2B).
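As a side note on how the group comparisons reported in these results could be computed, the sketch below illustrates a one-way ANOVA followed by Bonferroni-corrected pairwise tests, as described in the Data Analysis section. It is only an illustration: the mean-fluorescence values are hypothetical placeholders, and the original analysis may have been carried out with different software.

```python
# Minimal sketch of the statistical comparison described in the Data Analysis section:
# one-way ANOVA followed by Bonferroni-corrected pairwise comparisons.
# The mean-fluorescence values below are hypothetical placeholders, not data from this study.
from itertools import combinations
from scipy import stats

groups = {
    "control":  [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7, 10.1],
    "IS 15 uM": [13.2, 12.9, 13.5, 13.1, 13.4, 12.8, 13.0, 13.3, 13.6],
    "IS 30 uM": [16.0, 15.6, 16.4, 15.9, 16.2, 15.8, 16.1, 16.3, 15.7],
    "IS 60 uM": [19.1, 18.7, 19.4, 19.0, 19.2, 18.9, 19.3, 18.8, 19.5],
}

# Global test across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Pairwise t-tests with a Bonferroni-corrected significance threshold
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t_stat, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {p:.3g} ({verdict} at corrected alpha = {alpha_corrected:.4f})")
```

With such clearly separated placeholder groups every pairwise comparison comes out significant; with real data, the corrected threshold (0.05 divided by the number of comparisons) determines which differences are reported as significant.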
IS Influenced Oxidative Stress and Pro-inflammatory Parameters in Primary Astrocytes and Mixed Glial Cell Cultures

Primary astrocytes and mixed glial cell cultures are a less artificial cell culture system. In these cells, IS (15-60 µM) also induced significant ROS production (P < 0.001 vs. control; P < 0.05 vs. astrocytes alone; Figure 3A). IS led to an increase of nitrotyrosine formation (P < 0.05 vs. control, P < 0.05 vs. astrocytes alone; Figure 3B) and to a reduction of Nrf2 translocation (Figure 3C) and HO-1 expression (P < 0.05 vs. control; Figure 3D). In primary mixed glial cell cultures the response was much more prominent, indicating the contribution of the microglial cells. Comparable effects were observed on AhR and p65 nuclear translocation in astrocytes and mixed glial cells (Figure 4).

FIGURE 4 | Effect of IS (30 µM) on AhR (A) and p65 (B) nuclear translocation in astrocytes and mixed glial cells. Nuclear translocation of AhR and p65 was detected using immunofluorescence confocal microscopy. Scale bar, 10 µm. Blue and green fluorescences indicate localization of the nucleus (DAPI) and AhR and p65, respectively. Analysis (n = 9) was performed by confocal laser scanning microscopy.

Moreover, under the same experimental conditions, we observed a significant increase in iNOS and COX-2 expression in astrocytes and mixed glial cell cultures treated with IS (15-60 µM; P < 0.001 vs. control and P < 0.01 vs. astrocytes alone; Figures 5A,B). IS treatment also induced a significant production of TNF-α in astrocytes and mixed glial cell cultures and of IL-6 in mixed glial cell cultures (P < 0.05 vs. control and P < 0.05 vs. astrocytes alone; Figures 5C,D).

FIGURE 5 | Effect of IS (15-60 µM) on iNOS (A) and COX-2 (B) expression by astrocytes and mixed glial cells. Cellular fluorescence was evaluated using fluorescence-activated cell sorting analysis (FACSscan; Becton Dickinson) and elaborated with Cell Quest software. Values are expressed as mean fluorescence intensity (n = 9). Effect of IS (15-60 µM) on TNF-α (C) and IL-6 (D) release by astrocytes and mixed glial cells (n = 9). Cytokine release was assessed by ELISA assay and expressed as pg/ml (n = 9). •••, ••, and • denote P < 0.001, P < 0.01, and P < 0.05 vs. control. ***, **, and * denote P < 0.001, P < 0.01, and P < 0.05 vs. astrocytes.

IS Increased Cellular Death in Neuronal Cultures

In order to investigate the effect of IS on neuronal death, we used cortical and hippocampal neuron cultures. Our results showed that both cortical and hippocampal neurons are susceptible to IS-induced neuronal cell death in a dose-dependent fashion (P < 0.05 vs. control, P < 0.005 vs. hippocampal neurons; Figure 6).

IS Enhanced NO, TNF-α, and IL-6 Levels in Mice Serum and Increased COX-2 and Nitrotyrosine Expression in Brain and Kidney

To match our in vitro findings with the in vivo situation, we injected mice with IS (800 mg/kg), which resulted in a significantly higher IS serum concentration (79.66 ± 1.67 µM vs. 0.55 ± 0.00 µM, P < 0.001 vs. control). Total serum nitrite increased significantly in IS-treated mice compared to control mice (89.76 ± 9.98 vs. 55.99 ± 9.69 µM, P < 0.05). TNF-α and IL-6 evaluation indicated that IS induced a weak increase in TNF-α serum levels (18.91 ± 1.77 vs. 17.43 ± 2.02 of the control group; P = NS) but a significant increase in IL-6 serum levels (23.99 ± 3.38 vs. 12.93 ± 2.48 of the control group; P < 0.05). Our results indicate COX-2 immunoreactivity in a subset of neurons in the brain tissue in normal as well as treated mice.
In the treated mice, more cells showed an immunoreactivity which extended to degenerating neurons and blood vessels. Also in the kidney, we observed a strong COX-2 staining, primarily in the glomeruli (Figure 7). Similarly, the anti-nitrotyrosine antibody stained neurons of the treated mice, while we saw only weak staining in the control group. Also in the kidney, the immunostaining of the glomeruli was stronger in the treated mice compared to control (Figure 7).

IS Enhanced Not Only Kidney Cell Damage but Also Neuronal Cell Damage

In accordance with previous observations (Ichii et al., 2014), we found atrophic glomeruli with thickening of the Bowman's capsule and mesangial matrix and aspects of segmental solidification after IS treatment (Figure 7). The tubular epithelial cells showed granular-fatty degeneration and sometimes vacuoles, and were arranged around amorphous and hypereosinophilic protein aggregates ("casts"; Figure 7). We observed interstitial edema, dilatation of renal arterioles and small hemorrhagic areas (Figure 7). We could also observe IS effects in the brain. Histological evaluation showed some neurons with angular cytoplasmic margins, eosinophilic cytoplasm and pyknotic nuclei (neuronal necrosis). Around the necrotic neurons were slightly hyperplastic glial cells (satellitosis).

DISCUSSION

In this study, we provide evidence that IS can directly influence glial function and can cause neuronal damage, implicating IS directly in the pathways by which CKD influences cognitive functions. Cognitive impairment in CKD patients is one of the main complications despite pharmacological and dialytic treatment (Vanholder et al., 2001; Raff et al., 2008; Di Micco et al., 2012). We were able to show that IS induces oxidative stress and inflammatory mediators in glial cells. Oxidative stress and inflammation are essential for defense against injuries but, if not properly regulated, they are capable of initiating various deleterious effects (Libetta et al., 2011). Oxidative stress increases together with the progression of CKD and it correlates with the level of renal function and, therefore, also with IS levels. In addition, the antioxidant systems are also compromised in CKD patients and worsen with the progression of renal failure (Morena et al., 2002). Thus, the control of inflammation and oxidative stress is of particular importance in uremic syndrome. Our observations point to specific pathways underlying the oxidative stress and inflammation induced by IS in glial cells: (i) NADPH oxidase and glutathione levels, (ii) AhR and NF-kB activation, (iii) a reduced Nrf2-mediated antioxidant response, (iv) activation of pro-inflammatory mediators, and (v) alteration in glial proliferation/cell cycle. Moreover, we find a direct link between IS and neuronal damage, linking IS to neurotoxicity. We found that IS induced a significant and concentration-related ROS release from cultured C6 astrocytes and primary astrocytes, and to an even greater extent in mixed glial cell cultures. Mechanistic studies revealed that both NADPH oxidase, as evaluated by the presence of DPI, and GSH homeostasis, as evaluated by NAC addition, are involved in IS-induced ROS release. These results are in accordance with previous studies reporting that IS interfered both with pro- and anti-oxidant factors in endothelial cells (Dou et al., 2007; Yu et al., 2011a), vascular smooth muscle cells (Mozar et al., 2011), kidney cells, and macrophages (Adesso et al., 2013).
It has also been reported that NAD(P)H oxidase levels increased in CKD patients and in experimental models of renal insufficiency (Fortuno et al., 2005; Castilla et al., 2008). IS is a potent AhR ligand (Schroeder et al., 2010). In the brain, AhR is ubiquitously expressed, including in the cerebral cortex, hippocampus, and cerebellum (Lin et al., 2008). It has been implicated in sensorimotor and cognitive dysfunctions caused by oxidative stress or excitotoxicity (Kim and Yang, 2005; Williamson et al., 2005; Lin et al., 2008). Our results indicated that IS activates AhR in astrocytes, which likely promoted further oxidative stress. This result fits with data reporting an AhR-mediated oxidative stress pathway in human vascular endothelial cells (Watanabe et al., 2013). Interestingly, our data give further insight into the IS-induced AhR-ROS pathway in astrocytes, since treatment with IS in the presence of DPI is able to reduce AhR activation. Here we report that IS also activated NF-κB, and previous studies indicated a reciprocal interaction between NF-κB and Nrf2 (Bolati et al., 2013). IS-induced ROS is able to induce NF-kB activation. This activation could, in turn, be responsible for the Nrf2 downregulation (Bolati et al., 2013), because the interaction of p65 with Keap1 promotes reduction of the Nrf2 protein level through Nrf2 ubiquitination (Yu et al., 2011b). Moreover, the upregulation of p53 expression induced by IS-induced NF-κB activation is involved in the suppression of Nrf2 mRNA expression (Faraonio et al., 2006). Our data indicate a cross-talk between ROS and NF-kB because DPI and NAC treatment were able to reduce NF-kB activation, and NF-kB inhibition was able to interfere with ROS release in astrocytes. Nrf2 is a transcription factor responsible for the regulation of the cellular redox balance and of protective antioxidant and phase II enzymes (Kensler et al., 2007). Nrf2 binding to the antioxidant response element (ARE) induces the regulation of some antioxidant proteins such as HO-1 and NQO1 (Kansanen et al., 2013). We found that IS also reduced HO-1 and NQO1 expression, thus further contributing to a decrease of antioxidant defenses and to oxidative stress-induced damage in CNS cells.

FIGURE 7 | Histologic and immunohistochemical findings of brain and kidneys in treated mice (IS column). (A) (1) Brain; normal tissue from control mouse. (2) Brain; neuronal pyknosis associated with mild satellitosis. (3) Kidney; normal tissue from control mouse. (4) Kidney; atrophic glomeruli and severe vacuolar degeneration of tubules with proteinaceous amorphous material and hypereosinophilic concretions within the lumen (arrows); Hematoxylin and Eosin (HE) stain. (B) (1) Brain; normal tissue from control mouse. (2) Brain; strong immunoreactivity for COX-2 antibody in degenerating neurons (arrows) from treated mouse. (3) Kidney; normal tissue from control mouse. (4) Kidney; strong immunoreactivity for COX-2 antibody in blood vessels of the glomeruli (arrows) from treated mouse. Immunohistochemistry (HRP method). (C) (1) Brain; normal tissue from control mouse. (2) Brain; immunoreactivity with the anti-nitrotyrosine antibody is intensely detected in the neurons of a treated mouse (arrows). (3) Kidney; normal tissue from control mouse. (4) Kidney; strong immunoreactivity in blood vessels of an atrophic glomerulus (arrow) from treated mouse. Immunohistochemistry (HRP method). Data are from two independent experiments and represent mean ± SEM (n = 5-10 per group).
Astrocytes are the most abundant glial cells in the CNS and have a number of important physiological properties related to the homeostatic control of the extracellular environment. Astrocytes provide structural, trophic, and metabolic support to neurons, modulate synaptic activity, and are involved in multiple brain functions, contributing to neuronal development. Moreover, astrocytes actively participate in processes triggered by brain injuries, aimed at repairing brain damage (Vernadakis, 1996; Marchetti, 1997). It has recently been reported that astrocytes contribute actively to various forms of dementia (Rodríguez et al., 2009), and disturbances in the complex neuron-glia interaction are increasingly recognized as an important pathophysiological mechanism in a wide variety of neurological disorders, including neurodegeneration (Erol, 2010). In response to a variety of stimuli and pathological events, astrocytes and microglia become activated. Microglia, activated earlier than astrocytes, promote astrocytic activation by releasing inflammatory mediators and ROS. On the other hand, activated astrocytes facilitate the activation of distant microglia and, in some cases, also inhibit microglial activities (Tremblay et al., 2011; Kingwell, 2012; Liu et al., 2012). Thus, astrocyte-microglia interactions are important in regulating both physiological and pathological conditions. We could demonstrate that stimulation with IS resulted in higher levels of ROS and pro-inflammatory mediators in mixed glial cell cultures. Activated microglia and astrocytes also release a variety of cytokines, chemokines, and toxic factors, such as TNF-α, IL-6, and NO, all of which may lead to neuronal toxicity and result in the aggressive neuronal apoptosis that has been reported as the most crucial event in neuronal loss in neurological diseases (D'Amelio et al., 2010; Allaman et al., 2011; Heneka et al., 2015; Varley et al., 2015). In this study, we observed that IS significantly increased cytokine production and the expression of pro-inflammatory enzymes such as iNOS and COX-2 in astrocytes and mixed glial cells; together with oxidative stress, these conditions can promote the neuronal death involved in neurodegeneration. Moreover, we observed that IS increased neuronal cell death in cortical and hippocampal neurons, thus supporting its role in neuronal loss. Taken together, we show that AhR is important for the IS-induced activation of NF-κB, ROS and pro-inflammatory cytokine production, and the downregulation of cell-protective factors such as Nrf2, HO-1, and NQO1 in glial cells. Some of these pathways can be specifically blocked. Evidence of IS-induced effects on the CNS is also supported by our in vivo experiments: IS induced histological brain alterations and the expression of oxidative stress and inflammatory markers, such as nitrotyrosine and COX-2. Until now, there was little information about the potential effects of IS on CNS cells. Taken together, our results highlight the effect of IS on CNS homeostasis. This study supports the hypothesis that IS significantly contributes to the neurological complications observed in CKD, and that its levels could serve not only as a marker of disease progression but also as a pharmacological target in the cognitive dysfunction observed in CKD.
AUTHOR CONTRIBUTIONS

SA drafted the manuscript, participated in research design, and carried out the experiments; TM contributed to conceiving and designing the experiments and to the writing of the manuscript; SC and BR contributed to the writing of the manuscript and to designing the experiments; MC participated in the in vivo experiments; OP performed the histological and immunohistochemical analysis; GA contributed to data analysis; AP contributed to data analysis and to the writing of the manuscript; SM conceived and designed the research and the experiments and contributed to the writing of the manuscript. All authors read and approved the final manuscript.
The Nemestrinidae in Egypt and Saudi Arabia (Brachycera: Diptera)

The Nemestrinidae are a widespread group of moderate to large-sized, rather stout flies. All known larvae of these flies are internal parasitoids of nymphs and adults of grasshoppers and of larvae of scarabaeid beetles, and they have the potential to be used as biocontrol agents. All known Egyptian and Saudi Arabian nemestrinid taxa are systematically catalogued in the present study. A total of 13 species, classified in only 2 genera, Nemestrinus (subfamily Nemestrininae) and Trichopsidea (subfamily Falleniinae), were investigated. Twelve of these species are represented in Egypt, of which 5 are represented in Saudi Arabia as well. Two of the treated species, Nemestrinus ater (Olivier) and N. rufipes (Olivier), are newly recorded herein from Saudi Arabia. Only one species, Trichopsidea costata (Loew), was recorded exclusively from Saudi Arabia. An updated classification, taxonomic data, world and local distributions with collection dates, and coloured photographs of some species are provided. It is hoped that the results of this study will provide the basis for systematic studies and faunal analyses in future works on Nemestrinidae. It seems likely that further species will be discovered with more research involving a variety of collecting methods.

Thirteen nemestrinid species belonging to 2 genera and 2 subfamilies were represented in both Egypt and Saudi Arabia. Two of these species are newly recorded herein from Saudi Arabia (Table 1). No previous studies on Nemestrinidae were carried out in Saudi Arabia; however, Steyskal and El-Bialy (1967) published a list of Egyptian Diptera including Nemestrinidae, and El-Hashash et al. (2021) studied one genus, Nemestrinus, taxonomically in Egypt. Moreover, some species were described from Egypt in other miscellaneous studies, such as Olivier (1810), Wiedemann (1828), Macquart (1840), and Efflatoun (1925). Egypt and Saudi Arabia are two neighbouring Arabian countries situated at the junction of the Afrotropical and Palaearctic biogeographic regions. The faunas in both countries are mainly Palaearctic, except for the south-eastern corner of Egypt (Gebel Elba) (El-Hawagry et al. 2018) and the south-western district of Saudi Arabia, south of the Tropic of Cancer (El-Hawagry et al. 2017), which are mainly Afrotropical. The present study is one in a series of studies on different families of Diptera aiming to catalogue the entire order in both Egypt and Saudi Arabia.

Methods

Previous studies concerning the nemestrinid flies in Egypt and Saudi Arabia, in addition to material deposited in Egyptian and Saudi Arabian museums or collected by the authors, were the main sources for the present study. Different collecting methods were used, including sweeping nets, Malaise traps, pitfall traps, and light traps; however, the majority of specimens were collected by sweeping nets, and only two specimens of Trichopsidea costata were collected by pitfall trap and light trap, one specimen by each. The classification of Papavero and Bernardi (2009) is adopted in the present study, in which the extant genera of Nemestrinidae are classified in 5 subfamilies: Atriadopinae, Cyclopsideinae, Falleniinae, Hirmoneurinae, and Nemestrininae. The classification of species within genera follows Richter (1988). Taxonomic information such as type species, type localities, and synonymies was mainly obtained from Richter (1988).
However, world and local distributions and collection dates of species were obtained from different relevant literature, in addition to local museums and/or collected specimens. These sources are listed in square brackets at the end of each section. In the sections on localities and dates of collection, the 8 known Egyptian ecological zones (Coastal Strip (CS), Eastern Desert (ED), Fayoum, Gebel Elba (GE), Lower Nile Valley & Delta (LNVD), Sinai, Upper Nile Valley (UNV), and Western Desert (WD)) were adopted. However, there are no evident ecological zones in Saudi Arabia, so the administrative divisions (also known as regions or provinces) were used instead, namely Al-Baha, Al-Jawf, Al-Madinah, Al-Qaseem, Asir, Eastern Province, Hail, Jazan, Makkah, Najran, Northern Frontier, Riyadh, and Tabuk. Localities within each Egyptian ecological zone or Saudi Arabian administrative region are alphabetically arranged and written after a colon following each zone or region, and are then followed, between parentheses, by the collection dates. Coordinates of nemestrinid localities in Egypt and Saudi Arabia are listed (Table 2).

Note. Egyptian records of this species were taken from an old list of species preserved in ESEC. However, these records seem to be doubtful, and we could not check them. The collection in ESEC is currently closed for unknown reasons, and the material is thought to have been abandoned there. [Sources: Macquart (1840), Richter (1988), Lichtwardt (1909), Abdu and Shaumar (1985), Richter (1988)]

Note. This species is recorded herein for the first time from Saudi Arabia and the Afrotropical Region, considering the south-western district of Saudi Arabia as affiliated to the Afrotropical Region.

Discussion

Only 5 species of Nemestrinidae were treated in the present study as recorded from Saudi Arabia. This number is still low and does not represent the real fauna of the family in this large country. However, this low diversity of species should be interpreted cautiously, since the family, like many other dipterous families, seems to lack sampling effort in Saudi Arabia, and extensive faunistic and systematic studies are required. On the other hand, comprehensive surveys by the late Efflatoun Bey, his coworkers, and their followers started in Egypt more than 100 years ago (El-Hawagry et al. 2020). These surveys resulted in a considerable number of nemestrinid flies pinned and preserved in the Egyptian insect collections. El-Hashash et al. (2021) synonymized Nemestrinus abdominalis Olivier (1810) and Nemestrinus fascifrons (Bigot 1888) with Nemestrinus ater (Olivier, 1810). However, they did not check the types of these 3 species and based their conclusions almost entirely on original descriptions and/or some specimens preserved in EFC. They assumed that these specimens had been identified by the late Efflatoun Bey as N. fascifrons and N. ater. They stated that all specimens were of one sexually dimorphic species, as males were identified as N. fascifrons and females as N. ater. Consequently, they synonymized N. fascifrons with N. ater based on these assumed identifications by Efflatoun. However, these specimens are not types, and there were no labels in the box or under any specimen to indicate who identified them. Thus, these identifications are doubtful and may be wrong. Likewise, there are no specimens of N. abdominalis preserved in any Egyptian insect museum that could be checked. Consequently, we cannot adopt these synonymies without checking the types, which are not available to us.
Our viewpoint agrees with that of Sack (1933) and Paramonov (1945), who keyed the 3 species and clearly differentiated between them using identifiable features. Lichtwardt (1909) and Bequaert (1938) synonymized Nemestrinus ruficornis (Macquart, 1840) with Nemestrinus rufipes (Olivier, 1810). However, Sack (1933), Paramonov (1945), and Richter (1988) considered it a separate valid species. El-Hashash et al. (2021) adopted the first opinion and considered the 2 species as synonyms without checking the type material or any other material of N. ruficornis, basing their view only on the original descriptions. Types of this species were not available to validate its classification. Consequently, we cannot adopt this synonymy either. We hope the results of this study may provide the basis for systematic studies and faunal analyses in future works on Nemestrinidae. It seems likely that further species will be discovered with more research involving a variety of collecting methods.

Conclusions

In the present study, the family Nemestrinidae was catalogued in both Egypt and Saudi Arabia. The study revealed that 13 nemestrinid species belonging to 2 genera, Nemestrinus and Trichopsidea, and 2 subfamilies, Nemestrininae and Falleniinae, are represented in the two countries. Two of these species, Nemestrinus ater (Olivier) and N. rufipes (Olivier), are newly recorded from Saudi Arabia.
Influence of support surfaces on the distribution of body interface pressure in surgical positioning

Abstract

Objective: to evaluate the interface pressure (IP) of support surfaces (SSs) on bony prominences. Method: a quasi-experimental study with repeated measures on each SS. Twenty healthy adult volunteers participated in the study. The participants were placed in the supine position on a standard operating table for evaluation of IP on the bony prominences of the occipital, subscapular, sacral, and calcaneal regions using sensors. Seven evaluations were performed for each bony prominence: one on a standard operating table, and the others on tables containing SSs made of viscoelastic polymer, soft foam, or sealed foam. Descriptive statistics and analysis of variance were used to analyze the data. Results: the mean IP was higher on the viscoelastic polymer-based SS compared to the other SSs (p<0.001). The mean IP was relatively lower on the density-33 sealed foam and the density-18 soft foam. In addition, this variable was comparatively higher in the sacral region (42.90 mmHg) and the calcaneal region (15.35 mmHg). Conclusion: IP was relatively lower on foam-based SSs, especially on the density-18 soft foam and the density-33 sealed foam. Nonetheless, IP was not reduced on the viscoelastic polymer SS compared to the control SS.

Introduction

Support surfaces (SSs) are specialized devices, overlays, pads, and integrated systems that redistribute body pressure. These devices are designed to control pressure, shearing, and fabric friction while maintaining the microclimate or other therapeutic functions (1). The redistribution of body pressure, especially on bony prominences, is the primary safety characteristic of positioning materials (2), which aim to prevent complications such as pressure ulcers (PUs) (3) and compartment syndrome (4). The etiology of PUs involves, among other factors, interface pressure (IP), characterized by compression of soft tissues between the bony prominences and the surfaces on which patients lie. Exposure to IP over prolonged periods decreases tissue perfusion and oxygenation of the skin and deeper layers. In view of this causal relationship, the present study used IP as a criterion for assessing PU risk (5-8). The literature does not indicate an acceptable threshold for IP. However, there is evidence that the mean capillary refill pressure is 32 mmHg, and this criterion was adopted for evaluating IP (5-8), because external pressure exceeding this level may obstruct blood flow. There are gaps in knowledge on the behavior of SSs in the redistribution of IP because of delays in technological advancements in health (7), methodological limitations, and a lack of standardization in classifying SSs (1). Few studies to date have determined the IP redistribution of these materials in the surgical setting. The objective of this study is to evaluate the IP of SSs [viscoelastic polymer, sealed foams (28, 33, and 45 kg/m³), and soft foams (18 and 28 kg/m³)] on the bony prominences of the occipital, subscapular, sacral, and calcaneal regions. The viscoelastic polymer was selected because it is a static SS highly recommended for clinical surgical practice (8) and is frequently used as a test surface in laboratory studies (5). Sealed and soft foams of different densities were selected because of their potential as raw materials for producing lower-cost SSs; therefore, they may be a more cost-effective alternative for redistributing pressure on bony prominences.
The density that best distributes IP should be evaluated to provide evidence supporting decision-making for purchasing SSs.

Methods

This preliminary, interdisciplinary, quasi-experimental study was conducted in two partner research centers located in two public universities in the Triângulo Mineiro region, state of Minas Gerais, Brazil, specialized in two distinct areas of research: nursing and mechanical engineering. Measurements were performed in the research center specialized in mechanical engineering, using high-precision equipment and software, and clinical evaluation was performed by the core nursing research team. The study protocols complied with the guidelines established by the Revised Standards for QUality Improvement Reporting Excellence (SQUIRE 2.0) (9). The participants were non-randomly selected, by invitation to volunteer, from the academic community of the university in which data were collected. The initial invitation was made by e-mail sent to potential participants. The message contained information about the study objectives, the importance of participation, and the risks and benefits of participation. The inclusion criterion was being older than 18 years; chronic comorbidities were accepted as long as they were controlled. The exclusion criteria were the presence of skin lesions, impairment of bony prominences, absence of limbs, or presence of folds in the limbs. The literature does not present parameters for calculating the sample size for assessing IP. Therefore, an initial sample of 20 participants was selected, and statistical power was analyzed later. A significance level of 0.05 was adopted for estimating statistical power. Statistical power was estimated for differences in mean IP using different SSs. A power of 99% was reached within the limits of the statistical program's precision. In clinical and practical terms, there was a difference in maximum IP between the SSs, which justified not including more participants in the study.

Results

The mean age of the study participants was 28 years. There were no statistically significant differences in the mean peak IP using the D45 sealed foam compared to the SOT in the occipital and subscapular regions (Table 2). A multivariate, multiple-factor analysis was performed to assess differences in the mean peak IP between the study groups according to nutritional status (underweight, normal weight, overweight, and obese). There were no significant differences between the groups (p=0.87) (Table 3).

Discussion

The precise measurement of IP depends on several factors, including equipment calibration and the proper use and number of sensing elements per tissue area. A higher number of sensing elements per tissue area may increase measurement sensitivity. The number of sensors per tissue area in the equipment used in the present study was higher than that in other studies that used pressure mapping technologies (5, 6, 12, 13). An experimental study in Belgium mapped IP on different SSs using the ErgoCheck System detection technology, which is composed of 684 sensors (5). Another study used a detection area of 48 inches × 48 inches (6). Therefore, the sensor detection technologies used in those studies were inferior to that used in the present study.
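To make the threshold logic concrete, the sketch below shows how summary statistics might be computed from a pressure-mapping mesh: the mean and peak IP over a sensor grid, plus the fraction of sensing elements exceeding the 32 mmHg capillary refill criterion adopted in this study. The grid shape and readings are illustrative, not the actual equipment output.

```python
import numpy as np

CAPILLARY_REFILL_MMHG = 32.0  # evaluation criterion adopted in this study

def ip_summary(grid_mmhg: np.ndarray) -> dict:
    """Summarise an interface-pressure map (values in mmHg) from a sensor mesh."""
    return {
        "mean_ip_mmhg": float(grid_mmhg.mean()),
        "peak_ip_mmhg": float(grid_mmhg.max()),
        # share of sensing elements above mean capillary refill pressure
        "pct_above_threshold": float((grid_mmhg > CAPILLARY_REFILL_MMHG).mean() * 100.0),
    }

# Illustrative 32 x 32 patch of readings over a bony prominence
rng = np.random.default_rng(seed=1)
patch = np.clip(rng.normal(loc=30.0, scale=8.0, size=(32, 32)), 0.0, None)
print(ip_summary(patch))
```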
An experimental study that evaluated the pressure distribution properties of an electrophysiology laboratory surface and an operating room table used the FSA Mapping System, a mesh of 1,024 sensors with a detection area of 1920 mm × 762 mm (13). Although the number of sensors was the same as that used in the present study, the detection area of this system was 4.5 times larger, which might affect measurement sensitivity. A study conducted in the United States evaluated mean IP in the supine position using an electropneumatic sensor (14); nonetheless, this study provided no information about the dimensions of the sensor or other specifications, which limited comparisons between the technologies used. With respect to the immobilization time of the participants for measuring IP values, the methodology proposed in this study followed that of other studies, whereby immobilization time did not alter the pressure detected by the sensors (5, 15). Mean IP was relatively higher on the viscoelastic polymer SS compared to the other foams and the SOT. Studies with different research designs and outcomes did not recommend the use of viscoelastic polymers or indicated that evidence was not sufficient to make a recommendation (16-18). One study evaluated IP in the sacral and calcaneal regions on two SSs made of a three-layer common foam and a high-density foam (3.5 inches). The results indicated that there were no significant differences between the tested SSs. Mean IP in the sacral region was higher than capillary refill pressure (37.51 mmHg and 38.18 mmHg, respectively) (14). These results do not agree with our findings, in which mean IP on the different types of foam was lower than capillary refill pressure. In a cross-sectional study in the United States, the foams used were not fully characterized. Furthermore, the authors used SSs with overlapping layers, which compromised comparisons between studies (14). A study conducted in Belgium compared IP on four viscoelastic SSs; the highest IP recorded on a viscoelastic surface was 90 mmHg (13). In the present study, the highest IP in the sacral region on the viscoelastic polymer SS was 94 mmHg. The results of the present study indicated that IP was comparatively higher in the sacral and calcaneal regions on the viscoelastic polymer SS and the SOT, which corroborates the conclusions of a retrospective chart review that evaluated the factors contributing to the development of PUs in patients who underwent surgical procedures (19). An experimental study found that mean peak IP was higher in the sacral region on the Eggcrate® SS compared to the SOT (59 ± 17 mmHg, p=0.01) and a gel mattress (61 ± 27 mmHg, p=0.02). On the heels, mean peak IP was lower on the Eggcrate (70 ± 24 mmHg) compared to the SOT (122 ± 58 mmHg, p=0.02) and the gel mattress (134 ± 59 mmHg, p=0.005) (6). IP on the SOT was higher than the value found in the present study. In the calcaneal region, the results of a study conducted in the United States indicated that pressure on the heel was high on most SSs (6), which agrees with our findings and indicates the need to implement actions to relieve this pressure when this body region is elevated. There were no statistically significant differences in IP between the groups according to nutritional status. It is important to consider that nutritional status is a useful evaluation criterion adopted by many researchers but expresses only a relationship between two variables (body weight and height).
In this respect, individuals with the same nutritional status may have different body compositions (the relationship between lean body mass, fat mass, and body water volume), which may explain the absence of correlation between BMI and IP. A previous study found a positive relationship between body composition and IP and proposed a virtual reference model for the action of tension on the analyzed tissue. In that study, the stress caused by IP was more evident in the muscle layer. Furthermore, there was no relationship between the fat layer and a higher level of muscle shearing (20). In view of differences in research findings, it is necessary not only to evaluate IP but also to consider that ulcer etiology has multiple causes, including tissue tolerance to pressure and shearing, and this tolerance may be affected by microclimate (heat and humidity), nutrition, perfusion, associated diseases, and tissue condition (3). Body composition is also relevant because different types of tissue react differently to pressure. One of the limitations of the present study is the participation of healthy volunteers. Although data were collected in environmental conditions similar to those to which surgical patients are exposed, some factors related to the procedure should be considered.
Site-directed M2 proton channel inhibitors enable synergistic combination therapy for rimantadine-resistant pandemic influenza

Pandemic influenza A virus (IAV) remains a significant threat to global health. Preparedness relies primarily upon a single class of neuraminidase (NA)-targeted antivirals, against which resistance is steadily growing. The M2 proton channel is an alternative, clinically proven antiviral target, yet a near-ubiquitous S31N polymorphism in M2 evokes resistance to licensed adamantane drugs. Hence, inhibitors capable of targeting N31-containing M2 (M2-N31) are highly desirable. Rational in silico design and in vitro screens delineated compounds favouring either lumenal or peripheral M2 binding, yielding effective M2-N31 inhibitors in both cases. Hits included adamantanes as well as novel compounds, with some showing low micromolar potency versus pandemic "swine" H1N1 influenza (Eng195) in culture. Interestingly, a published adamantane-based M2-N31 inhibitor rapidly selected a resistant V27A polymorphism (M2-A27/N31), whereas this was not the case for non-adamantane compounds. Nevertheless, combinations of adamantanes and novel compounds achieved synergistic antiviral effects, and the latter synergised with the neuraminidase inhibitor (NAi), Zanamivir. Thus, site-directed drug combinations show potential to rejuvenate M2 as an antiviral target whilst reducing the risk of drug resistance.

Introduction

The 2009 H1N1 "swine 'flu" outbreak dramatically illustrated the speed at which influenza pandemics can spread in the modern era due to globalisation. Whilst not as virulent as the 1918 Spanish influenza, which claimed more than 50 million lives, swine 'flu caused increased mortality and morbidity, placing a considerable burden upon even advanced health care systems. The unexpected origin (Smith et al, 2009; Solovyov et al, 2010; Zhang & Chen, 2009) of swine 'flu precluded the rapid deployment of a vaccine, making antiviral prophylaxis the only means by which to curtail the initial stages of the pandemic. Another class of influenza antiviral, the adamantane M2 proton channel inhibitors (M2i) amantadine and rimantadine, are now clinically obsolete due to widespread resistance (Furuse et al, 2009; Zaraket et al, 2010). This is due to a near-ubiquitous S31N polymorphism within M2 (other rarer variants also occur) generating resistance at little or no associated fitness cost to the virus. Targeting rimantadine-resistant M2 has been a long-standing priority, yet progress targeting M2-N31 is limited compared to other minor variants (Drakopoulos et al, 2018; Li et al, 2017; Li et al, 2016a; Li et al, 2016b; Musharrafieh et al, 2019; Thomaston & DeGrado, 2016; Wang et al, 2013a; Wang et al, 2018; Wu et al, 2014). The majority of studies have focused upon adamantane derivatives with various chemical groups linked via the primary amine. Acidic pH promotes M2 channel activity by both enhancing tetramer formation and the subsequent protonation of conserved His37 sensor residues within the channel lumen (Pinto et al, 1992; Salom et al, 2000; Shimbo et al, 1996; Wang et al, 1995). This causes conformational shifts in adjacent Trp41 "gates" via a mechanism that remains debated (Andreas et al, 2015; Cross et al, 2012; Hong & Degrado, 2012; Hu et al, 2010; Leiding et al, 2010; Phongphanphanee et al, 2010; Pielak & Chou, 2010; Thomaston et al, 2019; Williams et al, 2016). More than twenty M2 atomic structures exist in the PDB, although none feature the full-length protein.
Instead, minimal "trans-membrane" (TM) or C-terminally extended "conductance domain" (CD) peptides have been investigated, as these regions recapitulate channel function, although the CD region possesses enhanced biological activity. Interestingly, drug-bound TM and CD structures differ with respect to adamantane binding (Schnell & Chou, 2008; Stouffer et al, 2008); TM channels harbour a single lumenal amantadine molecule, whereas CD structures bind four rimantadine molecules at membrane-exposed peripheral sites, corresponding to the region largely absent from TM peptides. The ensuing controversy remains, hampered by the poor chemical probe qualities of adamantanes and a lack of confirmatory functional studies comparing TM and CD peptides (Andreas et al, 2010; Cady et al, 2011a; Cady et al, 2010; Cady et al, 2011b; Du et al, 2009; Hu et al, 2011; Kozakov et al, 2010; Ohigashi et al, 2009; Pielak et al, 2011; Pielak et al, 2009; Rosenberg & Casarotto, 2010). In the present study, we show that both M2 binding sites are viable antiviral targets that enable synergistic M2-targeted combination therapy. In silico high-throughput screening enriched for novel compounds with predicted preference for one or other site, validated by the first comparative TM/CD peptide screen for M2-N31 channel activity. Several hits identified in vitro show antiviral activity versus pandemic H1N1 influenza A virus in the laboratory setting, comprising both modified adamantanes as well as unique scaffolds. Whilst a previously reported adamantane M2-N31 inhibitor rapidly selected resistance in culture, this did not occur for newly derived compounds. Excitingly, pairs of M2-N31 inhibitors achieved synergy, as did combining novel scaffolds with the NAi, Zanamivir. Together, these observations provide a firm basis for rejuvenating M2-N31 as a viable target for much-needed drug combinations, which should help combat the emergence of antiviral resistance.

Robust identification of specific M2-N31 inhibitors in vitro. We adapted an indirect liposome dye release assay for viroporin activity (Atkins et al, 2014; Carter et al, 2010; Foster et al, 2014; StGelais et al, 2007; Wetherill et al, 2012) for M2 CD region peptides derived from Influenza A/England/195/2009 (Eng195), a prototypical first-wave virus from the 2009 H1N1 pandemic. In addition to the wild-type Eng195 M2 harbouring N31, we included a mutated S31 peptide to allow validation with rimantadine (Figure 1a). Both peptides induced equivalent dose-dependent release of carboxyfluorescein (CF) from liposomes, and acidic pH increased M2 activity (Figure S1a, b). Critically, rimantadine only blocked the activity of Eng195 M2-S31 peptides, confirming the ability of the assay to discriminate between susceptible and resistant M2 variants. Modified adamantane compounds have been shown to inhibit M2-N31 activity (Li et al, 2017; Wang et al, 2011; Wang et al, 2013a; Wang et al, 2018; Wu et al, 2014). Thus, to validate the assay, we tested Eng195 peptides versus a small collection of similar prototypic molecules that included inhibitors of rimantadine-resistant hepatitis C virus (HCV) p7 (Foster et al, 2011). Encouragingly, of nine compounds tested, three modified adamantanes inhibited both M2-N31 and M2-S31 peptides (compounds D, H and J, Figure 1b). A further two adamantanes showed no activity (E, G), and another amiloride-related compound (L) was similarly inactive (Figure 1b). Interestingly, two amiloride-like molecules (B, K) showed activity versus M2-S31 but not N31, reminiscent of rimantadine.
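As a rough guide to how such endpoint data are typically processed, the sketch below normalises carboxyfluorescein fluorescence to a full-lysis control and expresses compound effects as percentage inhibition relative to a solvent control. The normalisation to detergent lysis and the example fluorescence values are assumptions reflecting common practice for dye-release assays, not the authors' exact pipeline.

```python
def percent_release(f_sample: float, f_baseline: float, f_total: float) -> float:
    """CF release as % of total dye, normalised to full lysis (e.g. detergent)."""
    return 100.0 * (f_sample - f_baseline) / (f_total - f_baseline)

def percent_inhibition(release_compound: float, release_control: float) -> float:
    """Channel inhibition relative to a solvent (DMSO) control."""
    return 100.0 * (1.0 - release_compound / release_control)

# Illustrative endpoint fluorescence readings (arbitrary units)
rel_ctrl = percent_release(f_sample=5200, f_baseline=800, f_total=12000)
rel_cmpd = percent_release(f_sample=2900, f_baseline=800, f_total=12000)
print(f"inhibition: {percent_inhibition(rel_cmpd, rel_ctrl):.1f} %")  # ~52 %
```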
Lastly, the alkylated imino-sugar NNDNJ also blocked the activity of both M2 peptides, yet this compound disrupted oligomerisation, as previously shown for HCV p7 (Foster et al, 2011) (Figure 1a, S1c). D, H, J and NNDNJ displayed antiviral effects in Eng195 culture (Figure 1c and data not shown), establishing the dye release assay as a robust means of screening for M2-N31-specific inhibitors with genuine antiviral effects.

Ambiguous predicted binding modes for prototypic M2-N31 inhibitors. To gauge how novel M2-N31 inhibitors might bind the channel complex, we generated structural homology models for Eng195 M2-N31 and S31 based upon the PDB: 2RLF CD structure from the Chou laboratory (Schnell & Chou, 2008) (Figure 2a, c). This template includes both potential rimantadine binding sites. As noted previously, N31 caused splaying of the trans-membrane domain (TMD) compared with the more lumenally oriented S31, yet the structure also differed throughout the helical bundle, consistent with a reported destabilising effect for N31 (Pielak et al, 2009) (Figure 2a, c). Surprisingly, docking of both rimantadine and novel inhibitors led to distinct binding poses at the lumenal and peripheral sites for N31 and S31 models. For the wild-type Eng195 M2-N31 model, predicted binding at the peripheral site (defined by D44, R45 and F48) consistently involved H-bonding to D44, whereas the orientation of inhibitors altered in S31 models (Figure 2a, b, d and Table S1). Similarly, M2-N31 lumenal interactions occurred near the N-terminal neck of the bundle, close to N31 and V27 (Figure 2a, b, d and Table S1), whilst binding within M2-S31 models resembled previous structures, occurring further inside the TMD, just above H37. Such N/S31-dependent "flipping" within the channel lumen has been observed previously (Wu et al, 2014). Based upon these observations, we reasoned that such promiscuous, pleiotropic binding may result from the chemical properties of adamantane derivatives, and that molecules with improved molecular fit might exhibit less ambiguous predicted binding modes. However, it would also be necessary to validate site preferences in vitro to generate meaningful structure-activity relationships (SAR) for improved M2-N31 inhibitors.

Determination of M2-N31 inhibitor binding preference in vitro. TM peptides lack the majority of the C-terminal extension present within CD peptides that contains the proposed peripheral binding site. Thus, we hypothesised that lumenally targeted compounds would inhibit both TM and CD peptides, whereas those with a peripheral site preference would show activity only against CD peptides. The dye release assay was therefore adapted to include Eng195 TM peptides, accounting for their reduced biological activity compared to CD (Figure 3a, S1a). We first tested a published M2-N31 inhibitor, M2WJ332 (Wang et al, 2013b), a modified adamantane with activity against full-length M2-N31 that was shown to bind within the lumen of a TM domain NMR structure (A/Udorn/307/1972 (H3N2) M2-S31N, PDB: 2LY0; Figure 3b). Surprisingly, M2WJ332 blocked the activity of Eng195 M2-N31 CD peptides but had no TM-specific activity under standard assay conditions (up to the highest concentrations tested), indicating a strong functional preference for the peripheral binding site in vitro despite its location within the 2LY0 structure (Wang et al, 2013b). Accordingly, docking of M2WJ332 within the 2RLF-based Eng195 homology model resembled other adamantanes by generating poses within both the lumen and the peripheral site (Figure 3d, S2a).
Whilst structural and biophysical studies have previously compared lumenal and peripheral binding (Cady et al, 2011b; Rosenberg & Casarotto, 2010), to our knowledge this represents the first functional evidence supporting the relevance of peripherally targeted M2 ligands in vitro. Importantly, this suggests that M2-N31 possesses two potential binding sites to exploit for antiviral discovery.

Screening of novel M2-N31 inhibitors enriched for lumenal or peripheral site preferences. The next step was to enrich in silico screening libraries for compounds predicted to bind preferentially at one or other M2 site, removing as much ambiguity as possible through extensive attrition of compound characteristics. Using the Eng195 2RLF-based model as a template, grids corresponding to each site were targeted by an in silico high-throughput screen (eHiTS, SimBioSys Inc.), based upon a random chemical library and a second input ligand pool derived through evolution of the compound D molecular structure (ROCS, OpenEye Scientific) (Figure 4a). eHiTS scores ranked the top 1000 hits for each site, and docking scores were cross-validated using a second software package (SPROUT, Keymodule Ltd.). Short-listing of compounds involved an attrition protocol directed by agreement between the two binding scores, compound molecular weight, specific binding pose and drug-like qualities. Prioritisation of compounds focused upon site selectivity rather than merely predicted potency. Details of the resultant compounds are summarised in Table 1. Compound screens for activity versus TM and CD peptides at 40 µM yielded multiple hits (defined as a ≥30% reduction in channel activity for at least one M2 peptide at 40 µM) corresponding to lumenal and peripheral site preferences. Interestingly, more lumenally targeted hits were identified than peripheral ones, and a third class of compound displayed specificity for TM rather than CD peptides, e.g. compound P6.4 (Table 1). A minority of compounds displayed functional site preferences contradicting docking predictions, yet rational enrichment of ligand pools in silico had significantly augmented the number of M2-N31-targeted hits, with ~50% of compounds displaying M2-inhibitory activity in vitro compared with a hit rate of <1% from a random prospective screen (Hansson et al, 2014). We next titrated exemplar compounds from each class to ensure that specificity corresponded to that observed in the 40 µM screen and to the predicted interactions (Figure 4b, S2b). Interestingly, whilst lumenal compounds (e.g. L1.1) displayed equivalent activity against both TM and CD peptides, some peripherally targeted ligands (e.g. DP9) also began to exert measurable effects versus TM peptides at higher concentrations. This included DL7, which despite predictions of lumenal binding displayed a clear preference for CD peptides at lower concentrations (Figure 4b). We hypothesise that this occurs due to inefficient interactions with the partial peripheral binding site present at the C-terminus of TM peptides. Moreover, titrations confirmed the phenotype of TM-specific ligands (e.g. P6.4) (Figure 4b). Following cytotoxicity testing in MDCK culture (Figure S3), selected compounds were taken forward into longer-term selection experiments in Eng195-infected cultures (DP9 excluded, as a precise IC50 was not determined). Eng195 replication was monitored by periodic titration and sequencing of M2 RNA in supernatants. The only change in the M2 sequence was detected from the first analysis of M2WJ332-selected supernatants (day 5); a U>C change at position 80 (M2 cDNA sequence) led to a Val27>Ala mutation (GUC>GCC, V27A) (Figure 6b).
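The codon assignment quoted here (GUC>GCC giving Val27>Ala) can be back-checked against the standard genetic code; a minimal sketch, using only the two codons in question:

```python
# Minimal excerpt of the standard RNA codon table (only the codons used here)
CODON_TABLE = {"GUC": "Val", "GCC": "Ala"}

def describe_mutation(codon_wt: str, codon_mut: str, position: int) -> str:
    """Render an amino-acid substitution from wild-type and mutant codons."""
    return f"{CODON_TABLE[codon_wt]}{position}{CODON_TABLE[codon_mut]}"

print(describe_mutation("GUC", "GCC", 27))  # Val27Ala, i.e. V27A
```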
This polymorphism became enriched over time, becoming the dominant species by day 14 (Figure 6b). Sequencing of plaque-purified virus from day 5 supernatants confirmed the presence of V27A within 7/7 M2WJ332-selected plaques, whereas 9/9 L1.1- and 5/5 rimantadine-selected plaques retained the wild-type M2-N31 sequence (Figure 6b). Accordingly, far fewer plaques were derived from normalised L1.1 supernatants compared to M2WJ332 or rimantadine (Figure S4), and these were eliminated by limited titration of the compound. By contrast, M2WJ332-selected supernatants still retained multiple plaques (~30% of the DMSO control) at much higher concentrations (80 µM), likely reflecting the proportion of mutant virus (~30-40%, see below) within the bulk population. Lastly, virus could only be expanded from L1.1 plaques at ≤20 µM inhibitor, with cytopathic effects (CPE) taking at least 48 h to manifest. By contrast, M2WJ332 or rimantadine plaques readily expanded under 80 µM inhibitor, with CPE evident by 24 h. To investigate further whether V27A was potentially linked to M2WJ332 resistance, the fold increase in titre was determined under selection (80 µM compound) for passages six and seven. Eng195 under rimantadine, Zanamivir (known to rapidly select resistance (Correia et al, 2015; LeGoff et al, 2012)) and M2WJ332 achieved similar fold increases compared to DMSO controls, whereas both L1.1 and DL7 significantly suppressed viral replication, leading to much reduced titres compared to input (Figure 6c). We then introduced an evolutionary bottleneck at passage eight to enrich for any minor resistant variants present within bulk populations, normalising inocula to a multiplicity of infection (MOI) of 0.001. After a further six passages, output titres (passage 14) again revealed a significant reduction in L1.1-selected Eng195 titre, whereas DL7-selected supernatants had recovered to a similar range as controls (Figure 6d). Finally, deep sequencing of IAV genomes was performed, comparing passage 5 and 14 supernatants to investigate minor M2 variant populations and mutations occurring elsewhere in the Eng195 genome (Figure 6e). Only M2WJ332-selected virus showed changes in the M2-N31 sequence compared to controls, with V27A increasing from ~40 to ~80% abundance between the two time points. Additional minor changes also occurred at position 31 (N31S/I), with another change located outside of the CD region (E70K). Low-prevalence changes also occurred in the HA protein of M2WJ332-selected virus at passage 14, namely K226E, Y454H/S and N461D. L1.1-selected virus also showed a low-prevalence change at HA Y454H, along with changes in PB2 D161N and M1 P55S. No variation distinct from controls was evident for DL7-selected virus at passage 5 or 14. Taken together, V27A was the only relevant polymorphism significantly enriched during chronic culture with novel M2-N31 inhibitors. This strongly suggests that Eng195 M2-A27/N31 confers resistance to M2WJ332.
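Two pieces of arithmetic recur in this passage: normalising inocula to a fixed MOI and expressing replication as a fold increase in titre. A minimal sketch with illustrative numbers (the cell count and titres are assumptions, not the study's values):

```python
def inoculum_volume_ml(moi: float, n_cells: float, titre_pfu_per_ml: float) -> float:
    """Volume of virus stock required to infect n_cells at the requested MOI."""
    return moi * n_cells / titre_pfu_per_ml

def fold_increase(titre_out_pfu_per_ml: float, titre_in_pfu_per_ml: float) -> float:
    """Fold change in infectious titre across a passage."""
    return titre_out_pfu_per_ml / titre_in_pfu_per_ml

# e.g. infecting ~1e6 MDCK cells at MOI 0.001 from a 1e6 PFU/ml stock
print(inoculum_volume_ml(0.001, 1e6, 1e6))  # 0.001 ml of stock required
print(fold_increase(5e6, 1e3))              # 5000-fold increase over input
```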
Synergistic antiviral effects using M2-N31-targeted inhibitor combinations. M2-N31-specific inhibitors with distinct binding properties provide the opportunity for antiviral combinations that could not only improve therapy but also reduce the likelihood of resistance. Combinations of M2WJ332, DP9, DL7 and L1.1 were titrated in Eng195 MDCK plaque reduction assays, and antiviral effects were assessed using MacSynergy software (results corroborated using "Compusyn"). Remarkably, combinations of M2WJ332 with either L1.1 or DP9 yielded synergistic reductions of viral titre (Figure 7a). M2WJ332 combined with L1.1 showed increased synergy proportionate to both inhibitor concentrations. However, synergy between M2WJ332 and DP9 only occurred in the lower M2WJ332 range and increased with DP9 concentration. By contrast, combinations involving the DL7 compound resulted in antagonism, whether combined with a lumenally (L1.1) or peripherally (M2WJ332) targeted partner (Figure 7b). Lastly, L1.1 also achieved synergistic antiviral effects when combined with the NAi, Zanamivir (Figure 7c, S5), supporting that drug combinations between classes should be achievable.

Discussion

This work lays the foundation for future combination therapies targeting rimantadine-resistant influenza A viruses, which could form a vital addition to the current pandemic antiviral repertoire. We have moved beyond the controversy surrounding two potential binding sites within M2 channel complexes, instead showing that synergistic antiviral therapy is achievable using compounds targeting both regions, which ultimately should reduce the incidence of new resistance mutations. Finally, whilst adamantanes still contribute to the M2-N31 chemical toolbox, we describe multiple distinct scaffolds that should provide a start-point for the next steps in antiviral drug discovery. The question of how amantadine and/or rimantadine block the activity of M2 from sensitive influenza strains has been debated since two contrasting atomic structures were published in 2008 (Schnell & Chou, 2008; Stouffer et al, 2008). However, these and other ensuing studies generally compared TM with CD peptides, which is likely to bias where prototypic adamantanes bind. This is due to the C-terminal extension in CD peptides inducing a more compact helical bundle that is less favourable for lumenal interactions compared with the much broader structure seen for TM peptides, which also lack the majority of the peripheral binding site. Lipid bilayers have rarely been used in biophysical or other studies, likely due to the technical difficulties associated with membrane bilayers compared with membrane-mimetic detergents. Notably, the lipidic 2L0J membrane bundle is less compact than other CD structures, and the orientation of the C-terminal extension, comprising the basic helices that form the core of the peripheral binding site, differs significantly in 2L0J compared to the detergent-solubilised template used for the present study, 2RLF (Schnell & Chou, 2008; Sharma et al, 2010). This may explain why fewer peripherally targeted compounds were selected compared to the lumen; accordingly, re-docking peripheral compounds into 2L0J using eHiTS results in altered binding poses and affinity scores (Figure S6). To our knowledge, the present study is the first to compare TM and CD peptides using an in vitro functional assay. Assuming that the presence/absence of the C-terminal extension discriminates peripheral binding, the identification of CD peptide-specific hits is the first direct evidence that the peripheral binding site represents a druggable target for M2. Whilst we cannot rule out that CD and TM peptides adopt altered conformations within liposomes, the equivalence seen for lumenally targeted compounds suggests that this is likely not the case, at least for the trans-membrane region.
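Returning to the combination data above: MacSynergy-type analyses score a dose-response matrix against the Bliss independence model, flagging doses where the observed combined effect exceeds what the two drugs would achieve independently. The sketch below illustrates that calculation with a toy dose matrix standing in for real plaque-reduction data; positive residuals indicate synergy, negative residuals antagonism.

```python
import numpy as np

def bliss_synergy(fa: np.ndarray, fb: np.ndarray, fab: np.ndarray) -> np.ndarray:
    """
    fa, fb: fractional inhibition (0-1) of each drug alone, broadcast over a
    dose matrix; fab: observed fractional inhibition of the combination.
    Returns observed minus Bliss-expected inhibition at each dose pair.
    """
    expected = fa + fb - fa * fb  # Bliss independence
    return fab - expected

# Toy example: 3 doses of drug A (rows) x 3 doses of drug B (columns)
fa = np.array([0.1, 0.3, 0.5])[:, None]
fb = np.array([0.2, 0.4, 0.6])[None, :]
observed = np.array([[0.35, 0.55, 0.75],
                     [0.55, 0.70, 0.85],
                     [0.70, 0.82, 0.92]])
print(bliss_synergy(fa, fb, observed))  # positive entries suggest synergy
```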
Many other M2-N31-targeted studies, such as that describing M2WJ332 (Wang et al, 2013b), combine TM-derived structures with compound efficacy data from full-length protein, thereby presuming that the two are directly related. Nevertheless, discovering that M2WJ332 showed a functional preference for the peripheral binding site was unexpected. It is conceivable that M2WJ332 interacts differently with Eng195 M2 compared to the protein in the reported TM structure (A/Udorn/307/1972 (H3N2) M2-S31N, PDB: 2LY0). Moreover, 2LY0 was solved using relatively harsh detergent conditions (Chipot et al, 2018; Wang et al, 2013b), rather than lipid, and in the presence of millimolar rather than micromolar inhibitor concentrations. Nonetheless, whether or not Eng195 and Udorn M2 are directly comparable as proteins, the present study serves as precedent for effective influenza A virus inhibitors targeting the M2-N31 periphery in at least one influenza A virus strain. Interestingly, a third class of compounds showed in vitro preference for TM peptides. However, with the exception of a mild antiviral effect for P6.4 (Table 1, Figure 5a), none of these displayed activity in Eng195 culture, making the relevance of these compounds unclear. Eng195 chronic culture in the presence of M2WJ332 led to the rapid evolution of a V27A change within the M2 sequence. Both plaque purification and monitoring the titre of selected bulk populations supported that this change confers specific resistance. V27A is a known amantadine resistance mutation (Barniol-Xicota et al, 2017; Hu et al, 2017), albeit less prevalent than S31N. Given the ambiguity surrounding amantadine/rimantadine binding and the nuances of TM and CD peptide detergent structures, it is unclear how such resistance mutations relate to amantadine or other inhibitor binding. Amantadine binds proximal to H37 in the central portion of the trans-membrane helical bundle (Cady et al, 2010), meaning that S31N and V27A are too distant for these mutations to affect direct contacts with the drug. However, lumenally docked compound D and M2WJ332, as well as the 2LY0 structure, predict direct contact with V27 and N31 (Figure 2d, 3a). V27 is also proposed to form a secondary gate/constriction at the neck of the channel lumen (Yi et al, 2008), meaning that V27A might promote a more open-form channel complex. For Eng195, the channel is already less compact due to S31N pushing apart the trans-membrane helices (Figure 2a). Directly related to this alteration in structure, both polymorphisms also mediate resistance to peripherally bound rimantadine via the destabilisation of channel complexes (Pielak et al, 2009). Hence, it is possible neither to reinforce nor to contest the in vitro data on M2WJ332 peripheral binding based upon the location of V27A. Notably, naturally occurring M2-N31/A27 double-variant isolates exist, implying a low genetic barrier in nature as well as in cell culture (Durrant et al, 2015). Interestingly, other minor M2 variants selected by M2WJ332 included the revertant N31S, which mediates resistance to another published M2-N31-specific adamantane derivative; a more dramatic N31D mutation also mediated resistance to a dual M2-N31/S31 inhibitor (Ma et al, 2016). However, changes in M2, including at N31, did not occur in L1.1-selected Eng195; L1.1 forms predicted interactions with N31 but not V27, as it sits lower in the lumen, interacting with the H37 tetrad (Figure S2).
Accordingly, this compound maintained suppression of Eng195 bulk titre throughout the course of the experiment. DL7 appeared to behave similarly to L1.1 at early times, but titres recovered following the introduction of the evolutionary bottleneck at passage eight. No sequence changes relative to controls occurred at passage five or fourteen, making it unclear whether resistant variants were initially selected. Irrespective of whether resistance may or may not arise to new M2-N31-specific compounds, the most clinically important observation from this study is that both lumenal and peripheral binding sites are viable drug targets that allow combinations of inhibitors to be used for therapy. Moreover, synergistic, rather than additive, antiviral effects were achieved for two of the four combinations tested, and L1.1 behaved similarly when combined with Zanamivir. Hence, whilst the compounds herein represent the initial stages of hit identification for both binding sites, the indications are that further development will eventually enable double, triple, or even expanded therapeutic regimens upon inclusion of other agents. Such strategies are applied to antiviral treatment for other highly variable RNA viruses, and there is growing consensus that such approaches represent the best way forward for influenza A virus. Furthermore, combinations should combat potential shortcomings of individual agents in terms of lower genetic barriers to potential resistance. Overall, future exploitation of both druggable sites within M2-N31 using specific inhibitors has considerable potential to rejuvenate this essential ion channel protein as a drug target, providing an important additional resource to combat the emergence of future pandemics.

Materials and Methods

Peptide synthesis and reconstitution. Peptides (Eng195 M2-N31 CD and TM).

In silico screening. Open-access small-molecule libraries were used for an unbiased molecular binding study, with eHiTS (SimBioSys Inc.) used to dock compounds onto the two pre-defined binding regions of the Eng195 homology model. Ranked by eHiTS score, the top 1000 hits at each site were manually assessed for their binding pose and drug-like qualities, resulting in seven predicted lumenal binding compounds (L1-L7) and six peripheral binding compounds (P1-P6) being selected for testing. In addition, a biased screen was carried out utilising a rapid overlay of chemical structures (ROCS) approach, centred on compound D. ROCS (OpenEye Scientific) software was used, and the top 1000 hits were docked against the homology model using eHiTS. Compound docking at both sites was validated using SPROUT (Keymodule Ltd.) software. A protocol of attrition was carried out to select DL and DP compounds, focussing on docking scores, molecular weight and specific interactions with the M2 tetramer. Full details are listed in Section 3.5.1. Briefly, compounds were selected based on agreement between the two binding scores, molecular weight and specific interactions with the protein. Analogues of selected compounds were found via the online tool eMolecules (www.emolecules.com). Selected compounds were subsequently docked against the 2RLF-based Eng195 M2 homology model using eHiTS. Data in Figure 7 are from MacSynergy; Compusyn data were comparable and are available upon request.

Selection in culture using M2-specific compounds and plaque purification of single variants.
For serial passage using increasing compound concentrations, MDCK cells were seeded into 6-well plates 4 h before the initial infection with influenza virus, which was carried out as described above at an MOI of 0.001 with 2.5 µM compound. At 24 hpi, virus-containing media was removed; 1/10th volume was used to infect freshly seeded MDCK cells as a blind passage, and the remainder was snap frozen. This process was repeated, each time increasing the concentration of compound present in the media two-fold, until 80 µM was reached. At selected time points, the titre of viral supernatants was determined via plaque assay. These supernatants could then be used at MOI 0.001, with fresh 80 µM compound, in subsequent infections. For plaque purification, MDCK cells were seeded in 12-well plates 4 h before infection. Virus was diluted 1:250 in SF media containing between 5 and 80 µM compound and used to infect cells for 1 h, before cells were set under overlay media containing 2 µg/ml TPCK, 0-80 µM compound and agar. At 72 hpi, agar plugs were picked and placed in 300 µl SF media for 2 h before being used to infect fresh MDCK cells, in the presence or absence of compound (5-80 µM), for 1 h at 37 °C, 5% CO2. Once the infectious supernatant was removed, it was replaced with SF media + 1 µg/ml TPCK and 0-80 µM compound, and plates were returned to 37 °C, 5% CO2. Once >40% CPE was observed, infectious supernatant was clarified prior to vRNA extraction.

Extraction, purification and sequencing of virion RNA (vRNA). vRNA was extracted from clarified supernatants using a QIAamp Viral RNA Mini Kit (QIAGEN) according to the manufacturer's instructions. The resultant eluted vRNA was kept at −20 °C for short-term storage or transferred to −80 °C for long-term storage. vRNA was synthesised into first-strand cDNA using SuperScript® III (SSCIII) (Invitrogen™) and an Eng195 segment 7-specific forward primer (sequences available upon request). A negative control of vRNA but no SSCIII was included in each experiment. cDNA was amplified via polymerase chain reaction (PCR) using the proofreading Phusion® high-fidelity (HF) polymerase (New England Biolabs). Reactions were heated to 98 °C for 30 s, followed by 35 cycles of the following steps: denaturation at 98 °C for 10 s, annealing at 48 °C for 30 s and extension at 72 °C for 40 s, with a final incubation at 72 °C for 7 min. Amplified cDNA was purified using a QIAquick PCR Purification Kit (QIAGEN) according to the manufacturer's instructions, with eluted DNA concentrations determined using a NanoDrop spectrophotometer and DNA visualised by Tris-acetate-EDTA-buffered agarose gel electrophoresis. Samples were stored at −20 °C.

Direct sequencing of virus-derived cDNA. Standard dsDNA sequencing was conducted using the Mix2Seq kit (Eurofins Genomics), with forward internal primer Eng195_s7_Fint (5'-GGCTAGCACTACGGC-3') or reverse primer Flu_s7_R2 (5'-AGTAGAAACAAGGTAGTTTTTTACTCTAGC-3').

Next-generation sequencing of total genomic viral RNA. vRNA was reverse transcribed by SuperScript III (Invitrogen) and amplified by Platinum Taq HiFi polymerase (Thermo Fisher) with influenza-specific primers (Zhou et al, 2009) in a one-step reaction. Library preparation was performed using a Nextera kit (Illumina). Libraries were sequenced on an Illumina MiSeq using a v2 kit (300 cycles; Illumina), giving 150-bp paired-end reads. Reads were mapped with BWA v0.7.5 and converted to BAM files using SAMtools (1.1.2).
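The final mapping step can be reproduced with a short pipeline. The sketch below drives BWA-MEM and SAMtools from Python using current-style invocations; note that the study used BWA v0.7.5 and SAMtools 1.1.2, whose exact command-line options may differ, and the file names are placeholders.

```python
import subprocess

def map_reads(ref_fasta: str, r1: str, r2: str, out_bam: str) -> None:
    """Map paired-end reads with BWA-MEM, then sort and index the BAM."""
    subprocess.run(["bwa", "index", ref_fasta], check=True)
    bwa = subprocess.Popen(["bwa", "mem", ref_fasta, r1, r2],
                           stdout=subprocess.PIPE)
    # modern samtools auto-detects SAM on stdin; older releases need a
    # separate `samtools view` conversion step before sorting
    subprocess.run(["samtools", "sort", "-o", out_bam, "-"],
                   stdin=bwa.stdout, check=True)
    bwa.stdout.close()
    if bwa.wait() != 0:
        raise RuntimeError("bwa mem failed")
    subprocess.run(["samtools", "index", out_bam], check=True)

# map_reads("eng195_ref.fa", "sample_R1.fastq", "sample_R2.fastq", "sample.sorted.bam")
```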
Variants were called using QuasiBAM, an in-house script at Public Health England.

Table 1 legend: "?" indicates potential binding to the lumen or to the partial peripheral binding site, based upon compound titrations; bold text indicates differences from predicted binding. Several compounds were tested versus Eng195 in culture (80 µM), and the order-of-magnitude titre reduction across at least three assays is shown. Finally, IC50 values were determined for the four compounds selected for synergy experiments.
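Where IC50 values are quoted, they are conventionally obtained by fitting a Hill-type dose-response curve to normalised titre or plaque data. A minimal sketch with invented data points (the real titrations live in the figures, not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fraction of control response remaining at a given inhibitor concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 40.0, 80.0])    # uM, illustrative
resp = np.array([0.95, 0.90, 0.75, 0.50, 0.28, 0.12, 0.05])  # normalised to DMSO
(ic50, slope), _ = curve_fit(hill, conc, resp, p0=(10.0, 1.0))
print(f"IC50 ~ {ic50:.1f} uM, Hill slope ~ {slope:.2f}")
```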
An Internal-Electrostatic-Field-Boosted Self-Powered Ultraviolet Photodetector Self-powered photodetectors are of significance for the development of low-energy-consumption and environment-friendly Internet of Things. The performance of semiconductor-based self-powered photodetectors is limited by the low quality of junctions. Here, a novel strategy was proposed for developing high-performance self-powered photodetectors with boosted electrostatic potential. The proposed self-powered ultraviolet (UV) photodetector consisted of an indium tin oxide and titanium dioxide (ITO/TiO2) heterojunction and an electret film (poly tetra fluoroethylene, PTFE). The PTFE layer introduces a built-in electrostatic field to highly enhance the photovoltaic effect, and its high internal resistance greatly reduces the dark current, and thus remarkable performances were achieved. The self-powered UV photodetector with PTFE demonstrated an extremely high on–off ratio of 2.49 × 105, a responsivity of 76.87 mA/W, a response rise time of 7.44 ms, and a decay time of 3.75 ms. Furthermore, the device exhibited exceptional stability from room temperature to 70 °C. Compared with the conventional ITO/TiO2 heterojunction without the PTFE layer, the photoresponse of the detector improved by 442-fold, and the light–dark ratio was increased by 8.40 × 105 times. In addition, the detector is simple, easy to fabricate, and low cost. Therefore, it can be used on a large scale. The electrostatic modulation effect is universal for various types of semiconductor junctions and is expected to inspire more innovative applications in optoelectronic and microelectronic devices. Introduction With the rapid development of the Internet of Things (IoT), ensuring a large-scale distributed power supply has become a challenge because of the numerous sensors and detectors used in IoT systems. Conventional power sources, such as batteries, have limited life and cause environmental pollution. Furthermore, it is difficult to manage, replace, and maintain power sources. The self-powered device without an external power supply is a promising solution for the development of a green and sustainable IoT [1,2], which has attracted considerable attention. Ultraviolet (UV) photodetectors, as basic units of photoelectric information systems and IoT, are widely used in fields such as fire warning, astronomical exploration, environmental monitoring, chemical/biological sensing, and optoelectronic storage [3][4][5][6][7]. Generally, most UV photodetectors, such as UV phototubes, and semiconductor-based diodes [8,9], require external power sources for operation. For example, the working voltage of the ultraviolet phototube reached as high as one hundred volts. Although a p-n junction device can function without a power source, its dark current is high, and the detection performance is limited. Therefore, a reverse bias is typical for the p-n junction Figure 1a displays the design of the proposed IEFB-SP ultraviolet photodetector. The photodetector was a four-layer film of ITO/TiO 2 /PTFE/Cu. Among them, ITO/TiO 2 functioned as a conventional n-type heterojunction. The PTFE was an electret layer that generated static electricity on the interface and also functioned as a high-resistance layer to reduce the dark current. ITO and Cu were used as the electrodes of the detector. The working principle of the device is displayed in Figure 1b,c. 
On the one hand, a difference in electron affinities between ITO and TiO2 produces a built-in electric field in the ITO/TiO2 junction. On the other hand, the PTFE, owing to its high electronegativity, gains electrons from TiO2, and a built-in electrostatic potential is generated at the TiO2/PTFE interface. Based on first-principles calculations and molecular dynamics simulations, the intermolecular transferred charges at the TiO2/PTFE and TiO2/PDMS interfaces were quantitatively analysed (see Note S1 in the supplementary information), and the variation of the contact-electrification properties with intermolecular distance is shown in Figures S1 and S2, revealing an existing electrostatic field at the TiO2/PTFE and TiO2/PDMS interfaces. Experimentally, based on the coupling of contact electrification and electrostatic induction, a triboelectric nanogenerator (TENG) composed of TiO2 and PTFE was fabricated, and the charge transfer properties between the TiO2 and PTFE surfaces were investigated when they were in contact. As shown in Figures S3 and S4, the surface of PTFE is negatively charged, and the surface of TiO2 is positively charged after contact. Without UV light, photo-generated carriers were not developed in the TiO2 layer, and the device output a small short-circuit dark current (Figure 1d) because of the weak contact potential difference of the ITO/TiO2 heterojunction and the high impedance of the PTFE layer. With UV illumination, photo-generated carriers were produced in TiO2. Under the simultaneous actions of the built-in electric field of the ITO/TiO2 heterojunction and the electrostatic field of the TiO2/PTFE interface, photo-generated carriers were accelerated to separate and diffuse, generating a boosted photovoltaic effect. The negative potential of PTFE remarkably promoted the optical response and considerably improved the output performance. The equivalent circuit diagrams of the detector in the light-off and light-on states are illustrated in Figure 1e,f. Material Characterizations Figure 2a displays the scanning electron microscopy (SEM) images of the ITO/TiO2/PTFE/Cu film and a photograph of the fabricated device. The thickness of the TiO2 film was 236 nm, and that of the PTFE film was 150 nm. The optical transmission spectra of ITO on the glass substrate and of the sample after growth of the TiO2 film are displayed in Figure 2d. 
The cut-off absorption wavelength of ITO was 303 nm. After the growth of the TiO2 film, the cut-off absorption edge was red-shifted to 344 nm. The bandgap of the ITO film was 3.75 eV, and the bandgap of the TiO2 film was 3.44 eV, as calculated from the optical absorption spectra. Figure 2e displays the X-ray diffraction pattern of the TiO2 film. Taking the XRD standard cards PDF #89-4920 and PDF #21-1272 as references, the prepared TiO2 film was a mixture of anatase and rutile. Figure 2b,c display the surface SEM images of the TiO2 and PTFE films. The atomic force microscopy (AFM) images of the TiO2 surface are displayed in Figure 2f. The root mean square roughness, average roughness, and maximum roughness of the TiO2 surface were 2.26, 1.81, and 15.5 nm, respectively. A certain surface roughness of TiO2 was conducive to the generation of the electrostatic field at the interface between the TiO2 and PTFE films. Photodetection Performance of IEFB-SP Device The ultraviolet detection performance of the IEFB-SP device at 0 V is displayed in Figure 3. A UV light-emitting diode (LED) with a wavelength of 365 nm, fixed on a linear motor, periodically illuminated the detector by linear motion, producing turn-on and turn-off states (see Video S1). In the measurements, the humidity of the air environment was between 30% RH and 40% RH, and the UV LED moved to illuminate the device, stayed for 3 s, and then moved away at a speed of 1 m/s. The proposed device exhibited excellent self-powered photodetection performance, as displayed in Figure 3. Without UV light, due to the high resistance of the PTFE layer, the dark current (Id) of the IEFB-SP device was extremely low, 8.03 × 10^-12 A (see Figure 3a). With UV irradiation, the photocurrent (Ip) was approximately 2.00 × 10^-6 A, and the light-to-dark current ratio reached 2.49 × 10^5. The current-voltage (I-V) curves (Figure S8) show an obvious increase in the magnitude of the photocurrent at zero bias in the light state compared with the dark state. As illustrated in Figure 3b, the photovoltage (Vp) and dark voltage (Vd) of the IEFB-SP device were 0.02 V and 8.52 × 10^-5 V, respectively. 
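As a quick arithmetic check of the on-off ratio quoted above (values taken from the text; the snippet is illustrative, not part of the paper):

    I_p = 2.00e-6    # photocurrent under 365 nm UV illumination, A
    I_d = 8.03e-12   # dark current, A
    print(f"on/off ratio = {I_p / I_d:.3g}")   # ~2.49e+05, matching the reported 2.49 x 10^5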
To figure out the functions of the PTFE layer, we measured the internal resistances of the photodetectors using the impedance matching method and, for comparison, the photodetection performance of a conventional ITO/TiO2/Cu heterojunction without a PTFE layer, as displayed in the supplementary information (see Note S3 and Figures S6 and S7); the performance comparisons are listed in Table S1. Without PTFE, the average photocurrent (Ip) and dark current (Id) of the ITO/TiO2/Cu device at 0 V are 4.76 × 10^-6 A and 3.67 × 10^-6 A, respectively, and the on-off ratio is only 1.30. Connecting the TiO2 and Cu, the I-V curve (see Figure S7c) reveals an excellent ohmic contact between the Cu electrode and the TiO2, so the main contact potential difference was generated by the ITO/TiO2 interface. In contrast, the light-dark ratio of the device with PTFE was increased by 8.40 × 10^5 times, and the photovoltage increment (Vp - Vd) was increased by 665 times, as presented in Note S4. It is interesting that the internal resistance of the device with PTFE increased by ~1.3 × 10^5 times, but the photocurrent only decreased by 2.4 times. This is because the strong built-in electrostatic field significantly improved the photocurrent. Therefore, the PTFE has two functions: its high internal resistance greatly reduces the dark current, and its built-in electrostatic field remarkably enhances the photovoltaic effect. Thus, the IEFB-SP device achieved the combination of a low dark current and a large photocurrent. 
To further quantify the photocurrent properties enhanced by the PTFE, the transferred charge was measured, as displayed in Figure 3c. As the UV light periodically irradiated the device, the transferred charge increased stepwise with the number of irradiations, and the amount of charge transferred for each illumination was 5.97 µC. The current calculated by I = q/t was 1.99 × 10^-6 A, which is consistent with the detected current. For the device without PTFE, the amount of charge transferred in one light-switching cycle was 14.08 µC, as shown in Figure S9. When we reversed the positive and negative connections between the ITO and Cu electrodes, the current value remained unchanged, but the current direction was reversed (see Figure S10), indicating that the measured signals were produced by the absorbed light. Figure 3d displays the photocurrent dependence on the UV intensity in the range of 0.29-15.94 mW/cm^2. The photocurrent increased gradually with the UV power density. Even at a low optical power density of 0.29 mW/cm^2, the device still exhibited a superior response. The responsivity R and the detectivity D* of the self-powered photodetector were evaluated using the following expressions [37,38]: R = (I_P - I_D)/(P × A) and D* = R × A^(1/2)/(2e × I_D)^(1/2), where I_P is the photocurrent, I_D is the dark current of the device, P is the incident optical power density, A is the effective detection area, and e is the elementary charge. At a 365-nm wavelength, the incident light power density was 0.64 mW/cm^2, and the beam size was 1 mm^2. The responsivity and the detectivity of the IEFB-SP photodetector were 72.41 mA/W and 4.51 × 10^12 Jones, respectively. The photocurrent and responsivity as a function of illumination intensity are displayed in Figure 3e. The photocurrent followed the power law [39,40] I_P ∝ P^β, where the β value is 1.03, indicating that the photocurrent increased almost linearly with the incident optical power. The responsivity exhibited excellent stability with increasing optical power. Figure 3f displays the responsivity of the IEFB-SP device at various light wavelengths. At optical wavelengths of 325, 365, 396, 457, 532, and 607 nm, the responsivity of the device was 19.10, 72.24, 28.93, 14.14, 4.20, and 2.35 mA/W, respectively. Leaving aside the absorption of the ITO layer in the UV region, the device exhibits a good UV/visible rejection ratio. 
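The detectivity figure quoted above can be checked directly from the reported responsivity, detection area and dark current. A minimal sketch (the relation D* = R·√A/√(2e·I_d) is the standard definition assumed here; the variable names are mine, not the paper's):

    from math import sqrt

    e = 1.602e-19          # elementary charge, C
    A = 0.01               # effective detection area: 1 mm^2 expressed in cm^2
    P = 0.64e-3            # incident power density at 365 nm, W/cm^2
    I_d = 8.03e-12         # dark current, A
    R = 72.41e-3           # reported responsivity, A/W

    # Specific detectivity in Jones (cm Hz^1/2 W^-1).
    D_star = R * sqrt(A) / sqrt(2 * e * I_d)
    print(f"D* = {D_star:.2e} Jones")          # ~4.5e12, matching the reported 4.51e12

    # Photocurrent implied by R at this power density: I_p - I_d = R * P * A.
    print(f"implied photocurrent = {R * P * A:.2e} A")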
A photodetector testing system was used to reveal the spectral response characteristics of the device (ITO/TiO2/PTFE/Cu) and of the conventional heterojunction (ITO/TiO2/Cu) in the wavelength range of 250-700 nm (see Figure S11 in the supporting information). At the UV wavelength of 360 nm, the responsivity of our device reached as high as 76.874 mA/W, while that of the device without PTFE was only 0.174 mA/W; the responsivity increased by 442 times. The external quantum efficiency (EQE) of the photodetector is another key parameter for evaluating the photodetection performance and is expressed as EQE = R × h × c/(e × λ) [41,42], where R is the responsivity, h represents the Planck constant, c is the velocity of the incident light, e is the elementary charge, and λ is the wavelength of the incident light. The EQE was 24.54% for the proposed device at a wavelength of 365 nm. Under the illumination of a xenon lamp in the photodetector testing system, the EQEs of our detector (ITO/TiO2/PTFE/Cu) and the conventional heterojunction (ITO/TiO2/Cu) in the wavelength range of 250-700 nm are displayed in Figure S12. At an ultraviolet wavelength of 360 nm, the EQE of our device reached as high as 26.48%, whereas that of the conventional heterojunction device without PTFE was 0.06%. We also evaluated the response speed of the IEFB-SP device. The rise time (T_R90) is defined as the time required for the photovoltage to reach 90% of its maximum, and the decay time (T_D10) is the time at which the photovoltage decreases to 10% of its maximum. The rise and decay times of this photodetector were 7.44 and 3.75 ms, respectively (see Figure 4a,b), indicating a fast response. The rise and fall times of the conventional ITO/TiO2/Cu heterojunction were 0.83 s and 1 s, respectively, as displayed in Figure S13, which is slower by 111 and 267 times, respectively, than the device with PTFE. Furthermore, we assessed the temperature stability and environmental stability of the IEFB-SP device. Figures 4c and S14 show the photocurrent and dark current characteristics at various temperatures from room temperature to 70 °C, respectively. With increasing temperature, the photocurrent and dark current both increased slightly, but the light-to-dark ratio remained unchanged. Compared with a conventional semiconductor-heterojunction photodetector, it exhibited exceptionally superior temperature stability. 
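Picking up the EQE relation given earlier in this section (EQE = R·h·c/(e·λ), the standard definition assumed here), a quick check with the reported numbers (illustrative snippet, not code from the paper):

    h = 6.626e-34      # Planck constant, J*s
    c = 2.998e8        # speed of light, m/s
    e = 1.602e-19      # elementary charge, C
    lam = 365e-9       # wavelength, m
    R = 72.24e-3       # responsivity at 365 nm from the spectral data above, A/W

    EQE = R * h * c / (e * lam)
    print(f"EQE = {EQE:.2%}")   # ~24.5%, matching the reported 24.54%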
Furthermore, we irradiated the device continuously for 2 h and tested it repeatedly for 96 cycles, and its photocurrent was invariant, as displayed in Figure S15. After being placed in an air environment for 60 days, the IEFB-SP photodetector with PTFE maintained its excellent photodetection performance (see Figure 4d). Influence of Dielectric Materials We studied the influence of the dielectric layer thickness and material on the photodetection performance of IEFB-SP devices. Figure 4e displays the photocurrent properties of devices with PTFE films of thicknesses 50, 150, 300, and 800 nm. The thinner the PTFE layer, the larger the photocurrent and dark current. Thicker PTFE films not only increased the internal resistance but also reduced the effect of electrostatic induction, which resulted in a considerable decrease in the photocurrent. As the thickness increased, the photovoltage increased (Figure S16). Among the four thicknesses of PTFE membranes, the detector with a thickness of 150 nm presented the best photodetection performance. Figure 4f displays a comparison of the photocurrent properties with the dielectric layer materials PTFE, polydimethylsiloxane (PDMS), and silicone rubber. Under similar UV illumination, the photocurrents of the IEFB-SP devices with PDMS, PTFE, and silicone rubber were 118.30 nA, 1.97 µA, and 0.04 nA, respectively. The response speeds of the devices with PDMS or silicone rubber were slower than that of the self-powered photodetector with PTFE. Since the built-in electrostatic field is derived from the charge transfer between TiO2 and the dielectric materials, the ability to gain electrons from TiO2 plays a key role in the photodetection performance of the IEFB-SP photodetector, and it depends closely on the electronegativity of the materials. The higher the electronegativity of a material, the greater its ability to gain electrons from other materials, and the stronger the electrostatic field it yields. A stronger electrostatic field is generated at the interface when a material of high electronegativity is in contact with a material of high electropositivity. In the triboelectric series [43,44], among these three materials, PTFE exhibits the strongest capability for gaining electrons and the highest electronegativity [45]. Therefore, the device with PTFE has the best photodetection performance. Performance Comparison Finally, Table 1 lists a comparison of state-of-the-art photodetectors and our proposed device. Our IEFB-SP photodetector exhibited excellent comprehensive detection performance, including high optical responsivity and sensitivity, and especially a large photocurrent together with an ultra-low dark current. It is worth noting that a high-quality semiconductor junction is not needed to achieve high-performance photodetection. Moreover, universal devices can be devised for other types of junctions to improve their performance. We provide a novel strategy for the development of high-performance, easy-to-use, and low-cost photodetectors. Conclusions In summary, the charge transfer characteristics between the TiO2 and the dielectric interfaces were quantitatively analysed both from theoretical simulations and from experiments. Furthermore, a PTFE electret was introduced to remarkably boost the photovoltaic effect of a semiconductor heterojunction and thereby develop self-powered UV photodetectors. The proposed IEFB device achieved outstanding photodetection performance without a power source. 
Under the illumination of a xenon lamp, the optical responsivity of the IEFB device with PTFE was 76.87 mA/W, the specific detectivity was 4.79 × 10^12 Jones, and the EQE reached 26.48% at a wavelength of 360 nm. It also exhibited rapid rise and decay times of 7.44 and 3.75 ms, respectively. Compared with a conventional heterojunction device under the same experimental conditions, the photoresponse, light-dark ratio, rise speed, and decay speed of the IEFB-SP photodetector improved by 442, 8.40 × 10^5, 111, and 267 times, respectively. Furthermore, the IEFB-SP device exhibited excellent temperature and environmental stability: the photodetection performance remained unchanged from room temperature to 70 °C and did not degrade after 60 days of exposure to air without packaging. Overall, the IEFB-SP device achieves excellent photodetection performance without requiring high-quality junctions. It has the advantages of stable high performance, simplicity, ease of fabrication, and low cost. The strategy of using a built-in electrostatic field at the semiconductor-electret interface to modulate the photovoltaic effect is universal for various types of semiconductor junction devices and offers significant guidance for improving the performance of self-powered photodetectors. The results in this work deepen the understanding of the electrostatic effect of the dielectric interface in the micro-region and are expected to inspire more applications in the future. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12183200/s1, Supplementary Note S1: Quantitative analysis of the intermolecular transferred charges on TiO2/PTFE and TiO2/PDMS interfaces; Supplementary Note S2: Investigation of charge transfer properties in TiO2/PTFE and TiO2/PDMS interfaces through triboelectric nanogenerators (TENGs) [57]; Supplementary Note S3: The measurements of internal resistances for photodetectors through the impedance matching method; Supplementary Note S4: Calculation of the photovoltage increment for the device with PTFE; Figure S1: Results of molecular dynamics simulation and charge properties of TiO2 and PTFE molecules; Figure S2: Results of molecular dynamics simulation and charge properties of TiO2 and PDMS molecules; Figure S3: Working mechanism of the TENG composed of PTFE and TiO2 films; Figure S4: Output characteristic curves of triboelectric nanogenerators composed of ITO/TiO2 and Cu/PTFE and Cu/PDMS, respectively; Figure S5: Tauc's plot analysis of ITO and TiO2. 
Figure S6: The output power of the internal electrostatic field boosted device with different dielectric layer under different loads; Figure S7: Photodetection performances of conventional ITO/TiO 2 /Cu junction device without PTFE; Figure S8: Current-voltage (I-V) curves for the ESPB-SP photodetector under 365 nm light illumination with intensity of 15.94 mW/cm 2 and dark condition; Figure S9: The amount of transferred charges in each illumination for the photodetector without PTFE; Figure S10: Performance of ESPB-SP device with reversed the positive and negative connections between ITO and Cu electrodes; Figure S11: Comparison of the responsivity of devices with and without a PTFE layer; Figure S12: Comparison of the EQE of devices with and without a PTFE layer; Figure S13: Response speed of conventional device without a PTFE layer; Figure S14: Dark current of the ESPB-SP device from room temperature to 70 • C; Figure S15: Repeatability measurement for the ESPB-SP device with PTFE for 2 h; Figure S16: Photovoltage of the device with different PTFE thicknesses (50, 150, 300, and 800 nm); Table S1: Performance comparison of three PDs (irradiated by a UV light with the wavelength of 365 nm and the optical power density of 15.94 mW/cm 2 ). Ip/Id: the ratio of photocurrent to dark current; Vp/Vd: the ratio of photovoltage to dark voltage.; Video S1: Selfpowered detection performance of An Internal-Electrostatic-Field-Boosted Self-Powered Ultraviolet Photodetector. Conflicts of Interest: The authors declare no conflict of interest.
A comparison of liver protection among 3-D conformal radiotherapy, intensity-modulated radiotherapy and RapidArc for hepatocellular carcinoma Purpose The analysis was designed to compare dosimetric parameters among 3-D conformal radiotherapy (3DCRT), intensity-modulated radiotherapy (IMRT) and RapidArc (RA) to identify which can achieve the lowest risk of radiation-induced liver disease (RILD) for hepatocellular carcinoma (HCC). Methods Twenty patients with HCC were enrolled in this study. Dosimetric values for 3DCRT, IMRT, and RA were calculated for total dose of 50 Gy/25f. The percentage of the normal liver volume receiving >40, >30, >20, >10, and >5 Gy (V40, V30, V20, V10 and V5) were evaluated to determine liver toxicity. V5, V10, V20, V30 and Dmean of liver were compared as predicting parameters for RILD. Other parameters included the conformal index (CI), homogeneity index (HI), and hot spot (V110%) for the planned target volume (PTV) as well as the monitor units (MUs) for plan efficiency, the mean dose (Dmean) for the organs at risk (OARs) and the maximal dose at 1% volume (D1%) for the spinal cord. Results The Dmean of IMRT was higher than 3DCRT (p = 0.045). For V5, there was a significant difference: RA > IMRT >3DCRT (p <0.05). 3DCRT had a lower V10 and higher V20, V30 values for liver than RA (p <0.05). RA and IMRT achieved significantly better CI and lower V110% values than 3DCRT (p <0.05). RA had better HI, lower MUs and shorter delivery time than 3DCRT or IMRT (p <0.05). Conclusion For right lobe tumors, RapidArc may have the lowest risk of RILD with the lowest V20 and V30 compared with 3DCRT or IMRT. For diameters of tumors >8 cm in our study, the value of Dmean for 3DCRT was lower than IMRT or RapidArc. This may indicate that 3DCRT is more suitable for larger tumors. Introduction Hepatocellular carcinoma (HCC) is the third cause of cancer related death following lung and stomach cancer [1]. Resection and liver transplantation are generally regarded as curative treatments for HCC in the early stage and have shown effective results [2]. However, surgical resection accompanies high recurrence rate, and transplantation cannot be universally applicable. Now Radiotherapy technology has evolved remarkably and plays an important role in the treatment of HCC. During the past decade, improvement of survival had been observed from a high increase of radiation dose [3,4]. However, a high radiation dose to the liver would give rise to acute and late hepatic toxicity. Radiation-induced liver disease (RILD) is the most severe radiation-induced complication which may result in hepatic failure and death. The occurrence of RILD is associated with Child-Pugh grade, hepatic cirrhosis and the volume of liver receiving radiotherapy (RT). Cheng et al. [5] showed that both Child-Pugh Class B and the presence of hepatitis B virus were associated with the risk of RILD. What is more, chronic infection with HBV is responsible for 60% of HCC in Asia and Africa [6]. In Liang et al.'s study [7], the severity of hepatic cirrhosis was proved to be a unique independent predictor for RILD. Son et al. [8] suggested that the total liver volume receiving <18Gy should be greater than 800 cm 3 to reduce the risk of the deterioration of hepatic function. Therefore, the study of predicting parameters for RILD risks and sparing more normal liver during RT is essential for HCC patients. 
Now 3DCRT can irradiate the target volume accurately while minimizing the dose to normal liver and may offer a chance of long survival for some HCC patients [9]. With the development of an advanced form of 3DCRT, intensity-modulated radiotherapy (IMRT) can improve radiation plan quality by using an inverse planning algorithm to generate complex spatial dose distributions to conform more closely to the target volume. Recent years, RapidArc (RA) was developed to improve the time efficiency of dose delivery and produce highly conformal dose spacial distribution by changing treatment apertures (defined by dynamic multiple leaf collimators) and a modulated dose rate [10]. Poon et al. [11] have reported a significant improvement in sparing OAR and better conformity using RA compared with IMRT. But others may not. Kan et al. [12] showed that double-arc RA plans produced slightly inferior parotid sparing and dose homogeneity than IMRT. The purpose of this study was to compare the predicting parameters for RILD among 3DCRT, IMRT and RA for HCC. Patient selection Patients who underwent RT for primary HCC were registered and the database was retrospectively reviewed from January 2010 to March 2013 at Shandong Cancer Hospital. Eligibility criteria were as follows: (1) All patients underwent alpha-fetoprotein examination, contrastenhanced computed to tomography, and ultrasonography to confirm the diagnosis. (2) No one had cirrhosis or portal vein thrombosis; (3) All patients had centrally located lesions on the right liver lobe; (4) Computed tomography scanning included whole liver, and bilateral kidney with a 3-mm slice thickness. (5) The patients experienced transarterial chemoembolization (TACE) or not. Informed consent was obtained from all patients, and the local Ethical Board approved the study protocol (Shandong tumor prevention and control institute ethics committee). Target delineation and planning techniques The patients were fixed using vacuum casts in a supine position with both arms raised above their heads. There was no respiratory control training or other means to decrease degree of excursion of the liver. We defined the gross tumor volume (GTV) as the volume of primary tumor evident on contrast-enhanced CT images. The clinical target volume (CTV) was delineated on the basis of the GTV expanded by 5 mm. The planning target volume (PTV) was defined as the CTV with a 5-mm radial expansion and a 10-mm craniocaudal expansion to account for errors caused by the daily setup process and internal organ motion [13]. The OARs considered were healthy liver (whole liver minus PTV), kidneys, spinal cord and stomach. The target delineation was performed by the same experienced oncologist. Three sets of plans were all designed on the Varian Eclipse version 8.6.23 treatment planning system which was equipped with a Millennium multileaf collimator (MLC) (Varian) with 120 leaves. For 3DCRT and IMRT plans, all the gantry angles and radiation fields were confirmed according to the relationship of the PTVs and OARs to different situations, and the number of fields varied from 4 to 7. For RA, the plan was generated using two arcs rotating from 55°to 181°anticlockwise and from 181°to 55°clockwise with the dose rate varied between 0 MU/min and 600 MU/min (upper limit). A fixed DR of 300 MU/min was selected for IMRT and 3DCRT. All three sets of plans were designed by the same experienced physicist using 6-or 15-MV photon beams. Planning objectives and evaluation tools The total prescribe dose was 50 Gy/25f. 
The planning objectives were to cover at least 95% of the PTV with the 90% isodose, with a minimum dose > 90% and a maximum dose < 110% of the prescription. All plans were normalized to the mean dose of the PTV to avoid any bias. For the OARs, the tolerated maximum dose to the spinal cord was 40 Gy; the mean dose to the liver was limited to 30 Gy with V30 < 50%; the mean dose to the kidneys was 23 Gy (at least one side) with V20 < 20%; and the mean dose to the stomach was < 20 Gy [13,14]. For the PTV, Vx% means the volume receiving ≥ x% of the prescribed dose. For example, V95% means the volume receiving at least 95% of the prescribed dose, and V110% is used to represent the hot spot in the PTV. The conformal index (CI) was defined as CI = (Vt,ref/Vt) × (Vt,ref/Vref), where Vt was the volume of the PTV, Vref was the volume enclosed by the prescription dose line, and Vt,ref is the volume of the PTV within Vref [15]. The target homogeneity was defined as HI = D5%/D95%, where D5% and D95% are the minimum doses delivered to 5% and 95% of the PTV [16,17]. The values of HI and CI range around 1; the closer to 1, the better [18]. For the OARs, the parameters included the mean dose, the maximum dose expressed as D1%, and a set of appropriate Vx and Dy, where Vx means the volume of the OAR receiving a dose > x Gy. For example, V5 of the liver means the volume of normal liver receiving > 5 Gy and represents low-dose exposure of the normal liver; D1% of the spinal cord represents the maximum dose received by the spinal cord. In addition, the number of monitor units (MUs) per fraction and the beam-on time were also analyzed to compare the efficiency of the three sets of plans. The treatment delivery time was defined as the time recorded between beam-on for the first field and beam-off for the last field. Statistics analysis The statistical significance of differences in outcome between the three techniques was evaluated using the paired t-test. All statistical tests were two-tailed, and the software used for assessment was SPSS 13.0 for Windows (SPSS Inc, Chicago, Illinois, USA). P < 0.05 was considered significant. Patient characteristics The characteristics of the patients are summarized in Table 1. There were 16 males and 4 females, and their median age was 60 years (range, 41-65 years). The PTV was 775.39 ± 361.98 cm3 (range, 107.53-3568.03 cm3). We divided our patients into two groups according to the median value (D = 8 cm) of the tumor diameter. No PTV included the whole liver. Table 2 shows the results, as mean value ± standard deviation, for the considered parameters of the OARs. Table 3 shows the dose-volume histogram (DVH) parameters, as mean value ± standard deviation, for the PTV, MUs and delivery time. Table 4 shows the predictive parameters for RILD, as mean value ± standard deviation, of the three techniques for larger (D > 8 cm) and smaller (D ≤ 8 cm) tumors in our study. Representative results for the three techniques are shown in Figures 1 and 2. Target coverage, dose homogeneity and conformity The coverage of the PTVs in the three plans was evaluated by the prescribed dose coverage (V100%), HI and CI. In all three plans, 95% of the prescribed dose covered at least 99% of the PTV, without any significant difference between plans. The CI values of RA and IMRT were significantly better than that of 3DCRT (p < 0.05). Figures 3 and 4 (right panels) revealed similar homogeneity of the PTV for the three plans, and 3DCRT obtained the highest hot-spot volume. In Figure 3, the left panel showed that RA obtained the highest low-dose distribution in the normal liver compared with 3DCRT and IMRT, while 3DCRT obtained the highest high-dose distribution in the normal liver compared with IMRT and RA. 
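To make the CI and HI definitions above concrete, the following sketch computes both indices from plan volumes and doses. The numbers are illustrative only (not taken from this study), the function names are mine, and the two-factor form of CI follows the reconstruction given above:

    def conformal_index(v_t, v_ref, v_t_ref):
        """CI = (V_t,ref / V_t) * (V_t,ref / V_ref); values closer to 1 indicate better conformity."""
        return (v_t_ref / v_t) * (v_t_ref / v_ref)

    def homogeneity_index(d5, d95):
        """HI = D5% / D95%, as defined in the text."""
        return d5 / d95

    # Illustrative example: a 300 cm^3 PTV, 320 cm^3 enclosed by the prescription
    # isodose, of which 290 cm^3 lies inside the PTV; D5% = 52 Gy, D95% = 48.5 Gy.
    print(round(conformal_index(300.0, 320.0, 290.0), 3))   # ~0.876
    print(round(homogeneity_index(52.0, 48.5), 3))          # ~1.072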
In Figure 4, the left panel showed that the low-dose distributions for the three techniques were similar. For V20 and V30, the values of 3DCRT were higher than those of IMRT or RA, but no statistical significance was observed (Table 4). For the Dmean of the stomach and bilateral kidneys and the maximum dose received by the spinal cord (D1%), there were no significant differences. Comparison of predicting parameters for RILD between smaller and larger tumors For smaller tumors (D ≤ 8 cm), no difference was observed among the three techniques for Dmean, V20, and V30. For V5 and V10, the corresponding values are listed in Table 4; for V5, RA was higher than IMRT (p = 0.017) and 3DCRT (p = 0.019). Discussion Historically, the role of RT in HCC has always been limited by the risk of RILD. There have been efforts in the literature to identify the risk factors and the predictive parameters that indicate an increased risk of RILD after RT. In the study of Kim et al., V30 was demonstrated to be a significant parameter in patients treated with conventional fractionated RT [19]. According to Liang et al., V20 was a significant parameter in patients treated with conformal radiotherapy [20]. In our study, the V30 of the liver was significantly higher for 3DCRT compared with RA (p = 0.013) or IMRT (p = 0.002). For V20, the value of 3DCRT was also higher than that of RA (p = 0.012). For V40 in the present study, the value was higher for 3DCRT than for the other two plans, but no significant difference was observed. Therefore, these results may indicate that RA was superior to 3DCRT or IMRT with respect to the risk of RILD, in consideration of its lower V20 and V30. Regarding the issue of a larger low-dose region, a meta-analysis [21] showed that a larger low-dose volume V5 of the total lung might contribute to radiation pneumonitis. Kim et al. [22] reported that the low-dose coverage V5 and V10 of the stomach were associated with toxicity. But the potential risk of RILD caused by low-dose irradiation is unclear. In the present study, there was a significant difference in the V5 of the liver among the three techniques: RA > IMRT > 3DCRT. For V10, the value of RA was higher than that of 3DCRT (p = 0.004), while the value of IMRT was the highest (p < 0.05). These parameters should not be overlooked, and the role of V5 and V10 for RILD needs to be elucidated in further studies. There are many studies demonstrating the relationship between Dmean and RILD. Dawson et al. reported that a 5% and 50% probability of RILD in the patients treated in their analysis was associated with mean liver doses of 31 Gy and 43 Gy, respectively [23]. Cheng et al. [24] reported that the mean liver dose of patients with RILD was significantly higher than that of those without (25.04 Gy vs 19.65 Gy, p = 0.02). In consideration of the influence of PTV size on radiation tolerance [7], we divided the patients into two groups according to the median value (8 cm) of the tumor diameters. For smaller tumors (D ≤ 8 cm), no difference was observed except for a higher V5 of RA compared with IMRT (p = 0.017) and 3DCRT (p = 0.019). For larger tumors (D > 8 cm), 3DCRT achieved a lower Dmean compared with IMRT (p = 0.014) or RA (p = 0.026), but for V5, V10, V20 and V30 there were no differences. This may indicate that 3DCRT may be superior to RA or IMRT with respect to the risk of RILD, in consideration of its lower Dmean. Therefore, for larger tumors in our study, 3DCRT may be the most suitable of the three techniques. In recent years, RA has gained more interest. Many studies have shown that RA can achieve superior target coverage, better conformity, shorter treatment time and fewer MUs compared with IMRT or 3DCRT [13,14,25]. 
In the present study, among the three techniques, RA achieved a better CI and a lower V110% compared with 3DCRT. The hot spots in our study were almost all located within the tumors, so the hot spots have little influence on the comparison of the three plans. Moreover, RA had lower V20 and V30 (p < 0.05) for the liver. For V95%, V100%, the mean doses of the stomach and kidneys, and the D1% of the spinal cord, there were no significant differences among the three techniques. In addition, RA achieved the lowest MUs and the shortest delivery time, which is in line with other reports [13,14,25]. The reduction of total treatment time may improve patients' comfort on the couch, reduce the risk of intra-fraction movement and minimize organ displacement. However, for larger tumors in our study, RA and IMRT had a higher Dmean of the liver compared with 3DCRT, and treatment with RA is much more expensive than 3DCRT. Only 20 patients were enrolled in our study, which is a small sample. Furthermore, we did not combine each technique with respiratory gating, and this might result in a proportion of the liver shifting between the high- and low-dose regions during RT. Conclusion In consideration of its lower V20 and V30, lower MUs and shorter delivery time, RA may be superior to 3DCRT or IMRT in terms of the risk of RILD for right liver lobe tumors; but for larger tumors (D > 8 cm), 3DCRT had the lowest value of Dmean and may be the most suitable of the three techniques. More clinical comparisons of the predictive parameters for RILD risk among different plans are needed, and this may be beneficial to HCC patients.
{\pi} with leftovers: a mechanisation in Agda Linear type systems need to keep track of how programs use their resources. The standard approach is to use context splits specifying how resources are (disjointly) split across subterms. In this approach, context splits redundantly echo information which is already present within subterms. An alternative approach is to use leftover typing, where in addition to the usual (input) usage context, typing judgments have also an output usage context: the leftovers. In this approach, the leftovers of one typing derivation are fed as input to the next, threading through linear resources while avoiding context splits. We use leftover typing to define a type system for a resource-aware {\pi}-calculus, a process algebra used to model concurrent systems. Our type system is parametrised over a set of usage algebras that are general enough to encompass shared types (free to reuse and discard), graded types (use exactly n number of times) and linear types (use exactly once). Linear types are important in the {\pi}-calculus: they ensure privacy and safety of communication and avoid race conditions, while graded and shared types allow for more flexible programming. We provide a framing theorem for our type system, generalise the weakening and strengthening theorems to include linear types, and prove subject reduction. Our formalisation is fully mechanised in about 1850 lines of Agda. Introduction The π-calculus [28,27] is a computational model for communication and concurrency that boils concurrent processing down to the sending and receiving of data over communication channels. Notably, it features channel mobility: channels themselves are first class values and can be sent and received. Kobayashi et al. [23] introduced a typed version of the π-calculus with linear channel types, where channels must be used exactly once. Linearity in the π-calculus guarantees privacy and safety of communication and avoids race conditions. More broadly, linearity allows for resource-aware programming and more efficient implementations [36], and it inspired unique types (as in Clean [4]), and ownership types (as in Rust [25]). A linear type system must keep track of what resources are used in which parts of the program, and guarantee that they are neither duplicated nor discarded. To do so, the standard approach is to use context splits: typing rules for terms with multiple subterms add an extra side condition specifying what resources to allocate to each of the subterms. The typing derivations for the subterms must then use the entirety of their allocated resources. A key observation here is that each subterm already knows about the resources it needs. Context splits contain usage information that is already present in the subterms. Moreover, the subterms cannot be typed until the context splits have been defined. On top of that, using binary context splits means that typing rules with n subterms require n − 1 context splits, which considerably clutters the type system. An alternative approach is leftover typing, a technique used to formulate intuitionistic linear logic [24] and to mechanise the linear λ-calculus [2]. Leftover typing changes the shape of the typing judgments and includes a second leftover output context that contains the resources that were left unused by the term. As a result, typing rules thread the resources through subterms without needing context splits: each subterm uses the resources it needs, and leaves the rest for its siblings. 
The first subterm in this chain of resources immediately knows what resources it has available. In this paper, we use leftover typing to define for the first time a resourceaware type system for the π-calculus, and we fully mechanise our work in Agda [37]. All previous work on mechanisation of linear process calculi uses context splits instead [16,19,17,34,8]. We will further highlight the benefits of leftover typing as opposed to context splits in contributions and the rest of the paper. Below we present two alternative typing rules for parallel composition in the linear π-calculus: the one on the left uses context splits, while the one on the right does not, and uses leftover typing instead: Contributions and Structure of the Paper 1. Leftover typing for resource-aware π-calculus. Our type system uses leftover typing to model the resource-aware π-calculus ( § 4.3) and satisfies subject reduction (Theorem 5). In addition to making context splits unnecessary, leftover typing allows for a framing theorem (Theorem 1) to be stated and is naturally associative, making type safety properties considerably easier to reason about ( § 5). Thanks to leftover typing, we can now state weakening (Theorem 2) and strengthening (Theorem 3) for the whole framework, not just the shared fragment. This give a uniform and complete presentation of all the meta-theory for the resource-aware π-calculus. 2. Shared, graded and linear unified π-calculus. We generalise resource counting to a set of usage algebras that can be mixed within the same type system. We do not instantiate our type system to only work with linear resources, instead we present an algebra-agnostic type system, and admit a mix of user-defined resource aware algebras [21,35] ( § 4.1). Any partial commutative monoid that is decidable, deterministic, cancellative and has a minimal element is a valid such algebra. Multiple algebras can be mixed in the type system -usage contexts keep information about what algebra to use for each type ( § 4.2). In particular, this allows for type systems combining linear (use exactly once), graded (exact number of n times) and shared (free to reuse and discard) types under the same framework. 3. Full mechanisation in Agda. The formalisation of the π-calculus with leftover typing, from the syntax to the semantics and the type system, has been fully mechanised in Agda in about 1850 lines of code, and is publicly available at [37]. We have fully mechanised all meta-theory and the details of a proof of subject reduction can be found in Appendix B. We use type level de Bruijn indices [12,15] to define a syntax of π-calculus processes that is well scoped by construction: every free variable is accounted for in the type of the process that uses it ( § 2). We then provide an operational semantics for the π-calculus, prior to any typing ( § 3). This operational semantics is defined as a reduction relation on processes. The reduction relation tracks at the type level the channel on which communication occurs. This information is later used to state the subject reduction theorem. The reduction relation is defined modulo structural congruence -a relation defined on processes that acts as a quotient type to remove unnecessary syntactic minutiae introduced by the syntax of the π-calculus. We then define an interface for resource-aware algebras ( § 4.1) and use it to parametrise a type system based on leftover typing ( § 4.3). Finally, we present the meta theoretical properties of our type system in § 5. 
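As an informal illustration of the leftover-typing idea described above, the following toy Python sketch threads a usage context through a parallel composition instead of splitting it up front. It is written under my own simplifications (channel uses are counted as a single budget rather than separate input and output capabilities), and the constructor and function names are hypothetical, not the Agda mechanisation:

    # Usage context: a per-channel budget; check(p, ctx) returns the leftover context.
    def use(ctx, ch):
        if ctx.get(ch, 0) < 1:
            raise TypeError(f"channel {ch} not available")
        out = dict(ctx)
        out[ch] -= 1
        return out

    def check(proc, ctx):
        kind = proc[0]
        if kind == "nil":                 # 0 : uses nothing, leftovers = input context
            return ctx
        if kind in ("send", "recv"):      # ("send"/"recv", channel, continuation)
            _, ch, cont = proc
            return check(cont, use(ctx, ch))
        if kind == "par":                 # ("par", P, Q): feed the leftovers of P into Q
            _, p, q = proc
            return check(q, check(p, ctx))
        raise ValueError(kind)

    # x is used once by the sender and once by the receiver; no context split is written down.
    system = ("par", ("send", "x", ("nil",)), ("recv", "x", ("nil",)))
    print(check(system, {"x": 2}))        # {'x': 0}: the whole budget on x is consumed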
Notation Data type definitions (N) use double inference lines and index-free synonyms (Nat) as rule names for ease of reference. Constructors (0 and 1+) are used as inference rule names. We maintain a close correspondence between the definitions presented in this paper and our mechanised definitions in Agda: inference rules become type constructors, premises become argument types and conclusions return types. Universe levels and universe polymorphism are omitted for brevity -all our types are of type SET. Implicit arguments are mentioned in type definitions but omitted by constructors. We use colours to further distinguish the different entities in this paper. TYPES are blue and uppercased, with indices as subscripts, constructors are orange, functions are teal, variables are black, and some constructor names are overloaded -and disambiguated by context. Syntax In order to mechanise the π-calculus syntax in Agda, we need to deal with bound names in continuation processes. Names are cumbersome to mechanise: they are not inherently well scoped, one has to deal with alpha-conversion, and inserting new variables into a context entails proving that their names differ from all other names in context. To overcome these challenges, we use de Bruijn indices [12], where a natural number n (aka index ) is used to refer to the variable introduced n binders ago. That is, binders no longer introduce names; terms at different depths use different indices to refer to the same binding. While de Bruijn indices are useful for mechanisation, they are not as readable as names. To overcome this difficulty and demonstrate the correspondence between a π-calculus that uses names and one that uses de Bruijn indices, we provide conversion functions in both directions and prove that they are inverses of each other up to α-conversion. Further details can be found in Appendix A. Definition 1 (Var and Process). A variable reference occurring under n binders can refer to n distinct variables. We introduce the indexed family of types [15] VAR n : for all naturals n, the type VAR n has n distinct elements. We index processes according to their depth: for all naturals n, a process of type PROCESS n contains free variables that can refer to n distinct elements. Every time we go under a binder, we increase the index of the continuation process, allowing the variable references within to refer to one more thing. Process 0 denotes the terminated process, where no further communications can occur; process ν P creates a new channel and binds it at index 0 in the continuation process P ; process P Q composes P and Q in parallel; process x ( ) P receives data along channel x and makes that data available at index 0 in the continuation process P ; process x y P sends variable y over channel x and continues as process P . Example 1 (The courier system). We present a courier system that consists of three roles: a sender, who wants to send a package; a receiver, who receives the package sent by the sender; and a courier, who carries the package from the sender to the receiver. Our courier system is defined by four π-calculus processes composed in parallel instantiating the above three roles: we have two sender processes, send x and send y, sending data over channels x and y, respectively; one receiver process, recv z, which receives over channel z the data sent from each of the sendershence receives twice; and a courier process carry x y z, which synchronises communication among the senders and the receiver. 
The courier process first receives data from the two senders along its input channels x and y, and then sends the two received bits of data to the receiver along its output channel z. The sender and receiver roles are defined below, parametrised by the channels on which they operate. The sender creates a new channel to be sent as data, and sends it over channel c, and then terminates. Processes send x and send y are an instantiation of send c. The receiver receives data twice on a channel c and then terminates. The receiver process recv z is an instantiation of recv c. The courier role is defined below as carry x y z. It sequentially receives on the two input channels x and y, instantiated as in0 and in1, and then outputs the two pieces of received data on the output channel z, instantiated as out. Finally, we create three communication channels and compose all four processes together: the first channel is shared between the one sender and the courier, the second between the other sender and the courier, and the third between the receiver and the courier. The result is the courier system defined below. carry in0 in1 out = in0 ( ) (1+ in1) ( ) (1+1+ out) 1+0 (1+1+ out) 0 0 system = ν (send 0 ν (send 0 ν (recv 0 carry (1+1+0) (1+0) 0))) We continue this running example in § 4.3, where we provide typing derivations for the above processes and use a mix of linear, graded and shared typing to type the courier system. Operational Semantics Thanks to our well-scoped grammar in § 2, we now define the semantics of our language on the totality of the syntax. Definition 2 (Unused). We consider a variable i to be unused in P (UNUSED i P ) if none of the inputs nor the outputs refer to it. UNUSED i P is defined as a recursive predicate on P , incrementing i every time we go under a binder, and using i ≡x ( which unfolds to the negation of propositional equality on Var, i.e. i≡x → ⊥) to compare variables. Definition 3 (StructCong). We define the base cases of a structural congruence relation ∼ = as follows: The first three rules (comp−*) state associativity, symmetry, and 0 as being the neutral element of parallel composition, respectively. The last three (scope−*) state garbage collection, scope extrusion and commutativity of restrictions, respectively. In scope-ext the side condition UNUSED i Q makes sure that i is unused in Q (see Definition 2). The function lower i Q uQ traverses Q decrementing every index greater than i. In scope-comm the function exchange i P traverses P (of type PROCESS 1+1+n ) and swaps variable references i and 1+i. In all the above, i is incremented every time we go under a binder. Definition 4 (Equals). We lift the relation StructCong ∼ = and close it under equivalence and congruence in ≃ . This relation is structurally congruent under a context C[·] [32] and is reflexive, symmetric and transitive. Definition 5 (Reduces). The operational semantics of the π-calculus is defined as a reduction relation −→ c indexed by the channel c on which communication occurs. We keep track of channel c so we can state subject reduction (Theorem 5). We distinguish between channels that are created inside the process (internal), and channels that are created outside (external i), where i is the index of the channel variable. In rule comm, parallel processes reduce when they communicate over a common channel with index i. 
As a result of that communication, the continuation of the input process P has all the references to its most immediate variable substituted with references to 1+j, the variable sent by the output process i j Q. After this substitution, P [ 0 → 1+j ] is lowered -all variable references are decreased by one (and we derive the proof UNUSED 0 (P [ 0 → 1+j ])). Reduction is closed under parallel composition (rule par), restriction (rule res) and structural congruence (rule struct) -notably, not under input nor output, as doing so would not preserve the sequencing of actions [32]. Rule res uses dec to decrement the index of channel c as we wrap processes P and Q inside a binder. It is defined as expected below: dec internal = internal dec (external 0) = internal dec (external (1+n)) = external n Resource-aware Type System In § 4.1 we characterise a usage algebra for our type system. It defines how resources are split in parallel composition and consumed in input and output. We define typing and usage contexts in § 4.2. We provide a type system for a resource-aware π-calculus in § 4.3. Multiplicities and Capabilities In the linear π-calculus each channel has an input and an output capability, and each capability has a given multiplicity of 0 (exhausted) or 1 (available). We generalise over this notion by defining an algebra for multiplicities [21,35] that is satisfied by linear, graded and shared types alike. We then use pairs of multiplicities as usage annotations for a channel's input and output capabilities. Definition 6 (Algebra). A usage algebra is a ternary relation x := y · z that is partial (as not any two multiplicities can be combined), deterministic and cancellative (to aid equational reasoning) and associative and commutative (following directly from subject congruence for parallel composition). In addition, we ask that the leftovers can be computed so that we can automatically update the usage context every time input and output occurs -this is purely for usability. It has a neutral element ·-0 that is absorbed on either side, and that is also minimal (so that new resources cannot arbitrarily spring into life). It has an element ·-1 that is used to count inputs and outputs. Below we define such an algebra as a record ALGEBRA C on a carrier C. (We use ∀ for universal quantification. The dependent product ∃ uses the value of its first argument in the type of its second. The type DEC P is a witness of either P or P → ⊥, where ⊥ is the empty type with no constructors.) We sketch the implementation of linear, graded and shared types as instances of our usage algebra below. Their use in typing derivations is illustrated in Example 3. Typing Contexts We use indexed sets of usage algebras to allow several usage algebras to coexist in our type system with leftovers ( § 4.3). Definition 7 (Algebras). An indexed set of usage algebras is a type IDX of indices that is nonempty (∃IDX) together with an interpretation USAGE of indices into types, and an interpretation ALGEBRAS of indices into usage algebras of the corresponding type. We keep typing contexts (PRECTX) and usage contexts (CTX) separate. The former are preserved throughout typing derivations; the latter are transformed as a result of input, output, and context splits. Definition 8 (Type and PreCtx: types and typing contexts). A type is either a unit type (½), or a channel type (C[ t ; x ]). The unit type ½ serves as a base case for types. 
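As an illustration of Definition 6, the linear instance of the usage algebra can be sketched with two multiplicities and a three-case splitting relation: an available capability may be handed to exactly one side of a parallel composition. The proofs of the record fields required by ALGEBRA (determinism, cancellativity, associativity, commutativity, computable leftovers, the neutral and counting elements) are omitted here, and the names are ours.

-- Linear multiplicities: ·-0 corresponds to `0 (exhausted) and ·-1 to `1 (available).
data Mult : Set where
  `0 `1 : Mult

-- The splitting relation x ≔ y ∙ z of the usage algebra, restricted to
-- the linear case: `1 can never be split as `1 ∙ `1.
data _≔_∙_ : Mult → Mult → Mult → Set where
  sp-00 : `0 ≔ `0 ∙ `0
  sp-10 : `1 ≔ `1 ∙ `0
  sp-01 : `1 ≔ `0 ∙ `1

-- A graded instance would instead take Mult = ℕ with x ≔ y ∙ z whenever
-- x ≡ y + z; a shared instance has a single multiplicity ω with ω ≔ ω ∙ ω.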
The type C[ t ; x ] of a channel determines what type t of data and what usage annotations x are sent over that channel -we use the notation C 2 to stand for a C×C pair of input and output multiplicities, respectively. This channel notation aligns with [t] chan (i y ,o z ) , where y, z are the input and output multiplicities, respectively [22]. Henceforth, we use ℓ ∅ to denote the multiplicity pair ·-0 , ·-0, ℓ i for the pair ·-1 , ·-0, ℓ o for ·-0 , ·-1, and ℓ # for ·-1 , ·-1. This notation was originally used in the linear πcalculus [23,32]. A typing context PRECTX n is a length-indexed list of types that is either empty ([]) or the result of appending a type t : TYPE to an existing context (γ,t). Definition 9 (Idxs and Ctx: contexts of indices and usage contexts). A context of indices IDXS n is a length-indexed list that is either empty ([]) or the result of appending an index i : IDX to an existing context (idxs,i). A usage context is a context CTX idxs indexed by a context of indices idxs : IDXS n that is either empty ([]) or the result or appending a usage annotation pair u : USAGE 2 idx with index idx : IDX to an existing context (Γ ,u). Typing with Leftovers We present a resource-aware type system for the π-calculus based on leftover typing [2], a technique that, in addition to the usual typing context PRECTX n and (input) usage context CTX idxs , adds an extra (output) usage context CTX idxs to the typing rules. This output context contains the leftovers (the unused multiplicities) of the process being typed. These leftovers can then be used as input to another typing derivation. Leftover typing inverts the information flow of usage annotations so that it is the typing derivations of subprocesses which determine how resources are allocated. As a result, context split proofs are no longer necessary. Leftover typing also allows framing to be stated, and weakening and strengthening to cover linear types too. Our type system is composed of two typing judgments: one for variable references (Definition 10) and one for processes (Definition 11). Both judgments are indexed by a typing context γ, an input usage context Γ , and an output usage context ∆ (the leftovers). The typing judgement for variables γ ; Γ ∋ i t ; y ⊲ ∆ asserts that "index i in typing context γ is of type t, and subtracting y at position i from input usage context Γ results in leftovers ∆". The typing judgement for processes γ ; Γ ⊢ P ⊲ ∆ asserts that "process P is well typed under typing context γ, usage input context Γ and leftovers ∆". We lift the operation x := y · z and its algebraic properties to an operation (x l , x r ) := (y l , y r ) · 2 (z l , z r ) on pairs of multiplicities. The base case 0 splits the usage annotation x of type USAGE 2 idx into y and z (the leftovers). Note that the remaining context Γ is preserved unused as a leftover. This splitting x := y · 2 z is as per the usage algebra provided by the developer for the index idx. In our Agda implementation, x := y · 2 z is actually a trivially satisfiable implicit argument if x := y · 2 z is inhabited and an unsatisfiable argument otherwise. The inductive case 1+ appends the type t ′ to the typing context, and the usage annotation x ′ to both the input and output usage contexts. Example 2 (Variable reference). egVar defines a variable reference 1+0 with type C[ ½ ; ℓ i ] and usage ℓ i . We must show that this variable is well typed in an environment with a typing context γ = [] , C[ ½ ; ℓ i ] , ½ and a usage context Γ = [] , ℓ # , ℓ # . 
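The judgement exercised in Example 2 can be sketched as follows, specialised to a single usage algebra and dropping the typing context γ, payload types, and the context of indices; the paper's constructors 0 and 1+ correspond to here and there below, and the real definitions in the mechanisation are richer than this illustration.

open import Data.Nat using (ℕ; suc)

-- Usage contexts as snoc-lists of multiplicities (cf. Definition 9).
data Ctx : ℕ → Set where
  []  : Ctx 0
  _,_ : ∀ {n} → Ctx n → Mult → Ctx (suc n)

-- Γ ∋ i ⦂ y ▹ Δ : consuming usage y of the variable at index i in Γ
-- leaves the leftovers Δ; every other entry is carried across unchanged.
data _∋_⦂_▹_ : ∀ {n} → Ctx n → Var n → Mult → Ctx n → Set where
  here  : ∀ {n} {Γ : Ctx n} {x y z}
        → x ≔ y ∙ z
        → (Γ , x) ∋ zero ⦂ y ▹ (Γ , z)
  there : ∀ {n} {Γ Δ : Ctx n} {i : Var n} {x y}
        → Γ ∋ i ⦂ y ▹ Δ
        → (Γ , x) ∋ suc i ⦂ y ▹ (Δ , x)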
The VarRef constructors are completely determined by the variable index 1+ 0 in the type. The constructor 1+ steps under the outermost variable in the context, preserving its usage annotation ℓ # from input to output. The constructor 0 asserts that the next variable is of type C[ ½ ; ℓ i ], and that the usage annotation ℓ # can be split such that ℓ # := ℓ i · ℓ o -using ·-compute r to automatically fulfill the proof obligation. Definition 11 (Types: typing processes). The Types typing relation for the resource-aware π-calculus processes is presented below. For convenience, we reuse the constructor names introduced for the syntax in § 2. The inaction process in rule 0 does not change usage annotations. The scope restriction in rule ν expects three arguments: the type t of data being transmitted; the usage annotation x of what is being transmitted; and the multiplicity y given to the channel itself. This multiplicity y is used for both input and output, so that they are balanced. The continuation process P is provided with the new channel with usage annotation y , y, which it must completely exhaust. The input process in rule ( ) requires a channel chan at index i with usage ℓ i available, such that data with type t and usage x can be sent over it. Note that the index i is determined by the syntax of the typed process. We use the leftovers Ξ to type the continuation process, which is also provided with the received element -of type t and multiplicity x -at index 0. The received element x must be completely exhausted by the continuation process. Similarly to input, the output process in rule requires a channel chan at index i with usage ℓ o available, such that data with type t and usage x can be sent over it. We use the leftover context ∆ to type the transmitted data, which needs an element loc at index j with type t and usage x, as per the type of the channel chan. The leftovers Ξ are used to type the continuation process. Note that both indices i and j are determined by the syntax of the typed process. Parallel composition in rule uses the leftovers of the left-hand process to type the right-hand process. Indeed, Theorem 4 shows that an alternative rule where the resources are first threaded through Q is admissible too. Example 3 (Typing derivation (Continued)). We provide the typing derivation for the courier system defined in Example 1. For the sake of simplicity, we instantiate these processes with concrete variable references before typing them. The receiver defined by the recv process receives data along the channel with index 0, which needs to be of type C[ t ; u ] for some t and u. After receiving twice, the process ends: we must not be left with any unused multiplicities, thus u = ℓ ∅ . We will use graded types to keep track of the exact number of times communication happens. Whatever the input multiplicity of the channel, we will consume 2 of it and leave the remaining as leftovers. The sender defined by the send process sends data along the channel with index 0, which needs to be of type C[ t ; u ] for some t and u. We instantiate t (the type of data that the sender sends) to the trivial channel C[ ½ ; ω ]. As per the type of the process recv, u = ℓ ∅ . We will transmit once, thus use 1+0 output multiplicity, and leave the rest as leftovers. Agda can uniquely determine the arguments required by the ν constructor. 
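Before composing the processes, note that the distinguishing feature of Definition 11 shows up already in a two-rule fragment: the output context of the left premise of parallel composition becomes the input context of the right premise, so no extrinsic context split is ever requested. A hypothetical sketch, continuing the single-algebra simplification above (the full relation also covers ν, input, and output):

-- Fragment of the leftover typing relation for processes.
data _⊢_▹_ : ∀ {n} → Ctx n → Process n → Ctx n → Set where
  t-end : ∀ {n} {Γ : Ctx n}
        → Γ ⊢ end ▹ Γ                -- inaction consumes nothing
  t-par : ∀ {n} {Γ Δ Ξ : Ctx n} {P Q : Process n}
        → Γ ⊢ P ▹ Δ                  -- P consumes part of Γ ...
        → Δ ⊢ Q ▹ Ξ                  -- ... and Q consumes part of what is left
        → Γ ⊢ par P Q ▹ Ξ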
Dually, the courier defined by the carry process expects input multiplicities for the channels shared with send and output multiplicities for the channel shared with recv. We can now compose these processes in parallel and type the courier system. Meta-Theory We have mechanised subject reduction for our π-calculus with leftovers in 850 lines of Agda code. The meta-theory of resource-aware type systems often needs to reason on typing derivations modulo associativity in the allocation of resources. For type systems using context splitting side conditions, this means applying associativity lemmas to recompute context splits; for type systems using leftover typing it does not. As an example, the proof that comp-asssoc preserves typing proceeds by deconstructing the input derivation into P (Q R) and reassembling it as (P Q) R without the need of any extra reasoning. All the reasoning carried out in our type safety proofs is based on the algebraic properties introduced in § 4.1 -the exception to this is ·-compute r , only there for the user's convenience. We lift the operation x := y · 2 z and its algebraic properties to an operation Γ := ∆ ⊗ Ξ on usage contexts that have the same underlying context of indices. The algebraic properties of the algebras allow us to see a typing derivation γ ; Γ ⊢ P ⊲ ∆ as a unique arrow from Γ to ∆, and to freely compose and reason with arrows with the same typing context and a matching output and input usage contexts. Leftover typing also allows us to state a framing theorem showing that adding or subtracting arbitrary usage annotations to the input and output usage contexts preserves typing -one can understand a typing derivation independently from its unused resources. With framing one can show that comp-comm preserves typing: in P Q the typing of P and Q is independent of one another. Leftover typing allows weakening and strengthening to acquire a more general form where linear variables can freely be added or removed from context tooas long as they are added and removed to and from both the input and output contexts. Theorem 2 (Weakening). Let ins i insert an element into a context at position i. Let P be well typed in γ ; Γ ⊢ P ⊲ Ξ. Then, lifting every variable greater than or equal to i in P is well typed in ins i t γ ; ins i x Γ ⊢ lift i P ⊲ ins i x Ξ. Theorem 3 (Strengthening). Let del i delete the element at position i from a context. Let P be well typed in γ ; Γ ⊢ P ⊲ Ξ. Let i be a variable not in P , such that uP : UNUSED i P . Then lowering every variable greater than i in P is well typed in del i γ ; del i Γ ⊢ lower i P uP ⊲ del i Ξ. Subject congruence states that structural congruence (Definition 4) preserves the well-typedness of a process. Finally, subject reduction states that reducing on a channel c (Definition 5) preserves the well-typedness of a process -after consuming ℓ # from c if c is an external channel. Below we use Γ ∋ i x ⊲ ∆ to stand for γ ; Γ ∋ i t ; x ⊲ ∆ for some γ and t. We refer to Appendix B for a more detailed account of the mechanised proofs. Conclusions, Related and Future Work Extrinsic Encodings Extrinsic encodings define a syntax (often well-scoped) and a runtime semantics prior to any type system. This allows one to talk about ill-typed terms, and defers the proof of subject reduction to a later stage. To the best of our knowledge, leftover typing makes its appearance in 1994, when Ian Mackie first uses it to formulate intuitionistic linear logic [24]. 
Allais [2] uses leftover typing to mechanise in Agda a bidirectional type system for the linear λcalculus. He proves type preservation and provides a decision procedure for type checking and type inference. In this paper, we follow Allais [2] and apply leftover typing to the π-calculus for the first time. We generalise the usage algebra, leading to linear, graded and shared type systems. Drawing from quantitative type theory (by McBride and Atkey [26,3]), in our work we too are able to talk about fully consumed resources -e.g., we can transmit ℓ ∅ multiplicities of a fully exhausted channel. Recent years have seen an increase in the efforts to mechanise resource-aware process algebras, but one of the earliest works is the mechanisation of the linear π-calculus in Isabelle/HOL by Gay [16]. Gay encodes the π-calculus with linear and shared types using de Bruijn indices, a reduction relation and a type system posterior to the syntax. However, in his work typing rules demand user-provided context splits, and variables with consumed usage annotations are erased from context. We remove the demand for context splits, preserve the ability to talk about consumed resources, and adopt a more general usage algebra. Orchard et al. introduce Granule [29], a fully-fledged functional language with graded modal types, linear types, indexed types and polymorphism. Modalities include exact usages, security levels and intervals; resource algebras are pre-ordered semirings with partial addition. The authors provide bidirectional typing rules, and show the type safety of their semantics. The work by Goto et al. [19] is, to the best of our knowledge, the first formalisation of session types which comes along with a mechanised proof of type safety in Coq. The authors extend session types with polymorphism and pattern matching. They use a locally-nameless encoding for variable references, a syntax prior to types, and an LTS semantics that encodes session-typed processes into the π-calculus. Their type system uses reordering of contexts and extrinsic context splits, which are not needed in our work. Intrinsic Encodings Intrinsic encodings merge syntax and type system. As a result, one can only ever talk about well-typed terms, and the reduction relation by construction carries a proof of subject reduction. Significantly, by merging the syntax and static semantics of the object language one can fully use the expressive power of the host language. Thiemann formalises in Agda the MicroSession (minimal GV [17]) calculus with support for recursion and subtyping [34]. As Gay does in [16], context splits are given extrinsically, and exhausted resources are removed from typing contexts altogether. The runtime semantics are given as an intrinsically typed CEK machine with a global context of session-typed channels. In their recent paper, Ciccone and Padovani mechanise a dependentlytyped linear π-calculus in Agda [8]. Their intrinsic encoding allows them to leverage Agda's dependent types to provide a dependently-typed interpretation of messages -to avoid linearity violations the interpretation of channel types is erased. Message input is modeled as a dependent function in Agda, and as a result message predicates, branching, and variable-length conversations can be encoded. In contrast to our work, their algebra is on the multiplicities 0, 1, ω, and top-down context splitting proofs must be provided. In another recent work, Rouvoet et al. provide an intrinsic type system for a λ-calculus with session types [31]. 
They use proof-relevant separation logic and a notion of a supply-and-demand market to make context splits transparent to the user. Their separation logic is based on a partial commutative monoid that need not be deterministic nor cancellative. Their typing rules preserve the balance between supply and demand, and are extremely elegant. They distill their typing rules even further by modelling the supply-and-demand market as a state monad. Other Work Castro et al. [6] provide tooling for locally-nameless representations of process calculi in Coq, where de Bruijn indices are less popular than in Agda or Idris. They use their tool to help automate proofs of subject reduction for a type system with session types. Orchard and Yoshida [30] embed a small effectful imperative language into the session-typed π-calculus, showing that session types are expressive enough to encode effect systems. Based on contextual type theory, LINCX [18] extends the linear logical framework LLF [7] by internalising the notion of bindings and contexts. The result is a meta-theory in which HOAS encodings with both linear and dependent types can be described. The developer obtains for free an equational theory of substitution and decidable typechecking without having to encode context splits within the object language. Further work on mechanisation of the π-calculus [13,20,5,14,1] focuses on non-linear variations, in contrast to our range of linear, graded and shared types. Conclusions and Future Work We provide a well-scoped syntax and a semantics for the π-calculus, extrinsically define a type system on top of the syntax capable of handling linear, graded and shared types under the same unified framework, and show subject reduction. We avoid extrinsic context splits by defining a type system based on leftover typing [2]. As a result, theorems like framing, weakening and strengthening can now be stated also for the linear π-calculus. Our work is fully mechanised in around 1850 lines of code in Agda [37]. As future work we intend to expand our framework to include infinite behaviour by adding process replication, which is challenging, as to prove subject congruence one needs to uniquely determine the resources consumed by a process -e.g., by adding type annotations to the syntax. Orthogonally, we aim to investigate making our typing rules bidirectional, which would allow us to provide a decision procedure for type checking processes in a given set of algebras. Finally, we will use our π-calculus with leftovers as an underlying framework on top of which we can implement session types, via their encodings into linear types [9,11,33] and other advanced type theories. A From names to de Bruijn indices and back The syntax of the π-calculus [32] using channel names is given by the RAW grammar below:

RAW : SET
═══════════════ Raw
RAW ::= 0                    (inaction)
      | (ν NAME) RAW         (restriction)
      | RAW RAW              (parallel)
      | NAME ( NAME ) RAW    (input)
      | NAME NAME RAW        (output)

Channel names and variables range over x, y, z in NAME and processes over P, Q, R in RAW. Process 0 denotes the terminated process, where no further communications can occur. Process (ν x) P creates a new channel x bound with scope P. Process P Q is the parallel composition of processes P and Q. Processes x ( y ) P and x y P denote, respectively, the input and output processes of a variable y over a channel x, with continuation P.
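The RAW grammar transcribes to a plain, non-indexed data type. Below is a minimal Agda sketch with strings as names; the constructor names are again ours.

open import Data.String using (String)

Name : Set
Name = String

-- Name-based processes: nothing forces a reference to be in scope, which
-- is exactly what the WELLSCOPED predicate required by fromRaw checks.
data Raw : Set where
  raw-end  : Raw                      -- 0
  raw-new  : Name → Raw → Raw         -- (ν x) P
  raw-par  : Raw → Raw → Raw          -- parallel composition
  raw-recv : Name → Name → Raw → Raw  -- input x ( y ) P
  raw-send : Name → Name → Raw → Raw  -- output of y over x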
Scope restriction (ν x) P and input x ( y ) P are binders; they are the only constructs that introduce bound names: x and y in P, respectively. In order to demonstrate the correspondence between a π-calculus that uses names and one that uses de Bruijn indices, we provide conversion functions in both directions and prove that they are inverses of each other up to α-conversion. From names to de Bruijn indices When we translate into de Bruijn indices we keep the original binder names around -they will serve as name hints for when we translate back. The translation function fromRaw works recursively, keeping a context ctx : NAMES n that maps the first n indices to their names. Named references within the process are substituted with their corresponding de Bruijn index. We demand that the original process is well-scoped: that all its free variable names appear in ctx; this is decidable and we therefore automate the construction of such a proof term. fromRaw : (ctx : NAMES n) (P : RAW) → WELLSCOPED ctx P → PROCESS n From de Bruijn indices to names The translation function toRaw works recursively, keeping a context ctx : NAMES n that maps the first n indices to their names. As some widely-used languages do, this translation function produces unique variable names. These unique variable names use the naming scheme < namehint > <n>, where <n> denotes that the name < namehint > has already been bound n times before. toRaw : (ctx : NAMES n) → PROCESS n → RAW Example 4 (fromRaw and toRaw). We illustrate the conversion functions from names to de Bruijn indices (fromRaw) and back (toRaw) with three processes P, Q, R below. Process P uses names x, y, z and is translated via the conversion function fromRaw into process Q, which uses de Bruijn indices. Process Q is then translated via toRaw into process R, which follows the Barendregt convention and is α-equivalent to the original process P. In the following we present the main results that our conversion functions satisfy. Lemma 1. Translating from de Bruijn indices to names via toRaw results in a well-scoped process. Lemma 2. Translating from de Bruijn indices to names via toRaw results in a process that follows the Barendregt convention. Proof. All the above results are proved by induction on Process, Var (Definition 1) and Raw. Complete details can be found in our mechanisation in Agda [37]. B Type Safety Exchange This property states that the exchange of two variables preserves the well-typedness of a process. We extend exchange i introduced in Definition 4 to exchange types in typing contexts and usage annotations in usage contexts. Proof. All the above theorems are proved by induction on Types and VarRef. For details, refer to our mechanisation in Agda [37]. Subject Congruence This property states that applying structural congruence (Definition 4) to a well-typed process preserves its well-typedness. To prove this result, we must first introduce lemmas that establish that certain syntactic manipulations can be inverted (Lemma 5, Lemma 6) and how unused variables relate to the preservation of leftovers (Lemma 7). Lemma 5. The function lower i P uP has an inverse lift i P that increments every Var greater than or equal to i, such that lift i (lower i P uP ) ≡ P . Proof. By structural induction on Process and Var. Lemma 6. The function exchange i P is its own inverse: exchange i (exchange i P ) ≡ P . Proof. By structural induction on Process and Var. Lemma 7.
For all well-typed processes γ ; Γ ⊢ P ⊲ Ξ, if the variable i is unused within P , then Γ at i is equal to Ξ at i. Proof. By induction on Process and Var. We are now in a position to prove subject congruence. Proof. The proof is by induction on Equals ≃ . Here we only consider those cases that are not purely inductive: the base cases for struct and their symmetric variants. Full proof in [37]. We proceed by induction on StructCong ∼ = : -Case comp-assoc: trivial, as leftover typing is naturally associative. -Case comp-sym for P Q: we use framing (Theorem 1) to shift the output context of P to the one of Q; and the input context of Q to the one of P . -Case comp-end: trivial, as the typing rule for 0 has the same input and output contexts. -Case scope-end: we show that the usage annotation of the newly created channel must be ℓ ∅ , making the proof trivial. In the opposite direction, we instantiate the newly created channel to a type ½ and a usage annotation ℓ ∅ . -Case scope-ext for ν (P Q): we need to show that P preserves the usage annotations of the unused variable (Lemma 7) and then use strengthening (Theorem 3). In the reverse direction, we use weakening (Theorem 2) on P and show that lowering and then lifting P results in P (Lemma 5). -Case scope-comm: we use exchange (Theorem 6), and for the reverse direction exchange and Lemma 6 to show that exchanging two elements in P twice leaves P unchanged. ⊓ ⊔ Substitution This result is key to proving subject reduction. In Theorem 8 we prove a generalised version of substitution, where the substitition P [ i → j ] is on any variable i. Then, in Theorem 9 we instantiate the generalised version to the concrete case where i is the most recently introduced variable 0, as required by subject reduction. Theorem 8 (Generalised substitution). Let process P be well-typed in γ ; Γ i ⊢ P ⊲ Ψ i . The substituted variable at position i can be split into m in Γ i , and into n in Ψ i . Substitution will take these usages m and n away from i and transfer them to the variable j we are substituting for. In other words, let there be some Γ , Ψ , Γ j and Ψ j such that: Let Γ and Ψ be related such that Γ := ∆ ⊗ Ψ for some ∆. Let ∆ have a usage annotation ℓ ∅ at position i, so that all consumption from m to n must happen in P . Then substituting i to j in P will be well-typed in γ ; Proof. By induction on the derivation γ ; Γ i ⊢ P ⊲ Ψ i . -For constructor 0 we get Γ i ≡ Ψ i . From ∆ i ≡ ℓ ∅ follows that m ≡ n. Therefore Γ j ≡ Ψ j and end can be applied. -For constructor ν we proceed inductively, wrapping arrows ∋ i m, ∋ j m, ∋ i n and ∋ j n with 1+. -For constructor ( ) we must split ∆ to proceed inductively on the continuation. Observe that given the arrow from Γ i to Ψ i and given that ∆ is ℓ ∅ at index i, there must exist some δ such that m := δ · 2 n. l • If the input is on the variable being substituted, we split m such that m := ℓ i · 2 l for some l, and construct an arrow Ξ i ∋ i l ⊲ Γ for the inductive call. Similarly, we construct for some Ξ j the arrows Γ j ∋ j ℓ i ⊲ Ξ j as the new input channel, and Ξ j ∋ j l ⊲ Γ for the inductive call. • If the input is on a variable x other than the one being substituted, we construct the arrows Ξ i ∋ i m ⊲ Θ (for the inductive call) and Γ ∋ x ℓ i ⊲ Θ for some Θ. We then construct for some Ξ j the arrows Γ j ∋ x ℓ i ⊲ Ξ j (the new output channel) and Xi j ∋ j m ⊲ Θ (for the inductive call). 
Given there exists a composition of arrows from Ξ i to Ψ , we conclude that Θ splits ∆ such that Γ := ∆ 1 ⊗ Θ and Θ := ∆ 2 ⊗ Ψ . As ℓ ∅ is a minimal element, then ∆ 1 must be ℓ ∅ at index i, and so must ∆ 2 . -applies the ideas outlined for the ( ) constructor to both the VarRef doing the output, and the VarRef for the sent data. -For we first find a δ, Θ, ∆ 1 and ∆ 2 such that Ξ i ∋ i δ ⊲ Θ and Γ := ∆ 1 ⊗ Θ and Θ := ∆ 2 ⊗ Ψ . Given ∆ is ℓ ∅ at index i, we conclude that ∆ 1 and ∆ 2 are too. Observe that m := δ · 2 ψ, where ψ is the usage annotation at index i consumed by the subprocess P . We construct an arrow Ξ j ∋ j δ ⊲ Θ, for some Ξ j . We can now make two inductive calls (on the derivation of P and Q) and compose their results. Diagrammatic representation of the case for substitution. Continuous lines represent known facts, dotted lines proof obligations. Subject Reduction Finally we are ready to present our main result, stating that if P is well typed and it reduces to Q, then Q is well typed. The relation between the typing contexts used to type P and Q will be explained in Theorem 10. In the π-calculus we distinguish between a reduction P −→ internal Q on a channel internal to P , and a reduction P −→ external i Q on a channel i external to P (refer to § 3). We first introduce an auxiliary lemma: Lemma 8. Every input usage context Γ of a well-typed process γ ; Γ ⊢ P ⊲ ∆ that reduces by communicating on a channel external (that is, P −→ external i Q for some Q) has a multiplicity of at least ℓ # at index i. Proof. By induction on the reduction derivation P −→ external i Q. Theorem 10 (Subject reduction). Let P be well typed in γ ; Γ ⊢ P ⊲ Ξ and reduce such that P −→ c Q. Proof. By induction on P −→ c Q. For the full details refer to our mechanisation in Agda. -Case comm: we apply framing (Theorem 1) (to rearrange the assumptions), substitution (Theorem 9) and strengthening (Theorem 3). -Case par: by induction on the process that is being reduced. -Case res: case split on channel c: if internal proceed inductively; if external 0 (i.e. the channel introduced by scope restriction) use Lemma 8 to subtract ℓ # from the channel's usage annotation and proceed inductively; if external (1+i) proceed inductively. -Case struct: we apply subject congruence (Theorem 7) and proceed inductively.
DGKB mediates radioresistance by regulating DGAT1-dependent lipotoxicity in glioblastoma Summary Glioblastoma (GBM) currently has a dismal prognosis. GBM cells that survive radiotherapy contribute to tumor progression and recurrence with metabolic advantages. Here, we show that diacylglycerol kinase B (DGKB), a regulator of the intracellular concentration of diacylglycerol (DAG), is significantly downregulated in radioresistant GBM cells. The downregulation of DGKB increases DAG accumulation and decreases fatty acid oxidation, contributing to radioresistance by reducing mitochondrial lipotoxicity. Diacylglycerol acyltransferase 1 (DGAT1), which catalyzes the formation of triglycerides from DAG, is increased after ionizing radiation. Genetic inhibition of DGAT1 using short hairpin RNA (shRNA) or microRNA-3918 (miR-3918) mimic suppresses radioresistance. We discover that cladribine, a clinical drug, activates DGKB, inhibits DGAT1, and sensitizes GBM cells to radiotherapy in vitro and in vivo. Together, our study demonstrates that DGKB downregulation and DGAT1 upregulation confer radioresistance by reducing mitochondrial lipotoxicity and suggests DGKB and DGAT1 as therapeutic targets to overcome GBM radioresistance. Correspondence bhyoun72@pusan.ac.kr In brief Kang et al. report that radioresistant GBM cells that express low levels of DGKB and high levels of DGAT1 prefer to store FAs in TG instead of utilizing them as an energy source to reduce mitochondrial ROS. Pharmacological or genetic regulation of DGKB and DGAT1 sensitizes GBM cells to radiotherapy. INTRODUCTION Glioblastoma (GBM) is the most prevalent and lethal primary tumor of the central nervous system (CNS). The median survival of GBM patients is only 15 months, which has not improved over the last two decades, and the 5-year recurrence rate of GBM after treatments is nearly 90%. The current standard of care for GBM patients is surgical resection followed by radiotherapy and temozolomide (TMZ). Notably, about 80% of GBM recurrences occur within radiation treatment fields. 1,2 In addition, GBM cells that survive radiotherapy become more aggressive and invasive. Therapeutic strategies to overcome the radioresistance are therefore urgently needed. Recent studies report that altered metabolism, a hallmark of cancer, is closely associated with the radioresistance. Activations of glycolysis and its parallel pathway, the pentose phosphate pathway, promote the repair of ionizing radiation (IR)-induced DNA strand breaks and sustain rapid DNA metabolism, thereby minimizing the IR-induced cytotoxicity. 3 Glycolysis is highly activated by IR in GBM. 4,5 Likewise, mitochondrial metabolism is tightly regulated to reduce oxidative damage. 6 Leveraging the altered metabolism might inform the development of novel therapeutics by enhancing the radiosensitivity of GBM. Fatty acids (FAs) are major structural components of membrane phospholipids and are also used to produce ATP by mitochondria-mediated b-oxidation. Although glucose is a major fuel for most brain tumor cells, GBM cells acquire large amounts of FAs to promote cell growth, and inhibition of b-oxidation reduces their proliferation. 7 In addition, lipid droplets (LDs), the lipid storage organelles, are prevalent in GBM but undetectable in the normal brain, suggesting that lipid metabolism is also highly involved in GBM progression. 
8,9 LDs are mainly composed of triglycerides (TGs), an ester derived from glycerol and three fatty acid molecules, and are known to play a role in maintaining lipid homeostasis. In a nutrient-poor condition, tumor cells quickly activate lipolysis to release FAs from LDs for structural lipid synthesis and energy production, facilitating tumor cell survival. 10,11 On the other hand, LDs also protect against excessive lipid catabolism by storing FAs in the form of TGs to avoid lipotoxicity in cancer cells. [12][13][14] Because IR-induced reactive oxygen species (ROS) production renders cancer cells more sensitive to oxidative stress-induced cell damage, cellular processes that prevent lipotoxicity are essential for GBM maintenance. 15 Therefore, maintaining lipid homeostasis between FA oxidation and structural lipid synthesis is important for GBM growth and radioresistance. Diacylglycerol kinases (DGKs) are a family of enzymes that catalyze the conversion of diacylglycerol (DAG) to phosphatidic acid (PA). 16 DGKs reduce the DAG level in the cell membrane, limiting DAG's functions as a secondary messenger and as a biosynthetic precursor of phospholipids and TGs. In the brain, most DGKs are abundantly expressed, with subtype-specific regional distribution. 17 Among 10 known DGK isozymes, diacylglycerol kinase B (DGKB) is mainly expressed in the cerebral cortex, 18 where GBM is predominantly located; however, the function of DGKB in GBM has rarely been studied. Diacylglycerol acyltransferase 1 (DGAT1), which catalyzes the esterification of acyl-coenzyme A (CoA) with diacylglycerol (DAG) to form TGs, is upregulated in GBM to reduce FA oxidation and protect against mitochondrial lipotoxicity by storing excess FAs into TGs. 18 Since DGKB regulates the level of DAG, a substrate for TG, it might also play an important role in maintaining the lipid homeostasis and thereby regulates GBM growth. In this study, we demonstrate that DGKB is significantly downregulated in radioresistant GBM cells, leading to DAG accumulation and thus preventing lipotoxicity. Activating DGKB or inhibiting IR-induced DGAT1 sensitizes GBM cells to radiotherapy by promoting FA catabolism and oxidative stress. Our study suggests a critical regulatory mechanism of lipid homeostasis and a strategy to overcome therapeutic resistance in GBM. RESULTS DGKB downregulation is associated with radioresistance in GBM cell lines and patient-derived glioblastoma stem-like cells To explore factors contributing to radioresistance in GBM, we established radioresistant GBM cells using the human GBM cell line U87MG ( Figure 1A). U87MG cells stably expressing luciferase were subcutaneously implanted in a BALB/c nude mouse (C) DGKB mRNA and protein levels in U87MG and U87MG-RR (left) as well as in U87MG, A172, BCL20-HP02, BCL21-HP03, and GSC11 without and with IR as indicated (right). Data are represented as mean ± SEM of three biological replicates. (D) The mRNA level of DGK isoforms DGKG, DGKH, DGKI, and DGKZ in U87MG-RR and U87MG. Data are represented as mean ± SEM of three biological replicates. Statistical analysis was performed with one-way ANOVA plus a Dunnett's multiple comparisons test for (C) and Student's t test for (C) and (D). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. and exposed to IR (2 Gy/day for 5 days). In vivo bioluminescent imaging showed that the tumor markedly shrunk on day 14 after IR but regrew on day 21 after IR ( Figure S1A, first cycle). 
Cells obtained from the tumor tissue that survived IR were again implanted subcutaneously in a BALB/c nude mouse and exposed to IR (2 Gy/day for 5 days). The tumor shrunk slightly on day 7 after IR but rapidly regrew ( Figure S1A, second cycle). Tumor cells obtained from dissociating the tumor were orthotopically implanted in a BALB/c nude mouse and exposed to IR (2 Gy/day for 5 days). The tumor did not shrink by IR ( Figure S1A, third cycle). Cells dissociated from the tumor were regarded as radioresistant U87MG cells (hereafter referred to as U87MG-RR cells). We next assessed the radiosensitivity of U87MG and U87MG-RR cells after orthotopic xenograft in BALB/c nude mice. As shown in Figure S1B, the growth of U87MG-driven tumor was significantly inhibited by IR but that of U87MG-RR-driven tumor was hardly inhibited by IR. Furthermore, histological analysis showed that tumor infiltration was clearly increased in U87MG-RR xenografts compared with U87MG xenografts ( Figure S1C). Because therapeutic resistance is closely related to cancer stemness, we analyzed stemness properties of U87MG and U87MG-RR cells in a serum-free stem cell medium. The mRNA levels of canonical stem cell transcription factors NANOG, OCT4, and SOX2 were significantly upregulated in U87MG-RR cells ( Figure S1D). Moreover, limiting dilution assay showed that the frequency of glioblastoma stem-like cells (GSCs) capable of forming tumor spheres was remarkably high in U87MG-RR cells ( Figure S1E). In sum, U87MG-RR cells not only were more resistant to IR but also were more invasive and had higher stem-like properties than their parental cells. Next, we performed RNA-sequencing analysis of U87MG and U87MG-RR cells to find genes that might play a significant role in GBM radioresistance. Additionally, we analyzed GBM poor prognosis-associated genes using The Cancer Genome Atlas (TCGA) database and lipid metabolic process-associated genes using the Gene Ontology (GO) database (GO: 0006629). We identified 51 significantly differentially expressed genes overlapped with these three analyses ( Figure 1B, left). Among them, we focused on DGKB, the major form of the DGK family in the brain ( Figure 1B, right), and investigated the roles of DGKB in GBM radioresistance. Both mRNA and protein levels of DGKB were significantly lower in U87MG-RR cells compared with the control ( Figure 1C, left). In addition, DGKB mRNA and protein levels were reduced in U87MG and A172 GBM cell lines and in BCL20-HP02, BCL21-HP03, and GSC11 patient-derived GSCs after treatment with a single fraction of 3 Gy or three fractions of 1 Gy (over a 24-h interval) ( Figure 1C, right). On the other hand, mRNA levels of other DGK isoforms that expressed in the brain were similar in U87MG-RR and U87MG cells ( Figure 1D). 19 DGKB SV3 0 is an isoform of DGKB that lacks the C-terminal part encoded by the last exon, loses membrane localization, and has characteristics different from those of the full-length isoform ( Figure S2A). 20,21 Because the expression of DGKB SV3 0 barely changed in U87MG-RR or in the irradiated cells, only the full-length isoform of DGKB appeared to be involved in GBM radioresistance (Figure S2B). Collectively, these results demonstrate that IR-induced downregulation of DGKB may contribute to GBM radioresistance. 
Downregulation of DGKB is important for radioresistant cancer cell proliferation and tumor growth To examine the roles of DGKB in GBM cell proliferation and tumor growth, we established GBM cell lines with DGKB knockdown, knockout, or overexpression. DGKB overexpression reduced the cell viability and the colony-forming ability of irradiated U87MG-RR cells, whereas either knockdown or knockout of DGKB promoted the cell viability and the colony-forming ability of irradiated U87MG and the patient-derived GSCs (Figures 2A and 2B). Interestingly, the basic clonogenicity was unchanged between untreated control and DGKB knockdown or overexpression cells, suggesting that the effect of DGKB on GBM cell proliferation only plays a role under radiation (Figure 2B). To examine the role of DGKB in vivo, we established orthotopic xenograft mouse model using U87MG-RR cells with stable expression of luciferase. In vivo bioluminescent imaging showed that IR alone had minimal effect on the tumor growth, whereas IR combined with DGKB overexpression significantly reduced the tumor growth by 62.19% compared with IR alone ( Figures 2C and 2D). H&E staining confirmed that the tumor size was remarkably decreased by combining IR with DGKB overexpression ( Figure 2E). Immunohistochemistry (IHC) staining showed that the DGKB level was decreased by IR, and DGKB overexpression markedly enhanced IR-induced apoptosis, as determined by the level of cleaved caspase 3 (Figure 2F). Consequently, IR combined with DGKB overexpression conferred a significant survival benefit compared with untreated control (28 days of median survival) or IR alone (29 days of median survival) ( Figure 2G). In summary, downregulated DGKB in GBM enhanced radioresistance by promoting cell proliferation and tumor growth. Increased DAG, the substrate of DGKB, confers radioresistance to GBM cells DGKB phosphorylates DAG to generate PA. To assess whether the role of DGKB in GBM radioresistance depended on its kinase activity, we generated a kinase-dead mutant (G495D) of DGKB that lacked the ability to convert DAG to PA ( Figure 3A). 22 DGKB knockdown in U87MG-RR, U87MG, and patient-derived GSCs promoted cell viability after IR, which was abrogated by the ectopic expression of the wild-type but not mutant DGKB ( Figure 3B). Because the enzymatic function of DGKB contributed to GBM cell survival, we focused on levels of the substrate and the product of DGKB. In U87MG-RR, U87MG, and patientderived GSCs with DGKB knockdown, intracellular DAG levels were decreased, whereas PA levels were increased upon overexpressing wild-type but not mutant DGKB ( Figures 3C and 3D). Next, we investigated whether increased DAG levels or decreased PA levels contributed to radioresistance by downregulated DGKB in GBM. Supplementation of DAG increased the cell viability in cells overexpressing DGKB, whereas PA supplementation did not affect the cell viability in DGKB knockdown cells after IR ( Figure 3E). Furthermore, intracellular DAG levels in U87MG-RR cells are significantly higher than in U87MG cells, indicating that DAG accumulation is involved in GBM radioresistance ( Figure 3F). Taken together, DAG accumulation resulting from downregulated DGKB contributes to GBM radioresistance. 
Cell Reports Medicine 4, 100880, January 17, 2023 3 Article ll OPEN ACCESS DGKB downregulation contributes to radioresistance by decreasing mitochondrial lipotoxicity To explore whether DGKB downregulation provided a metabolic advantage for radioresistance, we investigated changes in bioenergetics and biosynthesis. Consistent with our previous studies, 4,5 IR significantly increased the glycolytic rate but reduced the lipid synthesis in U87MG-RR and patient-derived GSCs ( Figures S3A and S3B). However, knockdown or overexpression of DGKB did not affect glycolytic rates, RNA synthesis, or lipid synthesis ( Figures S3A-S3C). We then assessed b-oxidation by monitoring the release of 3 H 2 O from [9,10-3 H] oleic acid. Interestingly, by the end of the labeling period (pulse), the release of 3 H 2 O was reduced by IR, which was reversed by DGKB overexpression ( Figure 4A). After removing oleic acid from the media (chase), the release of 3 H 2 O into the media was generally decreased but DGKB overexpression still increased the release of 3 H 2 O, indicating that DGKB facilitates b-oxidation in GBM. Similarly, the 14 CO 2 production from the complete b-oxidation of [1-14 C] oleic acid decreased by IR and restored by DGKB overexpression ( Figure S3D). Additionally, after IR, the intracellular level of [9,10-3 H] oleic acid was unchanged within the first 10 min, which was not affected by DGKB expression, but was increased by 30 min, which was reduced by DGKB expression ( Figure 4B). These results indicated that the alteration of b-oxidation after IR was not because of altered cellular uptake of oleic acid and suggested that b-oxidation rather than fatty acid uptake was affected by IR-induced DGKB downregulation. Likewise, as shown in Figure S3E, basal b-oxidation is significantly reduced in U87MG-RR cells compared with parental cells. Collectively, IR-induced DGKB downregulation inhibits b-oxidation in GBM. The changes in colony formation after shDGKB alone or DGKB overexpression alone, IR alone, or IR combined with DGKB overexpression in U87MG-RR, DGKB knockdown or knockout in U87MG, and DGKB knockdown in BCL20-HP02 and BCL21-HP03. Data are represented as mean ± SEM of three biological replicates. (C and D) In vivo bioluminescence images (C) and relative luminescence units (D) of orthotopic xenografts derived from U87MG-RR in the untreated (control) group, IR group, and IR combined with DGKB overexpressing group (n = 20). (E and F) Representative H&E staining (E) and IHC staining for DGKB and cleaved caspase 3 (F) of orthotopic xenograft GBM mouse models. Data are represented as mean ± SEM of three biological replicates. Scale bars, 2,000 mm (E) or 50 mm (F). (G) Survival plots of mice with orthotopic xenograft GBM with DGKB overexpression and without (control) or with indicated treatment (IR treatments started 7 days after xenograft). Statistical analysis was performed with Student's t test for (A) one-way ANOVA plus a Tukey's multiple comparisons test for (B and D) compared with luminescence values of IR alone at 28 days after irradiation for (D), and Log rank (Mantel-Cox) test for (G). NS, non-significant; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. Because upregulating DGKB activated b-oxidation, we next tested whether upregulating DGKB could increase the ATP level by the activation of b-oxidation. In accordance with the b-oxidation activity, intracellular ATP levels were decreased upon downregulation of DGKB and increased by DGKB overexpression ( Figure 4C). 
However, IR alone significantly increased ATP levels despite the inhibition of b-oxidation induced by DGKB downregulation, probably because other metabolic pathways, such as glucose metabolism, affected cellular bioenergetics. Accordingly, the oxygen consumption rate (OCR) was significantly reduced by DGKB downregulation and increased by its overexpression, whereas the extracellular acidification rate (ECAR) hardly changed ( Figures 4D and S3F). Together, downregulation of DGKB attenuated the ATP production generated by b-oxidation. Recent studies have shown that excessive b-oxidation induces tumor cell death, 23 and that increased storage of TG and LDs can be beneficial for tumor cell survival. 24 Because we confirmed that downregulation of DGKB conferred radiore-sistance through DAG accumulation, we hypothesized that DAG accumulation might allow acyl-CoA to be used for TG storage rather than b-oxidation to induce radioresistance ( Figure 4E). As we expected, TG levels after IR were increased with or without knockdown of DGKB but did not change when DGKB was overexpressed ( Figure 4F). Consistent with changes in TG levels, BODIPY staining showed that DGKB knockdown increased LDs, whereas DGKB overexpression diminished LDs after IR ( Figure 4G). In correspondence with levels of DAG and TG following DGKB regulation, acyl-CoA (C16:0) and acylcarnitine, which is converted from FAs and shuttled into mitochondria for b-oxidation, were decreased by DGKB knockdown and increased by DGKB overexpression after IR ( Figures S3G and S3H). Consequently, the level of acetyl-CoA (C2:0), a major product of FA degradation by b-oxidation, was also decreased by DGKB knockdown and increased by DGKB overexpression after IR ( Figure S3I). In sum, radioresistant GBM cells that express low levels of DGKB prefer to store FAs in TG instead of utilizing them as an energy source. (E) The cell viability of U87MG-RR, U87MG, BCL20-HP02, and BCL21-HP03 following IR alone or together with DGKB overexpression or knockdown without or with supplementation of DAG or PA. (F) Intracellular DAG levels in U87MG and U87MG-RR. WT, WT DGKB; MT, kinase-dead mutant DGKB (G495D). DGKB knockdown experiments were processed by lentiviral vectors pLKO-Control shRNA or pLKO-shDGKB. All data shown are mean ± SEM of three biological replicates. Statistical analysis was performed with Student's t test for (F) and one-way ANOVA plus a Tukey's multiple comparisons test for the others. ns, non-significant; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. Because radioresistant GBM cells prefer not to use FAs as an energy source despite their high energy demand, we hypothesized that excessive b-oxidation might contribute to radiosensitivity. To investigate whether excessive b-oxidation induced mitochondrial dysfunction, we measured mitochondrial membrane potential (MMP). As shown in Figure 4H, DGKB overexpression contributed to aberrant MMP after IR. Likewise, DGKB overexpression severely damaged mitochondria after IR, as determined by transmission electron microscopy (TEM) imaging ( Figure 4I). Furthermore, we assessed mitochondrial ROS and H 2 O 2 through mitoSOX and mitoPY staining and found that DGKB downregulation attenuated the mitochondrial ROS and H 2 O 2 levels, whereas DGKB overexpression highly increased their levels after IR ( Figure 4J). 
However, cellular ROS and H 2 O 2 levels were not affected by DGKB overexpression or downregulation after IR (Figures S3J and S3K), suggesting that regulation of DGKB affects ROS and H 2 O 2 through b-oxidation in mitochondria. In accordance with ROS and H 2 O 2 results, apoptosis was decreased by DGKB downregulation and increased by its overexpression after IR ( Figures 4K and S3L). Next, we tested whether DAG accumulation induces radioresistance by reducing b-oxidation-derived ROS. As shown in Figure 4L, treatment of DAG derives radioresistance, but induction of b-oxidation through palmitate treatment diminishes the effect and confers radiosensitivity. Likewise, it was confirmed that mitochondrial ROS is increased by the b-oxidation induction even when DAG is accumulated ( Figure 4L). Collectively, DAG accumulation resulting from DGKB downregulation in GBM conferred radioresistance by reducing excessive b-oxidation and inducing the storage of TG and LDs to prevent cell death from mitochondrial ROS. IR-induced DGAT1 confers radioresistance by increasing TG formation Because IR-induced DAG accumulation requires the activation of diacylglycerol acyltransferase (DGAT), which catalyzes DAG to TG, to increase the TG formation, we hypothesized that the expression of DGAT was also regulated by IR. We assessed levels of DGAT subtypes (DGAT1 and DGAT2) in U87MG-RR and patient-derived GSCs. Because the lipid metabolism mainly occurs in the liver, we also analyzed the human hepatocellular carcinoma HepG2 cells. DGAT1 expression was overwhelmingly higher than DGAT2 in U87MG-RR and patient-derived GSCs, whereas DGAT1 expression was similar to DGAT2 in HepG2 cells ( Figure 5A). Furthermore, higher DGAT1, but not DGAT2, expression in GBM correlated with worse overall survival of GBM patients in the TCGA database ( Figure 5B). We therefore then evaluated whether the DGAT1 expression was increased by IR. Consistent with our hypothesis, DGAT1 was increased by IR and its level was higher in U87MG-RR than in the control cells ( Figure 5C). Next, we tested whether DGAT1 directly regulates b-oxidation activity. As shown in Figure 5D, DGAT1 knockdown facilitates b-oxidation in GBM, suggesting that IR-induced DGAT1 upregulation inhibits b-oxidation in GBM. In addition, the TG level was significantly reduced by DGAT1 knockdown, with or without DAG addition, in the irradiated GBM cells ( Figure 5E). Consequently, knockdown of DGAT1 increased the cell apoptosis after IR, suggesting that inhibition of storing FA induces cell death by excess b-oxidation ( Figure 5F). To investigate the mechanism of IR-induced changes in DGAT1 expression, we performed a luciferase reporter assay to determine the DGAT1 promoter activity and observed no significant change in GBM cells treated with IR ( Figure 5G). These results suggested that IR did not increase the transcription of Article ll OPEN ACCESS DGAT1. Next, we hypothesized that microRNAs (miRNAs) targeting DGAT1 might be changed by IR. We identified 18 miR-NAs as candidates that targeted DGAT1 using miRDB, TargetScan, and miRWalk ( Figure 5H, left) but found that only mi-croRNA-3918 (miR-3918) was significantly decreased by IR (Figure 5H, right). To validate that miR-3918 affected the DGAT1 expression, we first confirmed that miR-3918 reduced the expression of a reporter carrying the wild-type 3 0 UTR of DGAT1 but not a reporter carrying the 3 0 UTR of DGAT1 with mutated miR-3918 targeting site ( Figures S4A and S4B). 
We then found that miR-3918 effectively inhibited IR-induced DGAT1 expression in GBM ( Figure 5I). Consistent with the effect of the knockdown data, miR-3918 suppressed TG levels (Figure 5J) and induced ROS and H 2 O 2 production ( Figure 5K) and apoptosis after IR ( Figure 5L). Likewise, acylcarnitine and acetyl-CoA levels were rescued by miR-3918 after IR ( Figures S5A and S5B). These results together showed that DGAT1 attenuated b-oxidation through TG formation and that miR-3918, which targets DGAT1, contributed to sensitizing the IR effect through DGAT1 inhibition. Genetic inhibition of DGAT1 significantly suppresses radioresistance and prolongs overall survival in GBM xenograft mouse models To examine the role of DGAT1 in vivo, we established orthotopic xenograft GBM mouse models using U87MG-RR and BCL21-HP03 cells, then treated tumor-bearing mice with IR together with shDGAT1 or lentiviral-miR-3918 mimic ( Figure 6A). In vivo bioluminescent imaging showed that, compared with IR alone, IR combined with shDGAT1 or miR-3918 reduced U87MG-RR tumor growth by 65.34% or 53.64%, respectively, and reduced BCL21-HP03 tumor growth by 64.48% or 52.94%, respectively ( Figures 6B and 6C). Consistently, H&E staining showed that tumor size was remarkably decreased by IR combined with DGAT1 knockdown or miR-3918 treatment ( Figure 6D). Moreover, the DGAT1 level was highly increased by IR and DGAT1 knockdown or miR-3918 treatment markedly enhanced IR-induced apoptosis, as determined by IHC staining of cleaved caspase 3 ( Figure 6E). Consistent with our in vitro analysis, LDs were increased by IR, which was decreased by DGAT1 knockdown or miR-3918 treatment, as determined by immunofluorescence (IF) staining of TIP47, suggesting that lipotoxicity induced by b-oxidation increased under conditions of DGAT1 downregulation. Consequently, the overall survival of mice with U87MG-RR xenografts was significantly improved by IR combined with DGAT1 downregulation compared with untreated control (28 days of median survival) or IR alone (30 days of median survival) ( Figure 6F). Likewise, the overall survival of mice with BCL21-HP03 xenografts was significantly improved by IR combined with DGAT1 downregulation compared with untreated control (19 days of median survival) or IR alone (25 days of median survival). Notably, because a previous study showed that DGAT1 knockdown solely suppressed tumor growth, we established orthotopic xenograft GBM mouse models using U87MG-RR untreated control cells and shDGAT1-treated cells then treated tumor-bearing mice with IR. Although DGAT1 knockdown alone reduced the tumor growth to some extent, shDGAT1 combined with IR reduced the tumor growth by 48.82% compared with shDGAT1 alone (Figures S6A and S6B). Consequently, the overall survival of mice with U87MG-RR xenografts was significantly improved by shDGAT1 combined with IR compared with untreated control (27 days of median survival) or shDGAT1 alone (33 days of median survival) ( Figure S6C). Collectively, DGAT1 downregulation by its short hairpin RNA (shRNA) or miR-3918 significantly reduced radioresistance and extended overall survival in both models. Pharmacological alteration of DGKB and DGAT1 expressions sensitizes GBM cells to IR and attenuates tumor growth in GBM xenograft mouse models Although TMZ is the only chemotherapeutic agent currently used in GBM patients, approximately half of treated patients do not respond to TMZ due to their tumors overexpressing O 6 -methylguanine-DNA methyltransferase (MGMT). 
25 Moreover, most drugs developed for GBM treatment over the past 20 years failed in clinical trials due to various challenges, including inefficient drug delivery and severe side effects. In this regard, we investigated existing blood-brain barrier (BBB)-penetrating radiosensitizers with the dual functions of activating DGKB and inhibiting DGAT1 by interrogating The Connectivity Map (CMap). 26 Of the various candidates, we found that cladribine (2-chloro-2′-deoxyadenosine), a US Food and Drug Administration (FDA)-approved drug for leukemia, showed the best enrichment scores and significantly increased the DGKB mRNA level and decreased the DGAT1 mRNA level (Figure 7A). We then tested whether mRNA levels of DGKB and DGAT1 were regulated by cladribine treatment in U87MG-RR cells and patient-derived GSCs. Cladribine significantly increased DGKB and decreased DGAT1 mRNA levels (Figure 7B) and increased the TG level after IR (Figure 7C). Moreover, cladribine treatment increased mitochondrial ROS and H2O2 levels (Figure 7D) and apoptosis (Figure 7E) after IR. To determine whether cladribine can affect tumor growth in vivo and animal survival, we established orthotopic xenograft GBM mouse models using U87MG-RR and BCL21-HP03 cells and treated them with IR or IR combined with cladribine or TMZ (Figure 7F). In vivo bioluminescent imaging showed that cladribine markedly sensitized the GBM cells to IR in both models (Figures 7G and 7H). When combined with IR, TMZ was more effective than cladribine in inhibiting the growth of U87MG-RR-driven tumors (reduced by 96.67% and 75.78%, respectively) but was less effective than cladribine in BCL21-HP03-driven tumors (reduced by 18.97% and 63.43%, respectively). This difference is presumably because U87MG-RR cells are MGMT negative, whereas BCL21-HP03 cells are MGMT positive. Figure 6. Genetic inhibition of DGAT1 significantly suppresses radioresistance and prolongs overall survival in GBM xenograft mouse models (A) The schedule of the U87MG-RR or BCL21-HP03 orthotopic xenograft mouse model treated with IR together with shDGAT1 or miR-3918. (B) In vivo bioluminescent images of orthotopic xenografts derived from U87MG-RR and BCL21-HP03 in control mice and in mice treated with IR or IR together with miR-3918 or shRNA (n = 16). (C) The relative luminescence units of orthotopic xenografts derived from U87MG-RR and BCL21-HP03 in control mice, with IR or IR together with miR-3918 or shRNA. (D) Representative H&E staining images of orthotopic U87MG-RR and BCL21-HP03 xenograft tissues with control, IR, IR combined with DGAT1 knockdown, and IR together with miR-3918. Scale bars, 2,000 μm. (E) Representative images of IHC for DGAT1 or cleaved caspase 3 and IF for TIP47 in tumor tissues from mice orthotopically xenografted with U87MG-RR and BCL21-HP03 then without (control) or with IR, IR combined with DGAT1 knockdown, or IR combined with miR-3918. Scale bars, 50 μm (upper) or 20 μm (lower). (F) Kaplan-Meier survival curve of mice with orthotopic U87MG-RR or BCL21-HP03 xenografts untreated (control) or treated with IR, IR combined with DGAT1 knockdown, and IR together with miR-3918 (IR treatments started 7 days after xenograft). Statistical analysis was performed with one-way ANOVA plus Tukey's multiple comparisons test for (C), compared with luminescence values of IR alone at 28 days after irradiation, and the log-rank (Mantel-Cox) test for (F). ns, non-significant; *p < 0.05; **p < 0.01; ****p < 0.0001.
Likewise, H&E staining showed that tumor size was remarkably decreased by IR combined with cladribine ( Figure 7I). Moreover, consistent with our in vitro analysis, cladribine restored IR-induced DGKB downregulation and reduced IR-induced DGAT1 ( Figure 7J). Likewise, cladribine increased the level of cleaved caspase 3 and decreased TIP47 after IR, indicating that cladribine highly sensitizes GBM cells to IR by reducing LDs and promoting apoptosis. Consequently, overall survival of tumor-bearing mice was significantly improved by IR combined with cladribine compared with untreated control (20.5 days of median survival) or IR alone (27 days of median survival), and it was much more effective than TMZ combined with IR (27 days of median survival) in BCL21-HP03 xenografts ( Figure 7K). Collectively, cladribine induced DGKB upregulation and DGAT1 downregulation, significantly sensitized GBM cells to IR, decreased tumor growth, and increased overall survival in both models. DISCUSSION Rewiring of lipid metabolism is important for ATP production and maintenance of redox homeostasis in GBM. However, how GBM cells acquire radioresistance by regulating lipid metabolism has not been elucidated. In this study, we demonstrate that radioresistant GBM cells maintain lipid homeostasis through DGKB downregulation and DGAT1 upregulation to reduce the FA oxidation-mediated ROS after radiation. DGKB knockdown or the expression of DGKB kinase-dead mutant induces DAG accumulation and IR-induced DGAT1 promotes the accumulation of TGs and LDs to prevent FAs from entering the mitochondria to undergo FA oxidation. Conversely, DGKB overexpression or DGAT1 inhibition by miR-3918 mimic activates FA oxidation to increase ROS-induced mitochondrial damage and GBM cell radiosensitivity. Additionally, cladribine, which increases the expression of DGKB and decreases that of DGAT1, significantly improves the survival of GBM-bearing mice in combination with IR, indicating that targeting the lipid homeostasis could be a promising strategy to overcome radioresistance of GBM. Reprogramming of the lipid metabolism is closely linked to alterations in energy production by FA oxidation. Because the mitochondrial electron transport chain is a major source of ROS production and GBM tissue contains large amounts of TGs, 13 FA oxidation can be harmful due to the unavoidable production of ROS in GBM cells. In this regard, therapeutic approaches promoting lipid catabolism can possibly be effective when the ROS levels cross the death threshold. 27,28 Notably, antioxidantrelated genes are highly upregulated in our radioresistant GBM cell model according to the RNA-sequencing analysis. For example, the level of manganese superoxide dismutase (MnSOD) increases by 3.778-fold and the interleukin (IL) 6 level increases by 12.394-fold in U87MG-RR cells compared with their parental cells. MnSOD is the main antioxidant enzyme protecting cells from mitochondrial ROS, and a recent study shows that the (I) Representative H&E staining images of orthotopic U87MG-RR and BCL21-HP03 xenograft tissues from control mice or mice treated IR or IR together with cladribine or TMZ. Scale bars, 2,000 mm. (J) Representative images of IHC staining for DGKB, DGAT1, and cleaved caspase 3 and IF images of TIP47 in orthotopic U87MG-RR and BCL21-HP03 xenograft tissues from control mice or mice treated with IR or IR together with cladribine or TMZ. Scale bars, 50 mm (upper) or 20 mm (lower). 
(K) Kaplan-Meier survival curve of mice with orthotopic U87MG-RR and BCL21-HP03 xenograft without treatment (control) or treated with IR or IR together with cladribine or TMZ (IR treatments started 7 days after xenograft). Statistical analysis was performed with one-way ANOVA plus a Tukey's multiple comparisons test for (B)-(E). In addition, one-way ANOVA plus a Tukey's multiple comparisons test for (H) compared with luminescence values of IR alone at 28 days after irradiation, and Log rank (Mantel-Cox) test for (K). ns, non-significant; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. increased activity of MnSOD improves cell viability after IR and induces radioresistance. 29 Furthermore, treating glioma cells with IL6 has been shown to induce radioresistance by reducing mitochondrial ROS. 30 Thus, our RNA-sequencing data support the fact that radioresistant GBM cells prefer to store FAs rather than use them for energy production to minimize damage from mitochondrial ROS. Cladribine is a synthetic purine nucleoside analogue that is approved by the FDA to treat hairy cell leukemia and B cell chronic lymphocytic leukemia as well as multiple sclerosis, supporting that cladribine is safe for CNS functioning. Cladribine is phosphorylated by deoxycytidine kinase (DCK) to cladribine triphosphate instead of being broken down by adenosine deaminase. Cladribine triphosphate is then incorporated into and accumulates in DNA, resulting in an imbalanced triphosphorylated deoxynucleotide (dNTP) pools and subsequent apoptosis. Therefore, the more a cell accumulates intracellular cladribine triphosphate, the more vulnerable it is to cladribine-mediated apoptosis. The accumulation of cladribine triphosphate depends on the ratio of DCK to 5 0 -nucleotidase, which turns cladribine triphosphate back to the inactive cladribine. The DCK to 5 0 -nucleotidase ratio is high in immune cells but is considerably low in other cell types, including glial cells. 31 Cladribine triphosphate is therefore hardly accumulated in GBM cells. Indeed, our cell viability data show that the half maximal inhibitory concentration (IC 50 ) of cladribine is significantly higher in U87MG-RR than that in THP-1 cells ( Figure S7A). However, our results show that cladribine triggers ROS-induced apoptosis by regulating the expression of DGKB and DGAT1, indicating that unphosphorylated cladribine may have a role in GBM cells independent of its conventional function. In addition, as an FDAapproved oral drug, side effects of cladribine are quite manageable and it has been well evaluated for pharmacokinetics and efficacy in previous clinical trials. 31,32 According to Cladribine Tablets Treating Multiple Sclerosis Orally (CLARITY) and CLARITY Extension, approximately 90% of actively treated patients completed each study, and there were relatively few study discontinuations due to adverse effects. 33 Moreover, the bioavailability of oral cladribine is 37%-51% compared with subcutaneous administration and the terminal half-life of cladribine is 5.7-19.7 h. 34 Furthermore, cladribine effectively penetrates the BBB, and approximately 25% of the plasma concentration of cladribine reaches the cerebrospinal fluid (CSF). 34 The effects of cladribine are sustained for more than 10 months following the last dose of both parenteral cladribine and oral cladribine tablets. 31 Therefore, the pharmacokinetics of cladribine may not limit its application to GBM therapy. 
Overall, our study suggests that radioresistant GBM cells efficiently prevent mitochondrial lipotoxicity by downregulating DGKB and upregulating DGAT1, and provides a strong basis to regulate them for clinical application against GBM. DGKB has also been reported to play a major role in the small intestine, 35 and our results show that the expression of DGAT1 is also upregulated by IR in other cancers such as pancreas, lung, and gallbladder, so it will be important to determine whether targeting either or both enzymes affects other cancer types ( Figure S4C). Considering our preclinical data from xenograft mouse models using the established radioresistant GBM cells and MGMT-positive GSCs, regulating DGKB and DGAT1 or repurposing cladribine for GBM treatment may overcome resistance to conventional therapies. Limitations of the study Despite the considerable radiosensitizing effect of cladribine, the mechanism of its regulatory effect on DGKB and DGAT1 is still unclear. Thus, further studies are needed to identify the mechanism that can translate our findings to the clinic for the treatment of GBM. Nevertheless, most drugs developed for GBM treatment failed over the past 20 years in clinical trials due to inefficient drug delivery and severe side effects even though the mechanisms have been elucidated. Our goal was to discover a radiosensitizing drug that has manageable side effects and the ability to penetrate the BBB. We believe that its clinical trials to treat GBM may demonstrate a long-term benefit and therapeutic value for GBM patients. Another limitation of this study is that we only used GBM xenograft mouse models but not syngeneic GBM mouse models. Even though xenograft mouse models are more generally used in GBM studies than syngeneic mouse models, 36 the effect of the immune system on radioresistance can be neglected when immunodeficient mice are used. Notably, because cladribine preferentially targets B and T lymphocytes, there is a need to investigate the radiosensitizing effect of cladribine using immunocompetent syngeneic GBM mouse models. Touching on this issue, it will be possible not only to verify the efficacy of cladribine as a radiosensitizer but also to elucidate the relationship between radioresistance and the immune system in GBM. STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: RESOURCE AVAILABILITY Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Dr. BuHyun Youn (bhyoun72@pusan.ac.kr). Materials availability This study did not generate new reagents. Data and code availability d RNA sequencing data has been deposited at NIH Gene Expression Omnibus (GEO): GSE207002 and are publicly available as of the date of publication. d All original code is available in this paper's supplemental information Data S1. d Any additional information required to reanalyze the data reported in this work paper is available from the lead contact upon request. Establishment of radioresistant cells To acquire radioresistant GBM cells, in vivo selection was processed 3 times in total. First, 2 3 10 6 U87MG cells-expressing luciferase were subcutaneously implanted in six-week-old male BALB/c athymic nude mice and exposed to IR (2 Gy/day for 5 days, F1). Implanted U87MG cells were monitored once a week by mouse bioluminescence imaging until 35 days after IR. 
Then, implanted U87MG cells were extracted from F1 mice and cultured in DMEM with 10% FBS and 1% antibiotics. Next, 2 3 10 6 F1-derived U87MG cells were subcutaneously implanted in six-week-old male BALB/c athymic nude mice and exposed to IR (2 Gy/day for 5 days, F2). Like U87MG cells of the F1 mouse, cells were monitored once a week by mouse bioluminescence imaging until the mice were dead. Then, implanted U87MG cells were extracted from F2 mice and cultured in DMEM with 10% FBS and 1% antibiotics. Lastly, 5 3 10 5 F2-derived U87MG cells were orthotopically implanted in six-week-old male BALB/c athymic nude mice and exposed to IR (2 Gy/day for 5 days). Cells were monitored once a week by mouse bioluminescence imaging until the mice were dead. Then, implanted U87MG cells were extracted from mice, cultured in DMEM with 10% FBS and 1% antibiotics, and referred to as U87MG-RR. Additionally, U87MG-RR cells were validated by being compared with parental U87MG cells in xenograft, tissue imaging, stemness marker, and limited dilution assay. Animal care protocol and orthotopic xenograft mouse model Six-week-old male BALB/c athymic nude mice were used for generating xenograft mouse model following the previous study. 4 All experiments were performed in accordance with the provisions of the NIH Guide for the Care and Use of Laboratory Animals. The mice were housed individually or in groups of up to five in sterile cages, and were maintained in animal care facilities in a temperature regulated room (23 ± 1 C) with a 12 h light-dark cycle. All animals were fed water and standard mouse chow ad libitum. U87MG-RRluciferase expressing cells and HP03-luciferase expressing cells were harvested and suspended at a density of 1 3 10 5 cells per mL in serum free media. Then, 5 3 10 5 cells were injected into the mice brains using 10 mL syringe with stereotactic surgery. 7 days after the injection, the brain of injected mice were irradiated with 2 Gy daily for five days at a dose rate of 600 MU/min using a TrueBeam STx. Xenograft growth was monitored by bioluminescent imaging using VISQUE Invivo Smart LF. Mice were sacrificed upon manifestation of neurological symptoms. Cell lines, cell culture, and irradiation U87MG, A172, and HepG2 cell lines were obtained from the Korea Cell Line Bank (KCLB, Seoul, Republic of Korea). The phenotypes of these cell lines have been authenticated by the KCLB. All cells were free of mycoplasma contamination and were authenticated by short tandem repeat profiling within the past 12 months. U87MG-luciferase expressing cells were transferred via a material transfer agreement from Severance Hospital (Yonsei University, Seoul, Republic of Korea). The cells were grown in DMEM supplemented with 10% FBS, penicillin (100 U/ml), and streptomycin (100 mg/mL) at 37 C in a humidified atmosphere of 95% air and 5% CO 2 . The cells were exposed to a single dose of X-ray using an X-ray generator M-150WE at a dose rate of 0.38 Gy/min. The radiation was delivered by using an 8 mm-diameter collimator. RNA-sequencing To obtain RNA samples from U87MG and U87MG-RR, U87MG and U87MG-RR cell lines were incubated upon 1 3 10 6 cells. RNA samples from U87MG and U87MG-RR were obtained through RNA extraction kit and 3 biological samples of each cell lines were prepared. RNA sequencing was performed through Illumina sequencing platform by ebiogen (Seoul, Republic of Korea). All RNA sequencing results of samples were assessed by triplicate. 
For quantification, Fisher's exact test was used to compare the U87MG and U87MG-RR groups. Genes were classified depending on significance (p < 0.05) and fold change (>1.8 or <0.55). Analysis of GBM poor prognosis-related genes To obtain GBM poor prognosis-related genes, we analyzed the gene expression data of GBM patients using the R language. First, we installed and loaded the packages 'ggplot2', 'dplyr', 'TCGAbiolinks', 'GEOquery', 'SummarizedExperiment', 'biomaRt', 'stringr', and 'tidyverse'. Next, we used 'TCGAbiolinks' to download the TCGA-GBM cohort. Then, the data were queried with 'GEOquery' by data category, data type, sample type, experimental strategy, and workflow type. We performed gene naming and tubulin normalization. Because our focus within the TCGA-GBM cohort was poor prognosis in primary tumors, we extracted clinical data by specifying the data category as 'clinical' and the sample type as 'primary tumor'. Next, to obtain prognosis information, we ranked the data by days to death and divided patients into quartiles (top 25% or bottom 25%) or halves (top 50% or bottom 50%). Finally, we selected statistically significant genes (both quartile p value <0.05 and half p value <0.05). The code for analyzing GBM poor prognosis-related genes has been uploaded as supplemental information (related to Data S1). Quantitative real-time PCR For mRNA expression assessment, qRT-PCR was performed following the previous study. 41 Briefly, RNA was isolated with TRIzol following the manufacturer's instructions, and real-time qRT-PCR was performed using an Applied Biosystems StepOne Real-Time PCR System. Reactions were run for 40 cycles of 95 °C for 15 s and 60 °C for 1 min, followed by thermal denaturation. The expression of each gene relative to GAPDH mRNA was determined using the 2^(−ΔΔCt) method. The sequences of the primers used are listed in Table S2. Each sample was assessed in triplicate. Western blots Protein expression was assessed as previously described. 42 Briefly, whole cell lysates (WCL) were prepared using radioimmunoprecipitation assay (RIPA) lysis buffer (50 mM Tris, pH 7.4, 150 mM NaCl, 1% Triton X-100, 25 mM NaF, 1 mM dithiothreitol, and 20 mM ethylene glycol tetraacetic acid supplemented with protease inhibitors), and the protein concentrations were determined using a Bio-Rad protein assay kit (Bio-Rad Laboratories, Hercules, CA, USA). Protein samples were subjected to SDS-PAGE, transferred to a nitrocellulose membrane, and then blocked with 5% BSA in tris-buffered saline with Tween 20 (10 mM Tris, 100 mM NaCl, and 0.1% Tween 20). The membranes were then probed using the specific primary antibodies and peroxidase-conjugated secondary antibodies from Thermo Fisher Scientific. For all western immunoblot experiments, blots were imaged using an ECL detection system (Roche Applied Science, Indianapolis, IN, USA) with an iBright FL1000 Imaging System from Thermo Fisher Scientific. Lentiviral transduction HEK-293 cells were seeded at 1 × 10^6 cells in DMEM with FBS in 150-mm plates 1 day before lentiviral transfection. When cell confluency was approximately 50%, the transfection mixture (Opti-MEM with pLKO.1, psPAX2, and pMD2.G) was added using TransIT-LT1 reagent. After 18 h, the medium was replaced with DMEM with FBS and antibiotics. Then, 6 h later, the medium was harvested and centrifuged at 1,500 g for 15 min. After centrifugation, the supernatant was transferred to a fresh tube, Virus Precipitation Solution was added, and the mixture was incubated at 4 °C for 12 h.
To obtain lentiviral particles, supernatant with solution was centrifugated at 1,500 g for 5 min, and the pellet was diluted at 1/10-1/100 with PBS. The virus titer was quantified by real-time PCR by using Ultra-Rapid Lentiviral Global Titering Kit. For lentiviral transduction, target cells were seeded at appropriate plates and incubated up to 50 to 70% confluency. Then, transduction solution, culture medium combined with Trans-Dux to a 1X final concentration, was added to cells as for the desired MOI. 3 days after transduction, cells were selected by puromycin. Cell viability assay and colony-forming assay For cell viability assay, cells were seeded at 10,000 cells per well in 96-well plates 1 day before IR, or IR combined with DGKB shRNA transfection or DGKB plasmid transfection or DGKB mutant transfection for 48 h. Cell viability was determined using CellTiter-Glo Luminescent Viability Assay kit. Colony-forming assay was performed following the previous study. 5 Briefly, the cells were seeded at a density of 600 cells in 35-mm culture dishes. After 24 h, the cells were treated with IR, or IR combined with DGKB knockdown or overexpression. 14 days after seeding, the cells were fixed with 10% methanol and 10% acetic acid, which were then stained with 1% crystal violet. Colonies containing more than 50 cells were identified using densitometry software and scored as survivors. Mouse bioluminescence imaging Mice implanted with U87MG-RR and BCL21-HP03 cells expressing luciferase were injected intraperitoneally with a Luciferin solution (3 mg/mL in PBS, dose of 15 mg/kg) by an intraperitoneal route. After 10 min, mice were anesthetized by inhalational way using isoflurane and bioluminescence images were acquired using VISQUE Invivo Smart LF. Hematoxylin and Eosin (H&E) and Immunohistochemistry (IHC) staining H&E staining and IHC were performed as previously described. 5 The brain samples were embedded in paraffin blocks, and the sections were prepared by HistoCore AutoCut. Next, the sections were cut into 4 mm sections and stained with H&E, following standard procedures. For IHC, sections were treated with 3% hydrogen peroxide/methanol and then with 0.25% pepsin to retrieve antigens. Next, samples were incubated in blocking solution, after which they were incubated at 4 C overnight with the specific primary antibodies diluted in the antibody diluent. The sections were subsequently washed with tris-buffered saline with 0.1% Tween 20 and then incubated with polymer-horseradish peroxidase conjugated secondary antibody. A 3,3 0 -diaminobenzidine substrate chromogen system was utilized to detect antibody binding. Stained sections were observed under an Olympus IX71 inverted microscope. The quantification of IHC was processed by ImageJ. IHC images were loaded into ImageJ, and a color threshold was adjusted. To measure the total area, the hue and saturation values were adjusted to the maximum, the brightness was adjusted to the point where all tissues were selected, and the selected area (total area) was measured. To measure the IHC stained area, the selected area (IHC stained area) was measured by adjusting the color value until only the IHC stained area was selected without changing the brightness. To calculate the IHC stained area, the IHC stained area value was divided by the total area value and multiplied by 100. 
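The ImageJ-based percent-area computation described above can also be reproduced programmatically. The sketch below is a minimal NumPy version in which the fixed RGB thresholds are illustrative stand-ins for the interactively adjusted ImageJ color threshold, and `image` is assumed to be an RGB array already loaded into memory; it is not the exact procedure used in the study.

```python
# Minimal sketch of the percent-stained-area computation described above.
# Threshold values are illustrative stand-ins for the interactive ImageJ threshold;
# `image` is assumed to be an RGB array of shape (H, W, 3) with values in [0, 255].
import numpy as np

def percent_stained_area(image, tissue_brightness_max=230, dab_rgb_max=(200, 160, 140)):
    """Return 100 * (stained area) / (total tissue area)."""
    brightness = image.mean(axis=2)
    tissue_mask = brightness < tissue_brightness_max          # pixels darker than background
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    stained_mask = (r < dab_rgb_max[0]) & (g < dab_rgb_max[1]) & (b < dab_rgb_max[2]) & tissue_mask
    return 100.0 * stained_mask.sum() / max(tissue_mask.sum(), 1)

# toy example: a synthetic 100x100 image with a dark "stained" square on light tissue
image = np.full((100, 100, 3), 210, dtype=np.uint8)   # light tissue background
image[20:40, 20:40] = (120, 80, 60)                   # brown-ish stained region
print(f"stained area: {percent_stained_area(image):.1f}% of tissue")
```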
Enzyme activity assay For DGKB enzymatic activity quantification, cells were seeded at 10,000 cells per well in 96-well plates 1 day before IR, or IR combined with the DGKB mutant, for 48 h. DGKB activity was assessed with a DAG Kinase Activity Assay Kit following the manufacturer's
2023-01-06T22:12:26.136Z
2022-12-22T00:00:00.000
{ "year": 2023, "sha1": "c8882b06a1bbeeb6c977e8889f4461117f0b17b0", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S266637912200444X/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e39472e1efd73a749fd24d3fac8e924b4c9b3e1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231786730
pes2o/s2orc
v3-fos-license
Technoeconomic Supplement of P2G Clusters with Hydrogen Pipeline for Coordinated Renewable Energy and HVDC Systems Under the downward tendency of prices of renewable energy generators and upward trend of hydrogen demand, this paper studies the technoeconomic supplement of P2G clusters with hydrogen pipeline for HVDC to jointly consume renewable energy. First, the planning and operation constraints of large-capacity P2G clusters is established. On this basis, the multistage coordinated planning model of renewable energy, HVDCs, P2Gs and hydrogen pipelines is proposed considering both variability and uncertainty, rendering a distributionally robust chance-constrained (DRCC) program. Then this model is applied in the case study based on the real Inner Mongolia-Shandong system. Compared with energy transmission via HVDC only, P2G can provide operation supplement with its operational flexibility and long term economic supplement with increasing demand in high-valued transportation sector, which stimulates an extra 24 GW renewable energy exploration. Sensitivity analysis for both technical and economic factors further verifies the advantages of P2G in the presence of high variability due to renewable energy and downward tendency of prices of renewable energy generators. However, since the additional levelized cost of the P2G (0.04 RMB/kWh) is approximately twice the HVDC (0.02 RMB/kWh), P2G is more sensitive to uncertainty from both renewable energy and hydrogen demand. Background and Motivation The sustainable exploration and utilization of renewable energy has been a worldwide trend. Due to the worldwide situation that there is a spatial discrepancy between energy sources and demand such as U. S. and China [1], [2], high-voltage direct current (HVDC) transmission lines are commonly deployed for a long-distance electricity delivery [3].
However, this mode would face both technical and economic issues [4]: 1) with the penetration of volatile renewable energy (RE) increases, more and more flexible resources are required for HVDC transmission to match the source and demand profiles and ensure its utilization rate; 2) with a sharp decreasing trend in the investment cost of wind and solar facilities in the future, the gap on unit electricity production cost between source and demand regions is narrowing, which means the economy of electricity transmission via HVDC is getting worse. P2G (power-to-gas) technology is another promising method to consume a large amount of renewable energy. The core of P2G is the energy conversion from electricity to hydrogen, and then hydrogen can be applied in chemical, transportation and heating industries. Substituting gray hydrogen from fossil fuels with green hydrogen from renewable energy-based sources, P2G can also help the decarbonization in downstream sectors of hydrogen. Compared to HVDC facilities: 1) P2G facilities are flexible resources that can cooperate with HVDC to follow the variability of renewable energy, furthermore, followed hydrogen pipeline (HP) can also provide enough buffer; 2) the small capacity and short lifetime of P2G facilities can reduce investment risks and respond to price changes more rapidly. The above advantages have been verified by many worldwide research [5][6][7] and demonstration projects [8]. Therefore, the combination of HVDC and P2G is a feasible solution for future renewable energy utilization and energy system decarbonization. Several coordinated studies have been performed on HVDC and P2G. The process should begin with the utilization of offshore wind energy, and an economic model has been established to calculate the cost of both technologies [9], [10]. [11] considers the expansion of both transmission networks and P2Gs; however, this expansion is determined in different planning stages with carbon-oriented objectives, which makes it difficult to reflect the technoeconomic supplement of P2G. In this paper, we discuss the technoeconomic supplement of P2G with HP for HVDC to explore and utilize renewable energy in the future. We try to answer the following questions of: 1) planning roadmaps of renewable energy, including both wind and solar energy, HVDCs, HPs and P2Gs in future decades; 2) operating combinations of centralized electricity transmission via HVDC and distributed P2G; and 3) the technical and economic advantages of P2G technology in future renewable energy systems. On this basis, a coordinated renewable energy, transmission (including both HVDC and HP) and P2G planning model is required. Literature Review Since there is little existing research on coordinated generation, transmission and P2G planning, the literature review is divided into two parts: research on coordinated generation and transmission planning and research on P2G planning. Extensive research has been performed on coordinated generation and transmission planning. Most research studies consider only single-stage planning and investment at the beginning of the planning horizon [12][13][14][15]. Single-stage planning is the most reasonable approach when dealing with short-time horizons where decisions are not going to be revisited. However, for longer time horizons, multistage planning is essential to closely reproduce the reality of the problem. 
Multistage planning can consider the trends in the investment costs and the scale of the demand in the long time planning horizon [16], [17]. In addition, the consideration of the uncertainty of renewable energy is an important part of planning models, especially in this research, which studies the sensitivity of HVDC and P2G to this technical factor. Compared to stochastic programming and robust optimization, distributionally robust chance-constrained (DRCC) optimization [18][19][20] is a kind of uncertainty modeling method that is more suitable for this research: 1) only limited statistical parameters are required which can be obtained from the evaluation results; 2) the conservativeness of the chance constraints is adjustable which is suitable for sensitivity analysis. DRCC optimization has been applied in generation expansion planning [18] and network planning [19], and its effectiveness and advantages have been verified. Compared to research on coordinated generation and transmission planning, in the literature on P2G planning, many opportunities exist for further modeling improvements. Existing research considers P2G as a kind of energy conversion facility in planning level and describes its model only with the energy conversion efficiency [11], [21][22][23][24]. On the one hand, in large-scale application in power systems, P2G should be in the form of clusters rather than a single facility, on the other hand, considering the variability of renewable generation, the start-up and shut-down actions of P2G facilities should be considered, which means the actual operation of P2G clusters should be similar to unit commitment problems [18], [25] of traditional generators. However, a gap remains in the existing studies to describe the unit commitment of P2G at the cluster level. Above all, due to the lack of satisfactory P2G modeling, there is little research on coordinated HVDC and P2G planning; therefore, the supplement of P2G for HVDC, especially from a technical perspective, is less studied. Based on the existing gap, in this paper, we first propose the complete planning and operation constraints of a P2G cluster. On this basis, the multistage coordinated renewable energy, transmission and P2G planning model is established and applied in the Inner Mongolia-Shandong case in China. The main contributions of this paper are threefold: 1) The complete planning and operation constraints of a P2G cluster considering retirement and unit commitment are first proposed which is essential for research on the large-scale application in renewable energy systems. Furthermore, the "equal-split" rule which determines the power distribution among facilities is verified and applied in model simplification at the cluster level. 2) A multistage coordinated planning model of renewable energy, transmission (HVDC and HP), and P2G is then established which considers multiple energy sectors including electricity, transportation and industry at the same time. In particular, typical characteristics of renewable energy systems are fully considered in the model: the variability of renewable energy is modeled with different scenarios, and uncertainty is modeled as a DRCC program. 3) The proposed coordinated planning model is applied in actual Inner Mongolia-Shandong case studies. The technical and economic advantages of P2G as well as its supplement for HVDC are verified with comparative cases. 
Furthermore, sensitivity analysis of technical factors (variability and uncertainty) and economic factors (prices and demand) further verifies the importance and limitations of P2G. The remainder of the paper is organized as follows: Section II describes the complete planning and operation constraints of the P2G cluster. Section III formulates the overall multistage coordinated planning model of renewable energy, transmission and P2G. In Section IV, case studies based on an industrial system of Inner Mongolia-Shandong Province are presented. The summary and conclusions follow in Section V. II. MODELING OF P2G CLUSTER In this section, the complete P2G planning and operation constraints at the cluster level, considering retirement and unit commitment operation, are established. Fig. 1 shows the illustration of a P2G cluster. A single P2G facility can attain a maximum of 10 MW (alkaline) at the current stage. For future large-scale applications, we assume a P2G farm consisting of a fixed number of facilities (e.g., 100) as the minimum planning unit. On this basis, a P2G cluster with a larger capacity is formed with several farms. However, the minimum operational unit is still at the facility level. First, the operation statuses of P2G facilities are introduced, and then the planning and operation constraints at the cluster level are established. B. Three Statuses of P2G Facilities Most existing studies do not consider the different working statuses of the P2G. Considering the variability of renewable energy, a P2G facility can be switched off to save the stand-by power P^{M,min} when there is not enough renewable energy available and switched back on later if needed. According to our previous research [26], there are three statuses in total, as shown in Fig. 2. 1) ON status When a P2G facility is in ON status, it is an energy conversion unit with adjustable power input and hydrogen output. Based on our previous work [5], the operational flexibility comes from the adjustable current I and the temperature T. Different working points (I, T) correspond to different power inputs and hydrogen outputs. Furthermore, due to the overall thermal capacity of P2G, a constraint on the ramping rate exists between two periods. However, when we focus on the operation of the P2G facility from the perspective of power systems, we do not care about detailed operational parameters such as I and T; instead, we describe the operation directly in terms of the power input and the hydrogen output. For simpler expressions, we omit the index explanations in the following equations in this paper. 2) BOOTING status The shutdown of P2G can be very rapid from the perspective of hydrogen production, and the P2G will be stopped instantaneously once the DC circuit is opened. However, the startup takes some time. No hydrogen can be produced before the stack is heated to the acceptable temperature. Therefore, there is a BOOTING status before the P2G is completely booted, and during this process a constant power P^{M,boot} is required for auxiliary facilities, with no hydrogen production. C. Planning and Operation Constraints of a P2G Cluster a. Farm-based Planning Constraints of P2G Cluster At the planning level, we consider a P2G farm as the minimum planning unit; the planning variable is therefore defined at the farm level. As shown in Fig. 2, for a P2G cluster with a unified P2G facility type, the "equal-split" rule is verified, which means that the power of each P2G facility in ON status should be an equal share of the total power of the P2G cluster. The detailed proof of the "equal-split" rule is illustrated in the Appendix.
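To make the cluster-level operation concrete, below is a minimal Python sketch of how a P2G cluster obeying the "equal-split" rule could be simulated. The class name, parameter values and the quadratic power-to-hydrogen curve are illustrative assumptions standing in for the concave conversion characteristic and the cluster constraints, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class P2GCluster:
    """Toy cluster of identical P2G facilities (all parameters are illustrative)."""
    n_on: int              # facilities currently in ON status
    p_max: float = 10.0    # MW, maximum power of one facility
    p_min: float = 1.0     # MW, minimum (stand-by) power of one facility
    p_boot: float = 0.5    # MW, auxiliary power during BOOTING, no hydrogen output
    a: float = -0.002      # concave term of the assumed conversion curve
    b: float = 0.02        # linear term of the assumed conversion curve

    def hydrogen_of_facility(self, p: float) -> float:
        """Concave power-to-hydrogen curve m = a*p^2 + b*p for one ON facility."""
        return self.a * p ** 2 + self.b * p

    def dispatch(self, p_cluster: float, n_booting: int = 0) -> float:
        """Split the cluster power equally among ON facilities and return the
        total hydrogen output; BOOTING facilities only consume p_boot."""
        p_available = p_cluster - n_booting * self.p_boot
        if self.n_on == 0 or p_available <= 0:
            return 0.0
        # equal-split rule: identical concave curves -> equal power maximizes output
        p_each = p_available / self.n_on
        p_each = min(max(p_each, self.p_min), self.p_max)  # clamp to operating range
        return self.n_on * self.hydrogen_of_facility(p_each)

cluster = P2GCluster(n_on=100)
print(cluster.dispatch(p_cluster=600.0, n_booting=5))  # hydrogen output this period
```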
Based on the "equal-split" rule, at the cluster level, M where (4) Above all, (1)- (7) consist of the complete planning and operation constraints of the P2G cluster. In the following sections, P2G is the abbreviation of P2G cluster without special instructions. III. MULTISTAGE COORDINATED PLANNING MODEL OF RENEWABLE ENERGY, TRANSMISSION AND P2G This section introduces the multistage coordinated planning model of renewable energy, transmission and P2G. First, the description and assumption of the whole model are illustrated, and then planning constraints on renewable energy, transmission and P2G are explained. The uncertainty of renewable energy output as well as overall operation constraints are modeled with the DRCC. Finally, the objective function is defined. A. Model Description and Assumption As described in the Introduction, we illustrate the problem and the configuration of the coordinated planning model, as shown in Fig. 3. In following expressions, i represents source region and j represents demand region. The coordinated planning model is based on the following assumptions [28], [29]: 1) Energy transmission loss via HVDC and energy consumption by compressors of HP are considered in the form of operation cost. 2) The recovery of the residual value of facilities is not considered. During the whole planning horizon, only the replacement of P2G farms is considered since the lifetimes of REs, HVDCs and HPs are generally no less than 30 years. 3) All the facilities are constructed and put into production in the first year in each planning epoch. B. Coordinated planning model a. Planning Constraints of Renewable Energy In source region i, there are upper limits on the new planning capacity of wind turbines (8), photovoltaics (9), and their sum (10) in each planning epoch y and the whole planning horizon: (15) shows that in source region i, the renewable generation of wind/solar power can be divided into two parts: electricity transmission via HVDC and hydrogen production via P2G. e. Operation Constraints of Transmission with DRCC For HVDC transmission lines, power flow is the sum of power for electricity transmission from both wind and solar as in (16). HVDC WT,E PV,E , , , , In this work, the HVDC transmission line is modeled as a link that carries active power within its power limits as a function of possible investments [29]. Hence, the constraint to limit the power flow in HVDC corridors in terms of the investment variables ,, ij y l σ is represented in (17 Similarly, the hydrogen input rate of pipelines in source side should not exceed the online maximum flow rate which is determined by compressors, as shown in (18) Considering the buffer from line packing [30], the operation constraint of the pipeline is shown in (19). HP HP D,U D,U HP , , , f. Operation Constraints of P2G with DRCC The P2G power is the sum of the power for hydrogen production from both wind and solar energy. independent of short-term uncertainty, i.e., these decisions are made before the time that the uncertainty is realized. Therefore, the operation constraint (7) on these variables is the same as the equations in Section II. 
However, other operation variables are required to respond to uncertainty, and the operation model of P2G with DRCC is shown in (22)-(24), where (22) corresponds to the power constraint (4), (23) corresponds to the ramping constraint (5), and (24) corresponds to the energy conversion constraint (6). Since the concavity is not strong (a^M is small), for further model simplification the linear relationship is considered. g. Operation Constraints of Demand with DRCC The electricity demand on the demand side can be described by typical load profiles and is supplied by HVDC, as in (25). For the hydrogen demand on both the source side and the demand side, considering the differences of hydrogen requirements in different hydrogen sectors, annual upper limits in each sector are required, as in (26)-(28), where (26) describes the hydrogen balance between supply and demand, and U represents the hydrogen downstream sectors. Here, we consider three main sectors: chemical (C), transportation (T) and heating (H). According to the prediction of future requirements, there are upper limits on the requirements in each sector in the source (S) and demand (D) regions, which are described by (27)-(28). h. Model Simplification To further mitigate the complexity of the proposed model, the continuous operation variables are simplified with linear decision rules [18]. In this way, all the continuous operation variables in (15)-(28) can be described as affine functions of the uncertain renewable output. With linear decision rules and Cantelli's inequality, the complete operation model can be reformulated into a MISOCP form, and the detailed equations are in the Appendix. i. Objective Function From the perspective of the government, which is concerned with how to utilize renewable energy to help the decarbonization of energy sectors, the overall economy of coordinated renewable energy, transmission and P2G planning should be analyzed; the objective function is therefore defined in (31). IV. CASE STUDIES In this section, case studies are performed based on the actual industrial system of Inner Mongolia-Shandong Province. First, the proposed coordinated planning model is applied, and the supplement of P2G for HVDC is verified from the perspective of both technology and economy. Furthermore, sensitivity analysis on technical factors (variability and uncertainty) and economic factors (prices and demand) further verifies the advantages and limitations of P2G. The proposed coordinated planning model is a MISOCP optimization problem that is coded in MATLAB and solved with CPLEX 12.6.
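For completeness, the moment-based reformulation invoked in the model simplification above can be sketched in generic notation. This is the standard Cantelli-based distributionally robust chance-constraint result; a(x), b(x), μ and Σ below are generic placeholders rather than the paper's symbols.

```latex
% Generic single chance constraint with affine dependence on the uncertainty \xi
% (mean \mu, covariance \Sigma); the equivalence follows from Cantelli's inequality.
\begin{align*}
  &\inf_{\mathbb{P}\,:\;\mathbb{E}[\xi]=\mu,\ \mathrm{Cov}[\xi]=\Sigma}
     \mathbb{P}\!\left\{ a(x)^{\top}\xi \le b(x) \right\} \;\ge\; 1-\varepsilon \\[2pt]
  &\Longleftrightarrow\quad
     a(x)^{\top}\mu \;+\; \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,
     \bigl\| \Sigma^{1/2} a(x) \bigr\|_{2} \;\le\; b(x).
\end{align*}
```

Substituting the linear decision rules makes a(x) and b(x) affine in the planning and operation variables, so each chance constraint becomes a second-order cone constraint and, together with the integer investment variables, the overall problem becomes a MISOCP.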
Other economic parameters in (31) and hydrogen demand parameters in (27)-(28), obtained from research on realistic data of Inner Mongolia and Shandong, are shown in Table I and Fig. 4. We assume that the prices of facilities decrease linearly over the planning horizon. In Table II and the following discussion, the letter E represents energy for electricity transmission, and the letter H represents energy for hydrogen production. Table II shows that following the planning orders of HVDCs and P2Gs, the ratio of E to H gradually decreases, and in the last epoch E only accounts for 23% of the total renewable energy. The ratio of E to H also influences the annual utilization hours (AUH) of HVDC, P2G and HP, as shown in Table II. In summary, the planning of P2Gs and HPs occurs later than that of HVDC, and the ratio of E to H gradually decreases, due mainly to the declining investment cost of renewable facilities and the increasing hydrogen demand in the transportation sector. Fig. 7 shows the operation profiles of REs. For renewable energy, there are significant differences between different scenarios, especially for wind. Electricity transmission via HVDC requires the balance of source and demand; therefore, P2G with HP provides the necessary operation supplement and buffer for HVDC to consume intraday and interday fluctuations, since the transmission and utilization of hydrogen are bufferable. The intraday operation results of the P2G cluster are shown in Fig. 8 below. c. Economic Analysis Over the whole planning horizon, the levelized cost of electricity (LCOE), levelized cost of HVDC (LCOHVDC), levelized cost of P2G (LCOP2G), levelized cost of HP (LCOHP), levelized profit of electricity (LPOE) and levelized profit of hydrogen (LPOH) are calculated based on [28]. The results are shown in Fig. 9, where P2G(S/D) represents the results for hydrogen demand in the source/demand region, respectively. The levelized cost of hydrogen (LCOH) is approximately 0.30 RMB/kWh (20 RMB/kg); therefore, the profit of the P2G depends mainly on hydrogen application in the transportation sector (30 RMB/kg). d. Comparison with HVDC Only We consider a benchmark case of a renewable energy system without P2G, in which energy can only be consumed via electricity transmission. The planning, operation and economic results are shown in Table III. Compared to the benchmark case, the advantages of P2G can be quantitatively concluded to be: 1) From the perspective of planning, an extra 24 GW of renewable energy can be explored economically. 2) From the perspective of operation, based on the operational flexibility of the P2G cluster, Fig. 8 verifies that P2G is a kind of flexible resource that can cooperate well with the power grid for extra (above 34%) renewable energy consumption. 3) From the perspective of economy, since the economy of HVDC is becoming worse with the decline of facility investment cost, HVDC demonstrates advantages in the short term, but with the hydrogen required in the transportation sector increasing, P2G in fact occupies advantages in the long term, which is an exact complement to HVDC. With more exploration of renewable energy at a lower cost, the present value of LCOE decreases from 0.33 RMB/kWh to 0.26 RMB/kWh, and the total profit over the whole planning horizon increases significantly. C. Sensitivity Analysis In this subsection, sensitivity analysis of technical factors and economic factors is quantitatively studied to verify the advantages and limitations of P2Gs. The considered factors are shown in detail in Table IV.
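Before turning to the sensitivity results, the following is a minimal sketch of how levelized-cost figures of the kind quoted above can be computed: discounted lifetime cost divided by discounted lifetime energy output. The capital cost, O&M cost, utilization and discount rate below are illustrative assumptions, not the parameters of Table I or the calculation of [28].

```python
# Levelized cost = (investment + discounted O&M) / (discounted energy output).
# All numbers are illustrative assumptions, not the parameters used in the paper.
def levelized_cost(capex, annual_opex, annual_energy_kwh, lifetime_years, discount_rate):
    """Return the levelized cost per kWh of a single facility."""
    cost = capex
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1.0 + discount_rate) ** year
        cost += annual_opex / factor
        energy += annual_energy_kwh / factor
    return cost / energy

# e.g. a 1 MW electrolyzer with ~4000 utilization hours per year (assumed values)
lc = levelized_cost(capex=7_000_000,          # RMB
                    annual_opex=140_000,       # RMB per year
                    annual_energy_kwh=4_000_000,
                    lifetime_years=15,
                    discount_rate=0.06)
print(f"levelized cost ≈ {lc:.3f} RMB/kWh")
```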
First, the results of the sensitivity analysis are listed in Tables V, VI and VIII, and then factors that are beneficial/unfavorable to P2Gs are identified and discussed. Table V reveals that with the number of scenarios S increasing which represents the stronger variability of renewable energy, following rules can be seen: 1) In planning level, the number of new P2Gs increases, which means that the planning of P2Gs moves up. 2) In operation level, the percentage of energy for electricity transmission (E) decreases mainly due to the technical constraint from the stronger imbalance of source and demand profiles. Therefore compared to HVDC, P2Gs are more suitable for following renewable energy output with strong variability (such as wind power). b. Uncertainty Table VI shows that with the consideration of the uncertainty of renewable energy, following rules can be seen: 1) In planning level, the planning of HP and P2G are more conservative, HP would be no longer planned. 2) In operation level, the percentage of energy for electricity transmission (E) significantly increases. It is because compared to HVDC with the large capacity, P2G with small capacity is more sensitive to the uncertainty, besides additional levelized cost of P2G is higher than that of HVDC, therefore in the worst case overinvestment would lead to bad economy. Furthermore, from Table VI, the smaller ε is (the more conservative of the DRCC model), the less capacity of P2Gs, and the more energy is transported via HVDC rather than converted into hydrogen, which also reveals the limitation of P2Gs when facing uncertainty. c. Economic Factors We consider the four comparative cases on economic factors shown in Table VII, here "-" means that in this case parameters are unchanged in the planning horizon and "↓" means that parameters will decline. The results are shown in Table VIII. 1) Prices of RE In planning level, the reduction on cost significantly decreases the economy of HVDC and shrinks the planning of REs, HVDCs and P2Gs especially when the development of hydrogen transportation sector is pessimistic (Case3 and Case4). In operation level, the percentage of energy for electricity transmission (E) decreases with the reduction of cost (Case1 and Case2). The results verify the advantages of P2Gs following the decreasing tendency of RE's cost. 2) Hydrogen demand in transportation sector Compared Case1, 2 with Case3, 4, if the development of transportation sector is pessimistic, HP is not planned and the number of P2Gs also reduces, and the percentage of energy for electricity transmission (E) significantly increases. It verifies the conclusion from above economic analysis that the profit of P2G mainly depends on hydrogen application in transportation sector. d. Advantages and Limitations of P2G In summary, both technical factors (variability and uncertainty) and economic factors (prices and demand) influence the planning and operation of HVDCs, HPs and P2Gs, and the correlation can be concluded as shown in Table IX. From a technical perspective, the weak correlation of HVDC with technical factors shows its strong robustness with large capacity. In contrast, P2G shows its advantages in strong variability with small capacity and operation flexibility (Table V) and its disadvantage in uncertainty (Table VI), due mainly to the higher additional levelized cost, which is approximately twice HVDC. Therefore, in the worst case, the investment would be conservative. 
From an economic perspective, the reduction in the cost of renewable energy facilities significantly decreases the economy of the HVDC, which is beneficial to hydrogen production (Table VIII). However, when the development of the hydrogen transportation sector is pessimistic, P2G shows limitations at both the planning and operation levels, which verifies the conclusion that the profit of the P2G depends mainly on hydrogen application in the transportation sector. V. CONCLUSIONS Focusing on future large-scale renewable energy utilization, this paper considers two kinds of consumption modes (electricity and hydrogen) and studies the technoeconomic supplement of P2G with HP for HVDC. First, the complete planning and operation constraints of a large-capacity P2G cluster considering retirement and unit commitment operation have been proposed. On this basis, a multistage coordinated planning model of REs, HVDCs, HPs, and P2Gs is established considering the variability and uncertainty of renewable energy. The industrial case of Inner Mongolia-Shandong is chosen for case studies. Multistage planning and operation results show the obvious temporal complementarity in which HVDC and P2G occupy advantages in the short term and long term, respectively. Compared to HVDC alone, P2G can provide both technical and economic supplements: 1) An extra 24 GW of renewable energy can be explored with profit; 2) P2G is a kind of flexible resource that can cooperate well with the power grid for extra (above 34%) renewable energy consumption; 3) With more exploration of renewable energy at a lower cost, the present value of LCOE decreases from 0.33 RMB/kWh to 0.26 RMB/kWh, which gains profits for both HVDC and P2G. Furthermore, sensitivity analysis on both technical and economic factors further verifies the advantages of P2G: considering the strong variability of renewable energy and the downward tendency of facilities' cost, the energy consumption mode tends to shift from HVDC-dominated to P2G-dominated. However, since the additional levelized cost of the P2G (0.04 RMB/kWh) is approximately twice the HVDC (0.02 RMB/kWh), and the profit of the P2G depends mainly on hydrogen application in the transportation sector, the P2G is more sensitive to the uncertainty from renewable energy and future hydrogen demand. A. Proof of "Equal-split" Rule For a simpler illustration, m_i and P_i represent the hydrogen production and the power of P2G facility i in ON status in the P2G cluster, and the relationship between m_i and P_i can be described by a concave function f_i, as in (32): m_i = f_i(P_i). m and P represent the total hydrogen production and the total power of the P2G cluster, respectively, i.e., m = Σ_i m_i and P = Σ_i P_i. The maximum hydrogen production for a given cluster power P is obtained by solving: max Σ_i f_i(P_i) s.t. Σ_i P_i = P. The objective function of the above maximization problem is a concave function, and the equality constraint is linear; therefore, the optimality condition of this problem is f_i'(P_i) = λ for every facility i in ON status, where λ is the Lagrange multiplier of the equality constraint. Since λ is unique, the optimality condition reveals that for an arbitrary P2G facility in ON status, the optimal P_i should satisfy that all the f_i'(P_i) are the same. In particular, when all P2G facilities are of the same type, P_i should be an equal share of the total power of the P2G cluster for maximum hydrogen production, which is the "equal-split" rule.
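As an informal numerical check of this rule (not part of the original appendix), the short sketch below compares the equal split with random feasible splits for an assumed concave conversion curve; the curve coefficients, cluster size and total power are illustrative only.

```python
# Numerical check of the "equal-split" rule: for identical concave curves f(P),
# total hydrogen sum_i f(P_i) subject to sum_i P_i = P_total is maximized when all
# facilities run at the same power. The quadratic curve below is an assumption.
import numpy as np

def f(p):                      # concave power-to-hydrogen curve (assumed shape)
    return -0.002 * p**2 + 0.02 * p

rng = np.random.default_rng(0)
n, P_total = 10, 60.0          # facilities in ON status, total cluster power (MW)

equal = n * f(P_total / n)     # hydrogen under the equal split

best_random = -np.inf
for _ in range(10_000):        # random feasible splits that sum to P_total
    w = rng.random(n)
    split = P_total * w / w.sum()
    best_random = max(best_random, f(split).sum())

print(f"equal split              : {equal:.4f}")
print(f"best of 10k random splits: {best_random:.4f}")  # never exceeds the equal split
```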
2021-02-04T02:16:17.974Z
2021-02-02T00:00:00.000
{ "year": 2021, "sha1": "12bbafc3a43dd6819e805edb1dbee4fcc246151e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "12bbafc3a43dd6819e805edb1dbee4fcc246151e", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Economics" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
235790607
pes2o/s2orc
v3-fos-license
Fedlearn-Algo: A flexible open-source privacy-preserving machine learning platform In this paper, we present Fedlearn-Algo, an open-source privacy preserving machine learning platform. We use this platform to demonstrate our research and development results on privacy preserving machine learning algorithms. As the first batch of novel FL algorithm examples, we release vertical federated kernel binary classification model and vertical federated random forest model. They have been tested to be more efficient than existing vertical federated learning models in our practice. Besides the novel FL algorithm examples, we also release a machine communication module. The uniform data transfer interface supports transferring widely used data formats between machines. We will maintain this platform by adding more functional modules and algorithm examples. The code is available at https://github.com/fedlearnAI/fedlearn-algo. Introduction Powerful AI model is built upon learning from sufficient training data. However, in many cases, the data owned by one data collector is insufficient to make an AI model well trained, leading to low overall model performance or model bias. One solution is increasing the training data scale by utilizing the data from different parties. This is a common solution to many use cases where the data from multiple sources are complementary. For example, customer can have purchasing and browsing history on multiple E-commerce platforms. Product recommendation models trained on all these data can definitely outperform models trained by each platform on its own data (Hu et al., 2019). In medical image analysis, data insufficiency is a common limitation for high performance AI model development. Emerging efforts are seen to collaboratively use the data from multiple health care institutions for joint model training and the benefits have been demonstrated on various tasks in the literature (Brisimi et al., 2018;Rieke et al., 2020;Xu et al., 2021). To build a feasible machine learning solution to cross-device or cross-platform data use, a desirable algorithm has to address the following challenges • Data privacy protection. Arbitrary data sharing tends to leak sensitive information like consumer privacy, leading to unpredictable future risk and hurting the customers' trust to-wards the data controller. Data privacy protection is progressively enforced by government legislation. GDPR requires a data protection impact assessment (DPIA) for any data use 1 . The assessment includes solving privacy risk. The data use for AI model learning purpose also subjects to this regulation. • Communication cost. The time cost of a multi-machine algorithm mainly comes from local computation and machine communication. Since currently there have been multiple ways to speedup the computation on single machine (e.g. parallel computing, well-studied efficient single machine model training algorithms), the major bottleneck is the machine communication cost. Communication time cost depends on a number of highly uncontrollable factors such as network workload, network topology and the overall workload of each machine, etc. Popular large models have millions or even billions of parameters, transferring float point numbers at such scale on public network environment takes a long time. Considering the iterative nature of multi-machine algorithms, the overall communication cost can be prohibitive. • Algorithm performance. 
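As a concrete illustration of the two partition settings just described, the toy snippet below splits one tabular dataset both ways; the array sizes and party labels are purely illustrative and are not taken from Fedlearn-Algo.

```python
# Toy illustration of horizontal vs. vertical data partitioning in FL.
# The dataset, feature layout and party names are illustrative only.
import numpy as np

n_samples, n_features = 6, 4
X = np.arange(n_samples * n_features, dtype=float).reshape(n_samples, n_features)

# Horizontal FL: parties hold different samples but the same feature columns.
party_A_h, party_B_h = X[:3, :], X[3:, :]   # disjoint sample IDs, all features

# Vertical FL: parties hold the same samples but different feature columns.
party_A_v, party_B_v = X[:, :2], X[:, 2:]   # same sample IDs, disjoint features

print("horizontal:", party_A_h.shape, party_B_h.shape)  # (3, 4) (3, 4)
print("vertical  :", party_A_v.shape, party_B_v.shape)  # (6, 2) (6, 2)
```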
The complicated multi-machine data properties and machine collaboration mechanism produce many new algorithm research issues. Several problems have aroused extensive research attention, such as data statistical heterogeneity (Nishio & Yonetani, 2019) and data imbalance (Duan et al., 2020), etc. Those issues are closely related to the model performance. To fully exploit the value of data in model learning, they have to be considered in algorithm design, deserving further research efforts. Federated Learning (FL) is among the emerging efforts that target at the above challenges. It was initially proposed by Google as an solution to using data from multiple mobile devices for next word prediction model learning (McMahan et al., 2017). The idea soon gains extensive attention from both industry and academia due to its significant practical value and the numerous research issues waiting to be solved. According to the data partition differences, most of existing FL algorithms can be mainly categorized into horizontal FL algorithms and vertical FL algorithms . Horizontal FL refers to the setting that samples on the involved machines share the same feature space while the machines have different sample ID space. Vertical FL refers to the setting that all machines share the same sample ID space and each machine has a unique feature space. Deploying a multi-machine algorithm is known to be more challenging than single machine algorithm as far as algorithm design and analysis, implementation, debugging and testing are concerned. In this work, we present Fedlearn-Algo, an open-source FL algorithm platform. We release this tool as a platform to demonstrate our current and future privacy-preserving machine learning algorithm research results. Meanwhile, we believe the extensible and flexible overall framework design make it helpful to FL research community by which a multi-machine algorithm can be easily developed. Specifically, Fedlearn-Algo is characterized by the following highlights. • Novel vertical FL algorithms. Most existing FL open-source softwares (e.g. FedML 2 , Flower 3 , TensorFlow Federated 4 , etc.) and algorithm research efforts are mainly dedicated in horizontal FL algorithm development. Vertically partitioned data is seen in many to Business (toB) and Government (toG) applications. Despite the existing vertical FL models such as SecureBoost (Cheng et al., 2021) and homomorphic encryption based logistic regression model (Hardy et al., 2017), their efficiency are found to be unsatisfactory in our real-world FL deployment practice. This motivates us design novel vertical FL algorithms including vertical federated kernel method and vertical federated random forest model. We release prototype of these algorithms. In the future we will release more vertical FL algorithm design results. • Easy-to-use machine communication module. Besides the released vertical FL algorithms, we believe the communication module serving all released algorithms is also friendly to contributors or researchers for their multi-machine algorithm implementation. The information format, parameter number and parameter size transferred between machines differ in FL algorithms. We design a uniform message data structure. It supports the widely used data formats (e.g. int, string, float, vector, matrix, etc.) and arbitrary number of parameters to be transferred within one message. Developers can use it conveniently in their own algorithm implementation for transferred message definition. 
An uniform message transfer interface is provided to transfer the message. Platform Overview A high level description of current Fedlearn-Algo is illustrated in Figure 2.1. Specifically, an algorithm implemented by Fedlearn-Algo is composed of two components, platform implementation part and user implementation part. The platform implementation part contains several common components shared by all algorithms, including machine communication module (e.g. gRPC Stub and gRPC Server) and a algorithm pipeline template. User implementation part mainly contains the algorithm specific modules. We will introduce the provided vertical FL algorithm examples in §3. In this part we describe the overall design of the platform implementation part. Machine communication. We design two message data structures RequestMessage and Re-sponseMessage. They are used to transfer information between server and clients in all implemented algorithms. Each message contains four variables, sender, receiver, body and phase id. The message body is designed to be a dictionary data structure. It supports transferring multiple information in one message. An uniform function call SendMessage is provided as the data transfer API by which the RequestMessage can be delivered from the sender to the receiver. An ResponseMessage containing the receiver 's response is sent back to the sender after client finish its computation. Algorithm pipeline template. A federated model training process can be generally partitioned into three stages, training initialization, training loop and training wrapping up (e.g. model saving etc.). For most FL algorithms, one iteration of the training loop contains several communication rounds. We define a phase id variable in RequestMessage and ResponseMessage to indicate the status of the corresponding communication round. The pattern is that the computation that server (client) needs to conduct can be identified by the phase id it received from the client (server). We are motivated by this pattern to design a generic training control pipeline template. For each specific algorithm's implementation, a map between phase id symbols and operation function needs to be defined in the function . The use is exemplified by the released vertical FL examples kernel binary classification algorithm and random forest algorithm. User Implementation Flatform Implementation Figure 2.1: An high-level illustration of the Fedlearn-Algo design. We provide a uniform gPRC communication module, including request message data structure, response message data structure and machine communication function call interface. Users can use it in their algorithm implementation. We provide algorithm examples to demonstrate its use. Vertical federated kernel binary classification Kernel method is an classical machine learning algorithm. Given a sample x ∈ R d , a kernel mapping ψ transforms x into a high dimension space such that in that feature space samples from different categories are more linearly separable. To alleviate the high dimension of kernel mapping, (Rahimi et al., 2007) proposes to approximate the kernel mapping with random feature mappings, such that the kernel evaluation of two samples can be approximated by the inner product of the transformed sample, that is where φ(x) denotes the kernel approximation transformation. 
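To make the random-feature idea concrete, below is a minimal sketch of the standard random Fourier feature construction for an RBF kernel. This is a sketch only: the function names, scaling convention, and parameter choices are illustrative assumptions and are not the Fedlearn-Algo API.

```python
import numpy as np

def make_rff_mapping(d, D, gamma, seed=0):
    """Return a random Fourier feature map phi: R^d -> R^D (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((D, d))           # z_1, ..., z_D drawn from a standard Gaussian
    b = rng.uniform(0.0, 2 * np.pi, size=D)   # b_1, ..., b_D drawn uniformly from [0, 2*pi]

    def phi(x):
        # gamma plays the role of the kernel scale parameter
        return np.sqrt(2.0 / D) * np.cos(gamma * (Z @ x) + b)

    return phi

phi = make_rff_mapping(d=10, D=512, gamma=0.5)
x1, x2 = np.random.rand(10), np.random.rand(10)
print(phi(x1) @ phi(x2))  # approximates the RBF kernel value k(x1, x2) as an inner product
```

For a sufficiently large D, the inner product of the transformed samples approximates the kernel evaluation, which is what allows each party to apply its own mapping locally and exchange only transformed quantities.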
In our example, we choose random Fourier feature approximation of RBF kernel where z 1 , z 2 , ..., z D ∈ R d are drawn from standard Gaussian distribution, b 1 , b 2 ,..., b D ∈ R are uniformly drawn from [0, 2π], γ is a scale parameter. The randomization property of kernel approximation algorithm make it applicable to protect the privacy of original feature. We leverage this property and propose a kernel vertical federated binary classification model. Assume the overall training samples X = {(x i , y i )} N i=1 are distributed on P parties and the N training samples' ID have been aligned. The active party owns dataset (X 1 , Y ), Y = [y 1 , ..., y N ] Algorithm 1: Federated kernel binary classification model training algorithm. Input : Pre-defined kernel feature mapping φ 1 , φ 2 , ..., φ p . Distributed training data X 1 , X 2 , ..., X P , where X p = {x i,p } N i=1 , p = 1, 2, ..., P . Training set ground truth on active party Y = [y 1 , ..., y N ] . Initialization Initialize model parameters w 0 1 , w 0 2 , ..., w 0 p . for p ∈ {1, 2, ..., P } in parallel do Apply φ p to X p , get φ p (X p ). end for t = 0, 1, ..., t max do /* For clients: the selected client updates model parameter by solving a linear regression task. */ for p ∈ {1, 2, ..., P } in parallel do If t = 0, cp = p and φ p (X p ) is not null, send φ p (X p )w (t) p directly, otherwise compute then send it to all parties, then assign one party for parameter update by setting cp. end Output: Model parameters w 1 , w 2 , ..., w P . and other parties are passive parties with sample features X 2 , X 2 , ..., X P . The learning target is w 1 , w 2 , ..., w P = arg min where w p denotes the model parameter on the p-th party, X p = {x i,p } N i=1 , x i,p denotes the i-th sample on the p-th party, φ(x i,p ) is the kernel approximation mapping of x i,p . For simplicity we assume y i ∈ {−1, 1}. The algorithm used in this example is derived from (Gu et al., 2020a,b). In (Gu et al., 2020a) a federated vertical doubly stochastic kernel learning algorithm is proposed. (Gu et al., 2020b) proposes a asynchronous vertical federated linear model training algorithm. The algorithm updates local models on all parties in parallel. The shown example makes the following modifications for efficiency concern without losing data privacy protection measure. First, we adopt local kernel mapping on involved parties for data privacy. The random matrix and vector used for kernel approximation mapping can also encrypt the local data. Second, we adopt a batch algorithm rather than the stochastic algorithm used in (Gu et al., 2020b) to improve training efficiency. The training algorithm is summarized in Algorithm 1. First, each involved party transforms the original feature with its kernel approximation mapping function φ. The training loop has two communication rounds. At the first round, one selected party updates its local model parameters by solving the local linear regression model learning task Eqn. 3.1, then all parties send either At the second round, master machine aggregate the client updates via Eqn.3.2 and chooses the client for local parameter update at the next iteration, then sends the aggregation result to the clients. Vertical federated random forest Algorithm 2: Main pipeline of building one federated decision tree Input : . Initialization Active party encrypts label and send the encrypted label { y i } N i=1 to all passive parties via server. 
for t = 0, 1, ..., t max do for p ∈ 1, 2, ..., P in parallel do Client p computes encrypted label quantile statistics S p by Algorithm 3, then send S p to server; end Server collects {S p } P p=1 and sends them to active party. Active party find the best split parameter (f t opt , v t opt ) from {S p } P p=1 , then send it to all other parties. for p ∈ 1, 2, ..., P in parallel do If f t opt ∈ F p , split the feature space into (F R ) by f t opt and create child nodes. end end Output: One decision tree Random forest (RF) is a popular tree structure model. Given a input sample x ∈ R d , the prediction function of a RF is an ensemble of multiple decision trees: where R denotes the RF prediction function, T i denotes the i-th decision tree, Agg denotes the aggregation strategy. Because the decision trees can be trained in parallel, by proper parallel programming implementation the training efficiency of an RF model can be significantly improved. The overall training algorithm of one vertical federated decision tree is shown in Algorithm 2. We denote the training samples' feature, instance feature space and label set as X ∈ R N ×d , F and Y = [y 1 , ..., y N ] respectively. Assume the feature space is distributed on P parties, that is X = {X p } P p=1 , F = {F p } P p=1 , and there is only one party holding Y as the active party. At the initialization step, active party sends encrypted labels Y to all passive parties via server machine. After receiving Y , each passive party calculates the encrypted label quantile statistics S p via Algorithm 3. We use l p to denote Algorithm 3: Encrypted label quantile statistics on the p-th party Compute quantiles of the k-th dimension feature, C k = c k,1 , c k,2 , ..., c k,lp . for v = 1, ..., l p do Compute label statistics where n kv denotes the sample number whose feature value lies in (c k,v−1 , c k,v ]. end end the pre-defined quantile number and d p to denote the feature dimension on p-th party. Therefore we have S p ∈ R dp×lp where the entry S p (i, j) denotes the average value of Y on the i-th dimension feature and j-th quantile. Active party receives {S p } P p=1 from master, then evaluate which feature dimension and quantile should be used for tree split, based on proper criterion like maximum information gain. The decided feature and quantile (f opt , v opt ) is sent to the corresponding party for tree split. At the initialization step, we adopt homomorphic encryption to encrypt the labels Y . A good property of homomorphic encryption is that it allows for computations such as addition or multiplication on the encrypted data. Therefore we compute the label quantile statistics {S p } P p=1 on Y then active party can decrypt {S p } P p=1 and compute the feature split based on the decrypted quantile label statistics. Conclusion and Future Work In this paper, we introduce Fedlearn-Algo, an open-source privacy-preserving machine learning algorithm platform. As the first part of release, we open-source two novel vertical FL models, kernel binary classification model and vertical FL random forest model. The platform is naturally compatible to existing machine learning tools (e.g. TensorFlow, PyTorch, Sklearn, etc.), by which researchers and contributors can implement their own algorithms. We believe the agnostic data format transfer interface and the algorithm template are flexible and easy-to-use. In the future, we will continue adding more functionality modules to Fedlearn-Algo. Our overall plan is shown in Figure 4.1. 
Specifically, our future efforts include but are not limited to the following aspects. • Adding more functional module support. We are working on adding asynchronous machine communication support and decentralized network topology support. We also plan to build a data encryption module, providing standard data cryptography algorithm implementations for users. • Releasing our novel algorithm research results. This platform is used to demonstrate our current and future algorithm research and development results, with an emphasis on vertical FL algorithms. We will release those algorithm implementations on this platform in the future. • Providing standard algorithm implementations. Apart from the novel algorithm releases, we will also provide standard privacy-preserving algorithm implementations, such as horizontal FL algorithms, SMPC protocols, differential privacy methods, and emerging methods such as distillation-based methods (e.g. (Wang et al., 2019)) and graph federated learning (e.g. (Meng et al., 2021)). • Applying privacy-preserving ML to specific use cases. Privacy-preserving ML for cross-device data use is being adopted in more and more domains (Pokhrel & Choi, 2020; Khan et al., 2021). Our team is currently exploring its use in DS, CV, and NLP. We will also consider other applications such as Speech and IoT.
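As a concrete illustration of the message and phase-id dispatch pattern described in the platform overview, the following is a minimal sketch. The class and function names (Message, Client, dispatch) and the payloads are hypothetical and do not reflect the actual Fedlearn-Algo API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

# Hypothetical message container mirroring the fields described above:
# sender, receiver, a dictionary body, and a phase id.
@dataclass
class Message:
    sender: str
    receiver: str
    phase_id: str
    body: Dict[str, Any] = field(default_factory=dict)

class Client:
    """Sketch of a client that maps phase ids to handler functions."""
    def __init__(self, name: str):
        self.name = name
        # phase-id -> operation function map, in the spirit of the pipeline template
        self.handlers: Dict[str, Callable[[Message], Message]] = {
            "init": self.on_init,
            "train_round": self.on_train_round,
        }

    def dispatch(self, request: Message) -> Message:
        return self.handlers[request.phase_id](request)

    def on_init(self, request: Message) -> Message:
        return Message(self.name, request.sender, "init_done", {"status": "ok"})

    def on_train_round(self, request: Message) -> Message:
        update = {"local_update": [0.0] * 4}  # placeholder payload
        return Message(self.name, request.sender, "round_done", update)

client = Client("client_1")
response = client.dispatch(Message("server", "client_1", "train_round", {"round": 1}))
print(response.phase_id, response.body)
```

Routing every communication round through a phase-id-to-handler map keeps the training control pipeline generic: adding a new algorithm amounts to registering new phase handlers rather than changing the communication layer.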
2021-07-12T01:15:59.989Z
2021-07-08T00:00:00.000
{ "year": 2021, "sha1": "ded38f70e97a9027b6ce954aca39d4887e0ed312", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ded38f70e97a9027b6ce954aca39d4887e0ed312", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
2175481
pes2o/s2orc
v3-fos-license
Seasonal and spatial variability of the OM / OC mass ratios and high regional correlation between oxalic acid and zinc in Chinese urban organic aerosols We calculated the organic matter to organic carbon mass ratios (OM/OC mass ratios) in PM 2.5 collected from 14 Chinese cities during summer and winter of 2003 and analyzed the causes for their seasonal and spatial variability. The OM/OC mass ratios were calculated two ways. Using a mass balance method, the calculated OM/OC mass ratios averaged 1.92± 0.39 year-round, with no significant seasonal or spatial variation. The second calculation was based on chemical species analyses of the organic compounds extracted from the PM2.5 samples using dichloromethane/methanol and water. The calculated OM/OC mass ratio in summer was relatively high (1.75± 0.13) and spatially-invariant due to vigorous photochemistry and secondary organic aerosol (OA) production throughout the country. The calculated OM/OC mass ratio in winter (1.59± 0.18) was significantly lower than that in summer, with lower values in northern cities (1.51 ± 0.07) than in southern cities (1.65 ± 0.15). This likely reflects the wider usage of coal for heating purposes in northern China in winter, in contrast to the larger contributions from biofuel and biomass burning in southern China in winter. On average, organic matter constituted 36 % and 34 % of Chinese urban PM2.5 mass in summer and winter, respectively. We report, for the first time, a high regional correlation between Zn and oxalic acid in Chinese urban aerosols in summer. This is consistent with the formation of stable Zn oxalate complex in the aerosol phase previously proposed by Furukawa and Takahashi (2011). We found that many other dicarboxylic acids were also highly correlated with Zn in the summer Chinese urban aerosol samples, suggesting that they may also form stable organic complexes with Zn. Such formation may have profound implications for the atmospheric abundance and hygroscopic properties of aerosol dicarboxylic acids. Introduction The mass ratio of organic matter (OM) versus organic carbon (OC) in organic aerosol (OA) (hereafter referred to as OM/OC mass ratio) is an important bulk parameter for OA chemical composition.For OA not impacted by biomass burning OA, a high OM/OC mass ratio indicates a high degree of oxidation, which suggests that a large fraction of the OA is secondary (i.e., produced in the atmosphere from gaseous organic precursors) or significantly aged (Turpin and Lim, 2001).A higher degree of oxidation in the OA often corresponds to a higher degree of hygroscopicity and lower Published by Copernicus Publications on behalf of the European Geosciences Union. L. Xing et al.: Seasonal and spatial variability of the OM/OC mass ratios surface tension (e.g., Jimenez et al., 2009;Lambe et al., 2011), which in turn affects the radiative property of the OA as well as its potential to act as cloud condensation nuclei (CCN).In addition, OM/OC mass ratios are widely used to estimate the total OM mass from OC mass in the bulk aerosol (e.g., Hand et al., 2011). 
Three general methods have been used to calculate aerosol OM/OC mass ratios.The first is the mass balance method, where OM mass is determined by the difference between the total aerosol mass and the mass sum of measured aerosol inorganic components (e.g., El-Zanan et al., 2005;Bae et al., 2006a).OC mass is usually determined by thermal/optical techniques (Chow et al., 1993).The second method is by extraction of organic species, where aerosol samples are dissolved in solvents to extract compounds in the corresponding ranges of polarity.The extraction can be weighed to determine the bulk OM mass (e.g., El-Zanan et al., 2005;Polidori et al., 2008).Alternatively, the extractions can be analyzed with chromatography and mass spectrometry techniques to resolve the molecular composition.The OM/OC mass ratios can then be calculated based on the molecular formulae and concentrations of the identified species (e.g., Turpin and Lim, 2001).A third way to calculate OM/OC mass ratio is based on functional group densities, which can be measured using aerosol mass spectrometry (AMS) or Fourier transformed infrared spectroscopy (FTIR) (e.g., Zhang et al., 2005;Aiken et al., 2008).White and Roberts (1977) first reported an OM/OC mass ratio of 1.4 for urban OA, based on the fraction of polar compounds extracted from aerosol samples collected in Los Angeles (Grosjean and Friedlander, 1975).Later, Turpin and Lim (2001) reviewed several organic species extraction studies and calculated OM/OC mass ratios of 1.6 ± 0.2 for urban OA and 2.1 ± 0.2 for rural OA.They pointed out that the higher OM/OC mass ratios in rural OA indicate a larger secondary fraction and/or a higher degree of aging.Several studies also found higher aerosol OM/OC mass ratios in summer than in winter for both urban and rural OA not impacted by biomass burning, indicating stronger photochemistry and larger secondary contribution in summer (El-Zanan et al., 2005;Bae et al., 2006b;Malm et al., 2011;Simon et al., 2011).Aerosols impacted by biomass burning can have even higher OM/OC mass ratios (2.2-2.6)due to high sugar and carboxylic acid content (Turpin and Lim, 2001). Several studies have analyzed the aerosol OM/OC mass ratios at specific urban locations in China.Chen and Yu (2007) calculated an annual average OM/OC mass ratio of 2.1 ± 0.3 for PM 2.5 collected at a suburban site in Hong Kong using the mass balance method.Using the AMS, Huang et al. (2010) and He et al. (2011) found average PM 1 OM/OC mass ratios of 1.58 and 1.57 ± 0.08 in Beijing in summer and in Shenzhen in fall, respectively.To the best of our knowledge, there has not yet been a systematic analysis of the seasonal and spatial variability of OM/OC mass ratios for Chinese urban OA. In this study, we analyzed the OM/OC mass ratios in PM 2.5 collected from 14 cities throughout China during winter and summer of 2003.We calculated the OM/OC mass ratios by two methods (mass balance and extracted organic species analyses) and estimated the uncertainties associated with each method.We examined the organic species driving the spatiotemporal variability of the OM/OC mass ratios and discussed the implications for China urban OA sources.We report, for the first time, high correlations between Zn and aerosol dicarboxylic acids (in particular oxalic acid) in Chinese urban OA in summer and discussed the implications for the aqueous chemistry of dicarboxylic acids. 2 Data: chemical composition of PM 2.5 in 14 Chinese cities PM 2.5 samples were collected by Cao et al. 
(2007) 1 and illustrated in Fig. 1.The sampling sites were selected to represent urban-scale concentrations and were all > 100 m away from local sources such as major roads.Detailed descriptions of the sampling procedure and the analyses of PM 2.5 , OC, and elemental carbon (EC) concentrations were presented in Cao et al. (2007).Briefly, each sample of PM 2.5 was collected on a pre-fired quartzfiber filter by a mini-volume air sampler at a flow rate of 5 L min −1 for 24 h.PM 2.5 masses were determined gravimetrically against blank filters under controlled temperature and relative humidity.OC and EC concentrations were analyzed following the IMPROVE thermal/optical reflectance protocol on a DRI 2001 carbon analyzer (Chow et al., 1993).For each city, the average PM 2.5 , OC, and EC masses were determined based on 8 to 22 samples in summer and 13 to 16 samples in winter.Figure 1 shows the mean summertime and wintertime OC concentrations for the 14 Chinese cities reported in Cao et al. (2007).OC concentrations ranged from 6.3-35 µg m −3 (average 15.8 µg m −3 ) in summer and 15-99 µg m −3 (average 36.2 µg m −3 ) in winter.Highest OC concentrations were measured in the inland industrial cities of Chongqing and Xi'an.Lowest OC concentrations were measured in the coastal cities of Qingdao, Xiamen, and Hong Kong, reflecting the ventilating effects of marine air.Measured OC concentrations in all cities except Tianjin were higher in winter than in summer, likely reflecting the stronger emissions associated with residential heating in winter.About ten samples from each city were analyzed for inorganic compositions.Na + , NH + 4 , K + , SO 2− 4 , NO − 3 , and Cl − concentrations were determined by ion chromatography (Cao et al., 2012) The mass fraction of OA in PM 2.5 were calculated using the OM/OC mass ratio calculated for each city based on the extracted organic species analyses.X-ray fluorescence spectrometry (ED-XRF) (Cao et al., 2012).Total concentrations of Ca and Mg could not be accurately quantified by ED-XRF due to variable blank filter backgrounds and absorption biases.Instead, their watersoluble concentrations were determined by high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS) (Cheng et al., 2012).Water-soluble Al, Cd, Ni, and Mo concentrations were also determined by HR-ICP-MS (Cheng, unpublished data, 2012). For each city, two summertime samples and two wintertime samples were further analyzed for organic compositions.Wang et al. (2006a) extracted filter aliquots with a mixture of dichloromethane/methanol (2:1, v/v) under ultrasonication.The extracts were then filtered, concentrated, and treated with N,O-bis-(trimethylsilyl) trifluoroacetamide with 1 % trimethylsilyl chloride and pyridine to convert the extracts to their trimethylsilyl derivatives.The derivative extracts were analyzed by gas chromatography-mass spectrometry (GC-MS) to determine the molecular compositions.Resolved species include C 16−35 n-alkanes, C 9−34 fatty acids, sugars, phthalates, C 12−32 fatty alcohols, polyols and polyacids, lignin and resin products, sterols, polycyclic aromatic hydrocarbons (PAHs), and hopanes.The resulting concentrations measured for these species were either on the high end or exceeded the values reported for urban aerosols in other parts of the world (Wang et al., 2006b).On average, 6.5 % and 6.1 % of the total OC mass were chemically resolved by Wang et al. (2006a) in summer and winter, respectively.Ho et al. 
(2007) extracted water-soluble organic species from the PM 2.5 samples with pure water.Total water-soluble organic carbon (WSOC) in the extract was determined using the DRI Model 2001 carbon analyzer.WSOC constituted 48 % and 41 % of the total OC in the summertime and wintertime samples, respectively.The extracts were then filtered, concentrated, and treated with 14 % BF 3 /n-butanol at 100 • C to convert the carboxyl groups to butyl esters and the aldehyde groups to dibutoxy acetals.The derivatives were further extracted with n-hexane and analyzed with GC-MS.Resolved species included α, ω-dicarboxylic acids (C 2 ∼C 12 ), ω-oxocarboxylic acids(C 2 ∼C 9 ), phthalic acid, pyruvic acid, and dicarbonyls.The resulting concentrations measured for these species in Chinese urban aerosols were generally comparable to those measured in urban aerosols in other parts of the world (Ho et al., 2007).On average, 4.6 % and 2.7 % of the total WSOC mass was chemically resolved by Ho et al. (2007) in summer and in winter, respectively. 3 OM/OC mass ratios in Chinese urban PM 2.5 Mass balance method We first calculated OM/OC mass ratios using the mass balance method, where the OM mass was estimated as the difference between the total PM 2.5 mass and the mass sum of measured inorganic species: We assumed that most trace elements were present in PM 2.5 in the form of their common crustal oxide compounds (TE oxides), except As, Br, Mo, Pb, Ni, and Zn, which were assumed to be elemental (Bae et al., 2006a).Table 2 lists the conversion factors we used to calculate the masses of the oxide compounds from their respective trace element masses (Kleeman et al., 2000;El-Zanan et al., 2005;Bae et al., 2006a).Silicon oxides are important crustal components, but the mass of Si was not explicitly measured.We estimated Si mass by multiplying Al mass by 2.23 based on the mean measured Si/Al mass ratios in PM 2.5 sampled in Beijing and Chongqing (Zhao et al., 2010).Particle-bound water (PBW) is the water present in the PM 2.5 sample at the relative humidity under which the PM 2.5 mass is weighed (35-45 %).We assumed that the PBW was 5.8 % of the total PM 2.5 mass following Drewnick et al. (2004).For the 24 h-accumulated filter samples collected by Cao et al. (2007), there may be a positive artifact in OC mass due to the adsorption of organic gases on the filters, and a negative artifact due to the volatilization of semi-volatile OA from the filters.We estimated that the net artifact to be +3.7 % of the OC mass following Bae et al. (2006a). Table 1 shows the OM/OC mass ratios calculated for the 14 Chinese cities using the mass balance method.The 14city-mean OM/OC mass ratio was 1.94 ± 0.51 in summer, not significantly different from that in winter (1.91 ± 0.25) (paired t test, p value > 0.8).The ratio of summer versus winter OM/OC averaged 1.02 ± 0.26.In summer, the average OM/OC mass ratios were 1.95 ± 0.52 for northern cities and 1.93 ± 0.49 for southern cities.In winter, the average OM/OC mass ratios were 1.95 ± 0.30 for northern cities and 1.86 ± 0.14 for southern cities.The spatial differences were not statistically significant in either season (two sample t tests, p values > 0.5).The OM/OC mass ratio averaged for all cities year-round was 1.92 ± 0.39. 
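As a rough illustration of the bookkeeping in this mass balance, consider the sketch below. The species list, numbers, and function name are illustrative assumptions; they do not reproduce the paper's full inventory of ions, trace-element oxide conversion factors, or per-sample data, and the sign convention for the OC sampling artifact is assumed.

```python
def om_oc_mass_balance(pm25, oc, non_om_masses, pbw_fraction=0.058, oc_artifact=0.037):
    """Sketch of the mass-balance OM/OC estimate (all masses in ug/m3).

    non_om_masses: dict of the measured non-organic components subtracted from
    PM2.5 (EC, inorganic ions, trace-element oxides, estimated Si oxides, ...).
    Particle-bound water is taken as 5.8 % of the PM2.5 mass, and the assumed
    +3.7 % net sampling artifact is removed from the measured OC.
    """
    pbw = pbw_fraction * pm25
    om = pm25 - sum(non_om_masses.values()) - pbw
    oc_corrected = oc * (1.0 - oc_artifact)
    return om / oc_corrected

print(round(om_oc_mass_balance(
    pm25=120.0, oc=30.0,
    non_om_masses={"EC": 8.0, "sulfate": 20.0, "nitrate": 10.0,
                   "ammonium": 8.0, "chloride": 2.0, "crustal_oxides": 12.0}), 2))
```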
The mass balance calculation described above likely overestimated the OM mass for two reasons. Firstly, previous studies have shown that significant fractions of the aerosol nitrate mass (50-70 % in summer and 10 % in winter) may evaporate from the quartz-fiber filters prior to analysis (Chow et al., 2005; Nie et al., 2010). Secondly, only the water-soluble concentrations of Ca, Mg, Al, Cd, Ni, and Mo were measured, but these species also exist in water-insoluble form in aerosols. Schleicher et al. (2011) showed that the water-insoluble fractions of Ca, Mg, Al, Cd, and Ni accounted for 13-84 %, 36-79 %, 87-94 %, 38-84 %, and 93-98 % of the total mass of these species, respectively, in PM 2.5 in Beijing. Adopting the mean estimates for the fractions of evaporated nitrate and water-insoluble trace metals, we estimated the uncertainty in the OM/OC mass ratios calculated by the mass balance method to be 55 % in summer and 12 % in winter. Extracted organic species analyses We performed a second calculation of the OM/OC mass ratios by combining the extracted organic species analyses by Wang et al. (2006a) and Ho et al. (2007). A total of 129 organic species were resolved in the summertime PM 2.5 samples, constituting on average 3.9 % of the total PM 2.5 mass and 8.9 % of the total OC mass. A total of 143 organic species were resolved in the wintertime PM 2.5 samples, constituting on average 2.7 % of the PM 2.5 mass and 7.5 % of the total OC mass. The molecularly-resolved OC mass fractions were comparable to those of previous studies (Polidori et al., 2008). The OM/OC mass ratio was calculated as OM/OC = Σ_{i=1}^{n} X_i / Σ_{i=1}^{n} X_i (M_ci / M_mi), where X_i is the mass concentration of organic compound i, M_ci is the molecular carbon weight in organic compound i, M_mi is the molecular weight of organic compound i, and n is the total number of identified organic compounds. For each city, we used the organic species concentrations averaged over two filter samples to calculate the OM/OC mass ratio in each season. Table 1 shows the OM/OC mass ratios calculated based on the extracted organic species. The average OM/OC mass ratio for all 14 cities in summer was 1.75 ± 0.13, significantly higher than that in winter, which was 1.59 ± 0.18 (paired t test, p value = 0.005). This difference was mainly driven by the seasonal changes in northern Chinese cities. In summer, the OM/OC mass ratios in northern (1.78 ± 0.14) and southern (1.72 ± 0.11) cities were not significantly different (two sample t test, p value = 0.4). In winter, the OM/OC mass ratios in northern cities (1.51 ± 0.07) were significantly lower than those in southern cities (1.65 ± 0.15) (two sample t test, p value = 0.025). For the OM/OC mass ratios calculated here using extracted organic species analyses, both low and high biases are possible. Some potentially abundant, high-molecular-weight oxygenated organics were not resolved by either Wang et al. (2006a) or Ho et al. (2007). Examples include humic-like substances (HULIS) and oligomers in OA, both present in large amounts in Chinese urban aerosols (Lin et al., 2010; Hall and Johnston, 2012) and associated with relatively high OM/OC mass ratios (1.5-2.0) (Altieri et al., 2008; Lin et al., 2012). We estimated the associated biases in our calculated OM/OC mass ratios to be −3.6 % in summer and −6.5 % in winter, assuming that HULIS and oxygenated oligomers each constituted 25 % of the total OC mass with an average OM/OC mass ratio of 2.0. On the other hand, Polidori et al.
(2008) showed that the OM/OC mass ratios of OA extractions eluted by different solvents increase with the polarity of the solvents. The solvents used by Wang et al. (2006a) and Ho et al. (2007) (dichloromethane/methanol and water, respectively) were of low to high polarity. However, some very low polarity organic species, such as > C 25 n-alkanes, were inefficiently extracted (Polidori et al., 2008). Studies have shown that hydrocarbon-like OA constitute 18-36 % of Chinese urban OA in summer and 29.5 % in winter (e.g., Huang et al., 2010; He et al., 2011; Sun et al., 2012). Assuming that 10 % of the hydrocarbon-like OA was not extracted by Wang et al. (2006a) and assuming an OM/OC ratio of 1.2 for these species, our calculated OM/OC mass ratios may be slightly high-biased by +1.3 % in summer and +0.9 % in winter. The actual net bias of the OM/OC mass ratios calculated based on extracted organic species analyses is therefore likely to be less than −3.6 % in summer and −6.5 % in winter, since the positive and negative biases partially offset each other. Organic compounds affecting OM/OC mass ratios and implications for Chinese urban OA sources We wished to understand what drove the seasonal and spatial variability of the OM/OC mass ratios in Chinese urban OA. To this end, we re-calculated the OM/OC mass ratios based on extracted organic species, excluding one organic compound at a time (illustrated in the sketch below). In summer, we found that oxalic acid had the largest impact on the OM/OC mass ratios of Chinese urban OA. In winter, the two compounds with the largest impacts on OM/OC mass ratios were oxalic acid and levoglucosan. Oxalic acid has the highest molecule-to-carbon mass ratio (3.75) of all the resolved organic compounds. It constituted on average 11 % and 7 % of the molecularly-resolved OC mass in summer and in winter, respectively. Levoglucosan also has a high molecule-to-carbon mass ratio of 2.25. It constituted on average 8.1 % of the molecularly-resolved OC mass in winter. The current understanding of the sources of oxalic acid in OA is that it is either emitted from biomass burning (e.g., Kundu et al., 2010) or produced secondarily from the aqueous-phase oxidation of carbonyls (e.g., Myriokefalitakis et al., 2011), which are in turn oxidation products of volatile organic compounds from anthropogenic, biogenic, and biomass burning sources. Primary anthropogenic sources of oxalic acid are thought to be small (Huang and Yu, 2007; Myriokefalitakis et al., 2011). Levoglucosan is produced by the thermal degradation of cellulose (Simoneit et al., 1999) and is often used as a molecular tracer for biomass or biofuel burning (e.g., Zhang et al., 2008). Table 3 shows the 21 species with highest correlation against oxalic acid in Chinese urban aerosols in summer (all correlations have one-tail p values < 0.025 and are not driven by outliers; here and throughout, all statistics were calculated from the full dataset without filtering for outliers unless otherwise noted). In summer, oxalic acid was not significantly correlated with levoglucosan (r = 0.10) in our Chinese urban aerosol samples. Instead, oxalic acid was most highly correlated with its known aqueous phase precursors, such as glyoxylic acid (r = 0.95), adipic acid (r = 0.82), succinic acid (r = 0.80), pyruvic acid (r = 0.78), glutaric acid (r = 0.77), malonic acid (r = 0.75), and glyoxal (r = 0.65) (e.g., Ervens et al., 2004; Carlton et al., 2007; Altieri et al., 2008), as well as sulfate (r = 0.66).
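A compact sketch of this exclusion test, recomputing the species-based ratio with one compound dropped at a time, is given below. The compound list, concentrations, and carbon fractions are illustrative assumptions only, not values from the study.

```python
def om_oc(species):
    """species: name -> (mass concentration, carbon mass fraction of the molecule)."""
    om = sum(x for x, _ in species.values())
    oc = sum(x * c for x, c in species.values())
    return om / oc

species = {
    "oxalic acid":    (0.50, 1 / 3.75),  # molecule-to-carbon mass ratio 3.75
    "levoglucosan":   (0.30, 1 / 2.25),  # molecule-to-carbon mass ratio 2.25
    "C16 fatty acid": (0.20, 0.75),      # palmitic acid is roughly 75 % carbon by mass
}

baseline = om_oc(species)
impact = {name: om_oc({k: v for k, v in species.items() if k != name}) - baseline
          for name in species}
for name, delta in sorted(impact.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"removing {name} changes OM/OC by {delta:+.3f}")
```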
High correlations between oxalic acid (or oxalate) and sulfate in ambient aerosols have been reported in many previous studies (e.g., Yu et al., 2005;Sorooshian et al., 2006Sorooshian et al., , 2007)).These studies attributed such high correlations to aqueous production being the dominant source of oxalic acid, supported by time-resolved measurements and box model sim- ulations (e.g., Sorooshian et al., 2006).In our case, because the PM 2.5 samples were collected on 24 h bulk filters; detailed analyses of the history of the sampled air were not possible.However, the fact that oxalic acid is highly correlated with many of its known aqueous precursors, in addition to sulfate, corroborates the idea that the oxalic acid in Chinese urban aerosols were most likely mainly produced secondarily in the aqueous phase in summer.Thus, the relatively high OM/OC mass ratio in summertime Chinese urban OA is driven by strong secondary OA production in summer.The lack of difference between the OM/OC mass ratios in northern and southern cities suggests that the precursor emissions and photochemical processes responsible for secondary OA production are strong throughout the country in the warm season. In winter, levoglucosan and oxalic acid concentrations were both high in all 14 Chinese cities, and the two species were highly correlated (r = 0.72, excluding Chongqing and Xi'an, where levoglucosan concentrations exceeded 2700 ng m −3 and were more than three times the levoglucosan concentrations of any of the other cities).This indicates that Chinese urban OA are strongly impacted by biomass and biofuel burning in winter.The difference in the OM/OC mass ratios between northern and southern cities is mainly due to the higher contribution of biomass and biofuel burning in southern cities.Combined, levoglucosan and oxalic acid constituted on average 1.44 % of the total wintertime OC in southern cities, while they only constituted on average 0.85 % of the total wintertime OC in northern cities.In contrast, alkanes and PAH combined to make up 23.8 % and 15.2 % of the total wintertime OC in northern and southern cities, respectively, reflecting the larger contribution from coal burning for heating purposes in northern Chinese cities in winter (Wang et al., 2006a). Comparison with previous studies and contribution of OA to PM 2.5 Table 4 compares the OM/OC mass ratios calculated in this study against values previously reported for urban OA.Previous estimates of the OM/OC mass ratios for urban OA ranged from 1.3 to 2.16, with higher values in summer than in winter.In China, Huang et al. (2010) and He et al. 
(2011) found OM/OC mass ratios of 1.58 in Beijing in summer and 1.57 ± 0.08 in Shenzhen in fall, respectively.The range and seasonal variability of our calculated OM/OC mass ratios are consistent with these previous studies.Chen and Yu (2007) reported a high OM/OC mass ratio of 2.1 ± 0.3 for Hong Kong with little seasonal variation, likely reflecting the stronger photochemistry and secondary production in southern China year-round.We calculated the seasonal contributions of OA to urban PM 2.5 , using the OM/OC mass ratios calculated from the extracted organic species analyses for each city in summer and in winter.OA constituted 23-45 % (average 36 %) of the urban PM 2.5 mass in summer and 29-41 % (average 34 %) of the urban PM 2.5 mass in winter.The contributions of OA to PM 2.5 mass did not vary significantly with season for each city, nor were they significantly different between northern and southern Chinese cities in either season. High correlation between aerosol oxalic acid and zinc and its implications During our analysis, we unexpectedly found that oxalic acid was highly correlated with Zn in the Chinese urban aerosol samples in summer (all cities r = 0.72; northern cities r = 0.74; southern cities r = 0.89).Figure 2 shows the scatter plot of the molar concentrations of oxalic acid and Zn in the Chinese urban aerosol samples in summer.Table 3 shows the 21 species with highest correlation against oxalic acid in Chinese urban aerosols in summer.All of the species with higher correlation against oxalic acid than zinc were either known aqueous precursors of oxalic acid or other dicarboxylic acids, which may have aqueous production pathways similar to those of oxalic acid (e.g., Ervens et al., 2004;Carlton et al., 2007;Altieri et al., 2008).To the best of our knowledge, such high correlation between aerosol oxalic acid and Zn on a regional scale has never been reported.We discuss the implications here. There are five possible explanations for the high correlation between oxalic acid and Zn in Chinese urban aerosols in summer.The first is that the correlation merely reflects the contrast in PM 2.5 pollution severity among the different cities.We found this not to be the case, as oxalic acid and Zn were still significantly correlated when normalized by PM 2.5 mass (r = 0.54).A second possibility is that oxalic acid and Zn are of the same primary sources.Known sources of aerosol Zn are mainly anthropogenic, with largest emissions from Zn mining and production, followed by vehicle tire abrasion, waste incineration, iron/steel and copper mining and production, fertilizer production, and cement production (Councell et al., 2004).Measurements in Mexico City and in Beijing showed that Zn particles were mainly from industrial activities and waste incineration (Moffet et al., 2008;Li and Shao, 2009).There is also some Zn emission from biomass burning (Gaudichet et al., 1995).However, we found no significant correlations between oxalic acid and other chemical tracers indicative of primary anthropogenic or biomass burning emissions, such as Pb and levoglucosan.In addition, Huang and Yu (2007) showed that there is no significant vehicular emission of oxalic acid.Myriokefalitakis et al. 
(2011) modeled the global oxalic acid budget and showed that the primary sources are far too low to account for the atmospheric abundance of oxalic acid.A third possibility is that the anthropogenic sources that emit Zn also emit the precursors of oxalic acid, but we found that this was not the main driver for the high correlation between Zn and oxalic acid.Sorooshian et al. (2006) analyzed aircraft measurement of urban pollution plumes and found that aerosol oxalic acid was correlated with toluene emitted from anthropogenic sources.However, we found no correlation between Zn and glyoxal or methylglyoxal, the two intermediate oxidation products of toluene leading to oxalic acid formation.Table 5 shows the top 22 chemical species with the highest correlations against Zn in Chinese urban aerosols in summer.Aside from the dicarboxylic acids, glyoxlic acid, and pyruvic acid, Zn was highly correlated with di-iso-butyl and di-n-butyl phthalate, OC, Mo, K + , Mn, and C 16 fatty acid, reflecting the anthropogenic origin of Zn.However, with the exception of OC, none of these latter species had a high correlation with oxalic acid. Our analyses above led to the fourth possibility, which is that the stability or secondary formation of aerosol oxalic acid is somehow enhanced at high Zn concentrations.In this study, oxalic acid was measured by GC after derivation to butyl ester, but it may be present in the aerosol as its anion, oxalate.Kawamura et al. (2010) showed that the aerosol oxalic acid concentrations measured by GC after derivatization agree well (4 % difference) with the oxalate concentrations measured by ion chromatography (IC) without derivatization.Furukawa and Takahashi (2011) hypothesized that oxalate may react with metal ions in the aerosol to form metal oxalate complexes that precipitate, decreasing the hygroscopicity of oxalate.These metal oxalate complexes dissolve when aerosol samples are diluted with water during the pre-processing for either GC or IC analysis, and thus have not been detected previously.Furukawa and Takahashi (2011) used X-ray absorption fine structure spectroscopy to characterize the Zn and Ca in size-segregated urban aerosol samples collected in Japan in winter and in summer.They showed that 20-100 % and 10-60 % of the total Zn and Ca in the fine particles were present as Zn and Ca oxalate complexes, respectively.Some 60-80 % of the total oxalate in 0.65-2.1 µm PM was present as either Zn or Ca oxalates, with Zn oxalate being more abundant.Moreover, they found that the ratio of Zn oxalate to total Zn increased with decreasing particle size, suggesting that Zn oxalate may be formed at the particle surface. 
Our report of high correlation between aerosol oxalic acid and Zn across 14 Chinese cities in summer is consistent with the formation of Zn oxalate complex.Moreover, it suggests that such formation may be the determining factor to secondary oxalic acid abundance on a regional scale, either by enhancing oxalate formation at the particle surface, or by preventing oxalic acid to further oxidize to eventually form CO 2 .This has profound implications not only for the global and regional abundance of aerosol oxalic acid, but also its hygroscopicity and CCN activity (Sullivan et al., 2009), which in turn determines their direct and indirect radiative forcing.Furukawa and Takahashi (2011) further hypothesized that other dicarboxylic acids and heavy metals may also form similar stable organic metal complexes, and our analysis is in support of this.Table 5 shows the top 22 chemical species with highest correlations against Zn in Chinese urban aerosols in summer.In addition to oxalic acid, many other high-concentration dicarboxylic acids, such as phthalic acid, malonic acid, glutaric acid, and azelaic acid, are also highly correlated with Zn (all correlations not driven by outliers and all one-tail p values < 0.025).We added up the molar concentrations of the 12 dicarboxylic acid species with high correlations against Zn shown in Table 5.The molar ratios of Zn relative to the sum of these dicarboxylic acids for the 14 cities ranged from 0.11 to 1.78, with an average of 1.05.This is consistent with the picture that large fractions of both dicarboxylic acid and Zn exist in the aerosol as organic Zn complexes.Glyoxylic acid and pyruvic acid, two important aqueous-phase precursors to oxalic acid, were also highly correlated with Zn.This may imply that Zn participates in the aqueous chemistry of carboxylic acids even before the formation of oxalic acid, although the exact mechanism is currently unknown. In winter, oxalic acid was not significantly correlated with Zn in the Chinese urban aerosol samples (Figure S1 in the supplementary material).This may be because the oxalic acid emitted by biomass burning was present in coarser particles than that produced by secondary production (Wang et al., 2012).Alternatively, it may be because there is an overabundance of Zn in Chinese urban aerosols in winter, such that the oxalic acid concentrations were not limited by Zn concentrations.We added up the wintertime molar concen-trations of the dicarboxylic acids that were strongly correlated with Zn in summer.The molar ratios of Zn relative to the sum of these dicarboxylic acids for each city ranged from 0.22 to 6.06, with an average of 1.88.Moffet et al. (2008) used single particle mass spectrometry to characterize ambient aerosols in northern Mexico City in March 2006.They showed that Zn was mainly from industrial activities and waste incineration, while oxalate was mainly associated with biomass burning and urban sources.They found no significant correlation between aerosol oxalate and zinc. We found no significant correlation between oxalic acid and Ca in the Chinese urban aerosols in summer or in winter, perhaps because oxalic acid exists in finer particles than Ca does.Alternatively, it may be because only water-soluble Ca was measured by Cheng et al. (2012), such that Ca oxalate complexes were filtered out. 
A final possible explanation for the high correlation between Zn and aerosol oxalic acid is that the formation of stable Zn oxalate complex took place on the bulk PM 2.5 filters. Any additional oxalic acid not forming Zn oxalate complex may have evaporated prior to analysis. If this is the case, then all aerosol oxalic acid measurements based on bulk PM filter samples in areas heavily impacted by anthropogenic sources (Zn) and/or dust (Ca) may be significantly low-biased. Clearly, more detailed, time- and size-resolved measurements are needed to examine the roles of Zn and Ca in the aqueous chemistry of oxalic acid and other dicarboxylic acids. Conclusions The calculated OM/OC mass ratio in summer was relatively high (1.75 ± 0.13) and spatially invariant due to vigorous photochemistry and secondary OA production throughout the country. The calculated OM/OC mass ratio in winter (1.59 ± 0.18) was significantly lower than that in summer, with lower values in northern cities (1.51 ± 0.07) than in southern cities (1.65 ± 0.15). This likely reflects the wider usage of coal for heating purposes in northern China in winter, in contrast to the larger contributions from biofuel and biomass burning in southern China in winter. We estimated the net bias of the OM/OC mass ratios calculated based on extracted organic species analyses to be less than −3.6 % in summer and −6.5 % in winter, since the positive bias associated with under-extraction of low-polarity organics and the negative bias associated with under-identification of oxygenated high-molecular-weight organics partially offset each other. On average, organic matter constituted 36 % and 34 % of Chinese urban PM 2.5 mass in summer and in winter, respectively. We report, for the first time, high regional correlations between Zn and oxalic acid in Chinese urban aerosols in summer. This is consistent with the formation of stable Zn oxalate complex in the aerosol phase previously proposed by Furukawa and Takahashi (2011). We found that many other dicarboxylic acids were also highly correlated with Zn in the summer Chinese urban aerosol samples, suggesting that they may also form stable organic complexes with Zn. Such formation may have profound implications for the atmospheric abundance and hygroscopic properties of aerosol dicarboxylic acids. More detailed, time- and size-resolved measurements are needed to examine the interactions between metals and carboxylic acids in aerosols and the impacts on the abundance and hygroscopicity of OA.
Fig. 1. OC concentrations in PM 2.5 samples collected in 14 Chinese cities during summer (grey) and winter (black) of 2003. The dashed line indicates 32° N, which divides northern and southern Chinese cities.
Fig. 2. Zn versus oxalic acid molar concentrations (black) in aerosol samples collected from 14 Chinese cities in summer 2003. Also shown in red are the molar concentrations of Zn versus the sum of the molar concentrations of the 12 dicarboxylic acids (terephthalic acid, 4-ketopimelic acid, oxalic acid, dodecanedioic acid, malonic acid, malic acid, phthalic acid, azelaic acid, glutaric acid, fumaric acid, adipic acid, and sebacic acid) that were highly correlated with Zn in the summertime Chinese urban aerosol samples. The dashed line indicates 1:1 molar concentrations.
Table 2. Mass conversion factors used in this study to calculate the mass of oxide compounds of trace elements. The factors, defined as the ratio of the molecular weight of the common oxide compound over the atomic mass of the trace element, were taken from Kleeman et al. (2000), El-Zanan et al. (2005), and Bae et al. (2006a).
Table 3. The top 21 chemical species with highest correlations against oxalic acid in Chinese urban aerosols in summer. All correlations have one-tail p values < 0.025 and are not driven by outliers; concentrations are averaged over the 14 Chinese cities.
Table 4. Comparisons of OM/OC mass ratios for urban aerosols in the literature. The sampled aerosols were PM 2.5 unless otherwise noted (PM 1 where indicated). EOS: extracted organic species; MB: mass balance; AMS: aerosol mass spectrometry; RA: regression analysis.
Table 5. The top 22 species with highest correlations against Zn in Chinese urban aerosols in summer. All correlations have one-tail p values < 0.025 and are not driven by outliers; concentrations are averaged over the 14 Chinese cities.
2018-05-31T10:11:33.070Z
2013-04-25T00:00:00.000
{ "year": 2013, "sha1": "a0db9bbf0a2c5f21b32c0db301eb21eb705a58fd", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/articles/13/4307/2013/acp-13-4307-2013.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a0db9bbf0a2c5f21b32c0db301eb21eb705a58fd", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
264423083
pes2o/s2orc
v3-fos-license
Diversity and pathogenic characteristics of the Fusarium species isolated from minor legumes in Korea Legumes are primarily grown agriculturally for human consumption, livestock forage, silage, and as green manure. However, production has declined primarily due to fungal pathogens. Among them, this study focused on Fusarium spp. that cause Fusarium wilt in minor legumes in Korea. Diseased legume plants were collected from 2020 to 2021, and diverse fungal genera were isolated from the internal tissues of the plant roots and stems. Fusarium spp. were the most dominant, accounting for 71% of the isolates. They were identified via morphological characteristics and molecular identification. In the pathogenicity test, Fusarium oxysporum and Fusarium fujikuroi generally exhibited high virulence. The host range investigation revealed that the NC20-738, NC20-739, and NC21-950 isolates infected all nine crops, demonstrating the widest host range. In previous studies, the focus was solely on Fusarium wilt disease in soybeans. Therefore, in this study, we aimed to investigate Fusarium wilt occurred in minor legumes, which are consumed as extensively as soybeans, due to the scarcity of data on the diversity and characteristics of Fusarium spp. existing in Korea. The diverse information obtained in this study will serve as a foundation for implementing effective management strategies against Fusarium-induced plant diseases. Symptoms and fungal isolation Wilt symptoms were observed in 14 legume cultivation fields in Chungnam, Chungbuk, Gyeongnam, and Jeonnam province during 2020-2021 in Korea.However, there was a difference in the severity of the wilt depending on the cultivation fields.The results revealed that the incidence of wilt symptoms in all the legume fields was 1%-5%, mostly occurring during the high-temperature period of June-September, following the middle growth period of the legume plants.The observed wilt symptoms of the legume plants included yellowing of leaves, browning of stems and roots, root rot, stunting, wilting, and plant death (Fig. 1).As a result of isolating fungi from diseased plant samples, a total of 41 fungi were obtained.Among them, Fusarium spp.were isolated the most with 29 followed by Colletotrichum spp.were isolated with 4, Macrophomina spp. with 3, Rhizoctonia spp. with 2, and Phytophthora sp., Pythium sp. and Lasiodiplodia sp. with 1 each (Supplementary Fig. S1).A total of 6 Fusarium isolates were obtained from diseased kidney bean samples collected from 4 fields, and 9 isolates of mung bean were isolated from 5 fields.Eight Fusarium isolates were obtained from 3 adzuki bean fields, and 6 isolates were isolated from 2 sword bean fields.Each isolate was obtained from a different diseased plant sample. 
Among the 29 Fusarium isolates, 13 (45%) were identified as FSSC.The macroconidia of these isolates were plump and usually straight with three to five septa, they had oval or obovoid microconidia.The sporodochia of FSSC formed on carnation leaf agar (CLA) were white to beige and usually formed chlamydospores.However, the sporodochia of the NC20-729 and NC20-745 isolates rarely formed, and their microconidia were not observed.Among them, the macroconidia of the NC20-729 isolate were considerably larger and thinner than those of other isolates belonging to FSSC.In addition, the color of sporodochia in the NC20-743 isolate was pale orange rather than white to beige, which differed from the general characteristics of FSSC.Moreover NC20-728 and NC20-776 isolates didn't form chlamydospores.The detailed cultural and morphological characteristics of isolates belonging to FSSC are described in Supplementary Table S1.Seven isolates (24%) were identified as FOSC.They had slightly curved macroconidia with three to four septa and oval or clavate microconidia.In this species complex, sporodochia were generally absent, and when present, they were orange.However, the NC20-772 isolate specifically formed white to beige-colored sporodochia.The detailed cultural and morphological characteristics of the FOSC isolates are described in Supplementary Table S2.In addition, seven isolates (24%) were identified as FFSC, which formed slender macroconidia with no significant curvature.The microconidia were formed in chains and did not form chlamydospore.The detailed cultural and morphological characteristics of the FFSC isolates are described in Supplementary Table S3.Only the NC21-948 isolate formed short chains of microconidia, whereas all the other FFSC isolates formed long chains of microconidia (data not shown).Finally, two isolates (7%) were identified as FIESC, which formed elongated and whip-like macroconidia but did not form microconidia.The sporodochia observed on CLA were orange to beige in color.The detailed cultural and morphological characteristics of the FIESC isolates are described in Supplementary Table S4.Morphological characteristics images of representative isolates of 14 Fusarium species are shown in Fig. 2. The cultural characteristics on PDA media of Fusarium isolate tended to be similar within the same species.However, some species exhibited different growth rates, phenotypes, and pigmentation owing to their intraspecies diversity.For example, unlike most Fusarium spp., NC20-729 of F. azukicola and NC20-745 of F. phaseoli www.nature.com/scientificreports/exhibited a slow growth rate.The previously described morphological and cultural characteristics of 14 Fusarium spp.comprising 29 isolates are described in Supplementary Table S5. Molecular identification by phylogenetic analysis For accurately identifying the 29 Fusarium isolates to the species level, the nucleotide sequences of the translation elongation factor 1 alpha (TEF) and RNA polymerase II second largest subunit (RPB2) regions were analyzed, and their amplification sizes were 600-800 bp and 1,800-2,000 bp, respectively (Supplementary Fig. S2).The phylogenetic tree for the 29 isolates was divided into 4 species complex; FSSC, FOSC, FFSC, FIESC (Fig. 3).The FSSC isolates included F. vanettenii, F. azukicola, F. falciforme, F. solani, F. phaseoli, F. oblongum, F. ferrugineum, F. liriodendri, and F. metavorans.The FFSC isolates included F. fujikuroi, F. concentricum, and F. proliferatum.However, all seven FOSC isolates included F. 
oxysporum, and the two FIESC isolates included F. ipomoeae. The full list of these isolates with their hosts and accession numbers is provided in Supplementary Table S6. Our results reveal that 14 Fusarium spp. were recovered from the minor legumes exhibiting typical wilt symptoms, among which F. oxysporum was the most common species (seven isolates), followed by F. fujikuroi (four isolates).

Pathogenicity test

A pathogenicity test of the 29 Fusarium isolates on their original host plants showed that each isolate had a different virulence (Table 1). Even isolates identified as the same species, such as the F. proliferatum isolates from sword bean, showed different pathogenic responses. The isolates evaluated as highly virulent had an average disease index of 3 or greater and included five isolates of F. oxysporum, four isolates of F. fujikuroi, and a single isolate each of F. solani, F. azukicola, F. vanettenii, F. proliferatum, and F. concentricum. Conversely, the F. falciforme, F. metavorans, and F. ipomoeae isolates were less virulent or nonpathogenic. Isolates obtained from kidney beans generally showed high virulence and, except for F. phaseoli, were also found to cause wilt disease for the first time in Korea.

(Figure 2 panels: (a-i) Fusarium solani species complex (FSSC), (j) F. oxysporum species complex (FOSC), (k-l) F. fujikuroi species complex (FFSC), and (n) F. incarnatum-equiseti species complex (FIESC).)

The highly virulent isolates identified in the study displayed aggressive pathogenicity, leading to root rot and xylem blockage in their host plants (Fig. 4). Because of these pathogenic effects, water absorption in the host plants was obstructed, resulting in the inhibition of their growth. Moreover, only the first leaf developed in the aboveground parts of the plants. The pathogenicity test on the 29 obtained isolates revealed that each isolate exhibited different pathogenicity, even when they belonged to the same species.

Host range investigation

Nineteen Fusarium isolates were selected to investigate the host range, based on the pathogenic characteristics of all the Fusarium spp. (Table 2). Investigating the host range of these selected isolates on nine crops belonging to the leguminous and gramineous families revealed a very diverse host range for each isolate (Table 3). Duncan's Multiple Range Test (DMRT) using the R program (Lucent Technologies, USA) revealed differences in the incidence of the 19 isolates at a 5% significance level. The FSSC isolates did not cause wilt disease in the gramineous hosts rice and maize, even though these isolates were highly virulent. However, in FOSC, the NC20-730 and NC20-773 isolates caused wilt disease in rice as well as in legumes (Supplementary Fig. S3). Unlike the other species complexes, all the FFSC isolates caused wilt disease in rice and generally had a wide host range. Specifically, the NC20-738 and NC20-739 isolates of F. fujikuroi and the NC21-950 isolate of F. proliferatum demonstrated significant pathogenicity in all nine plants. Conversely, the NC20-772 isolate, which exhibited low virulence in the pathogenicity assay, did not cause wilt disease in any of the tested plants except adzuki bean.
Discussion

This study was undertaken because of the limited research on Fusarium wilt in leguminous crops other than soybean, which has resulted in insufficient data in Korea. Up-to-date information on the distribution and characteristics of pathogens is required for the development of resistant varieties, disease diagnosis, and effective disease control measures. In this study, several genera, including Fusarium spp., Colletotrichum spp., Macrophomina spp., Rhizoctonia spp., Pythium sp., Phytophthora sp., and Lasiodiplodia sp., were isolated from wilted legumes as a result of plant sampling and fungal isolation.

Table 1. Pathogenicity of the 29 Fusarium isolates obtained from wilted minor legumes against their original hosts. (a) The disease indices of NC20-737, NC20-738, and NC20-739 were cited from Ha et al. 43. (b) *New pathogens that have not been reported in Korea so far, in each host. (c) Disease index: 0 = no symptoms; 1 = root necrosis and root loss < 30%; 2 = root necrosis and root loss 31-60%; 3 = root necrosis, root loss > 61%, and poor growth; 4 = complete necrosis of root tissue and no roots, or plant death.

The results indicate that, among the various fungal genera isolated from the wilted legumes, Fusarium spp. were the most dominant, accounting for 71% of the recovered isolates (Supplementary Fig. S1). This finding emphasizes the significant role of Fusarium species as the major causal pathogens of wilt symptoms in legumes in the studied area. A previous study conducted in Korea from 2014 to 2016 reported similar results, with Fusarium spp. being the most frequently isolated genus, accounting for 79% of the isolates from soybeans 12. This consistency across different studies in Korea suggests the persistence and prevalence of Fusarium spp. as important pathogens affecting legumes. On the other hand, a Chinese study related to soybeans revealed different results, with different fungal genera being isolated, including Fusarium spp., Alternaria sp., Aspergillus sp., Botryosphaeria sp., Colletotrichum sp., Corynespora sp., and Diaporthe sp. 21. The variation in the distribution of fungal pathogens and dominant species between Korea and China may be influenced by factors such as regional variations, environmental conditions, and different cultivation practices. Therefore, future research is needed to investigate the distribution and density of pathogens by collecting samples by climate region and growth period. This will provide a more comprehensive understanding of pathogen dynamics and their impact on legume crops. The 29 Fusarium isolates of this study were classified into 4 species complexes (FOSC, FSSC, FFSC, and FIESC) according to morphological and cultural characteristics (Fig. 2), and 14 species were identified through TEF (translation elongation factor 1 alpha) and RPB2 (RNA polymerase II second largest subunit) gene sequencing (Fig. 3). However, the NC20-745 isolate could not be clearly distinguished from F. phaseoli and F. crassistipitatum because of the lack of differences in the nucleotide sequences used for molecular identification. According to Aoki et al. 22, F. crassistipitatum can be distinguished from F. phaseoli by its formation of yellow colonies on potato dextrose agar (PDA) medium. Because the NC20-745 isolate forms white colonies, it was identified as F. phaseoli (Supplementary Table S1). Among the 14 species, F. oxysporum was the most common, with 7 isolates (24%), followed by 4 isolates of F. fujikuroi; 2 isolates each of F. solani, F. vanettenii, F. falciforme, F. metavorans, F. proliferatum, and F. ipomoeae; and 1 isolate each of F. azukicola, F. phaseoli, F. oblongum, F. ferrugineum, F. liriodendri, and F. concentricum. In a previous study conducted in Korea, the frequency of 53 Fusarium strains isolated from soybeans was as follows: F. solani (43%), F. oxysporum (34%), F.
asiaticum (9%), F. fujikuroi (8%), and F. commune (6%) 12. In the UK, the isolation frequency of 33 Fusarium strains isolated from leguminous crops was highest for F. coeruleum (30%), followed by F. redolens (18%), F. avenaceum (15%), F. oxysporum (9%), F. sambucinum (9%), F. graminearum (6%), Fusarium spp. (6%), F. solani (3%), and F. equiseti (3%) 23. In Spain, F. oxysporum, F. solani, and Fusarium spp. were isolated from chickpea with wilting and root rot symptoms 24. Among them, F. oxysporum was mainly isolated from dead or dying plants and was the only fungus isolated from plants showing early wilting symptoms. From this point of view, the reason various Fusarium spp. could be isolated in this study is thought to be that the investigation was conducted in the late stage of growth, when the wilting symptoms were evident. Because the Fusarium spp. involved in legume wilt and root rot are so diverse, future research needs to investigate the diversity and isolation frequency of Fusarium spp. through sample collection according to the growth period of domestic legumes and to continuously monitor pathogens.

However, because not all of these isolated Fusarium spp. are pathogenic, pathogenicity tests were performed on the original hosts from which each strain was isolated. The results revealed that four isolates of F. fujikuroi (100%), five of F. oxysporum (71%), one each of F. solani, F. proliferatum, and F. ipomoeae (50%), two of F. vanettenii (100%), and one each of F. phaseoli, F. concentricum, F. oblongum, and F. azukicola (100%) had a disease severity of 2.5 or higher (Table 1). Through this, many additional pathogens that had not been reported in the List of Plant Diseases in Korea were newly identified 25. These include F. azukicola and F. oblongum for mung bean; F. fujikuroi, F. oxysporum, and F. vanettenii for kidney bean; F. fujikuroi and F. oxysporum for adzuki bean; and F. concentricum and F. proliferatum for sword bean. In particular, in the case of F. azukicola, this is the first report in Korea. However, when it was first reported as a new species in Japan 26, it was isolated from red beans, whereas in Korea it was isolated from mung beans. The major pathogen of Fusarium wilt is known to be F. oxysporum, but, similar to the results of this study, various other Fusarium spp. have also been frequently reported as pathogens of wilt and root rot in previous studies 12,17,26,27. Therefore, research on the diversity of other Fusarium species should continue, because they may become problematic pathogens like F. oxysporum. In contrast, three isolates of F. falciforme (100%), two of F. metavorans (100%), one of F. proliferatum (50%), and one of F. ferrugineum (100%) were nonpathogenic. As such, not all of the 14 isolated Fusarium spp. are pathogenic, and even the same species showed different pathogenicity depending on the isolate. Arias et al. 17 reported that only one of 14 F. oxysporum strains from infected soybeans caused root rot disease. Likewise, other Fusarium spp. showed significant differences in pathogenicity according to strain, which is consistent with the results of this study. In addition, the severity of disease caused by Fusarium spp. also differed between studies. In the US, F. graminearum is the most pathogenic, followed by F. virguliforme, F. proliferatum, F. sporotrichioides, and F. solani 17. However, in China, F. proliferatum is reportedly the most pathogenic, followed by F. fujikuroi, F. sulawense, and F. luffae 21.
These results are attributed to complex differences between countries, including dominant species, cultivated legume species, cultivation environments, and cropping systems. Therefore, in-depth investigations according to region, cultivation environment, and cropping system should be conducted in Korea.

Investigating the host range of the 19 selected isolates on 7 leguminous and 2 gramineous plants showed that most isolates were polyxenous, i.e., able to cause disease on multiple hosts (Tables 2, 3). Most Fusarium isolates did not cause wilt disease in corn; however, three isolates, namely NC20-738 and NC20-739 of F. fujikuroi and NC21-950 of F. proliferatum, were found to induce wilt disease in all nine crops, including corn. These three isolates were identified as having the widest host range, as they exhibited the ability to infect and cause disease in multiple plant species. Similarly, a previous study conducted by Amatulli et al. 28 reported that F. fujikuroi and F. proliferatum have a broad host range, encompassing various plant species such as corn, asparagus, fig, onion, palm, pine, and rice. Considering the findings of both the previous and the current study, it can be concluded that F. fujikuroi and F. proliferatum have at least 14 known host species. This indicates their versatility and ability to infect a diverse range of plants, underscoring their significance as potential pathogens with significant agricultural implications.

Furthermore, all isolates belonging to FFSC were found to cause wilt disease in rice. In particular, since F. fujikuroi is known to be the causal pathogen of bakanae disease in rice, the immersion inoculation method was additionally performed to verify whether the F. fujikuroi strains isolated from legumes could cause bakanae disease 29. The results revealed that all four F. fujikuroi isolates increased the height of rice and caused bakanae disease (data not shown). This finding aligns with the results of a previous study conducted by Choi, which confirmed that F. fujikuroi isolated from soybeans could induce bakanae disease in rice. Conversely, when F. fujikuroi isolated from rice was inoculated into soybeans (cv. Daewon, Poongwon, Taegwang, Wooram), the stems were abnormally elongated and eventually the plants died with symptoms similar to those of bakanae disease in rice (data not shown). Thus, it was found that F. fujikuroi can cross-infect rice and legumes. Additional research is crucial to address potential problems that may arise in Korea from the cropping system involving rice paddy rotation and double cropping with legumes. Three isolates of F. oxysporum and one isolate of F. azukicola also caused wilt disease in all seven legumes. With respect to F. azukicola, Aoki et al. 26 reported that eight strains of F.
azukicola isolated from Japan also caused root rot in adzuki beans, kidney beans, mung beans, and soybeans. Thus, it is likely to become a problematic fungal pathogen in the near future. As new pathogens such as this may emerge in the future, continuous pathogen identification and host range monitoring are highly recommended. Currently, research regarding the Fusarium wilt of legumes in Korea is insufficient; hence, information regarding the existing Fusarium spp. pathogens is lacking. Therefore, by investigating previously unreported Fusarium wilt pathogens and their pathogenic characteristics and host range, this study fills a critical knowledge gap in understanding the diversity and pathogenic properties of legume pathogens. The findings of this study can be used in future research on effective Fusarium wilt management strategies, including the breeding of wilt-resistant varieties and cultivation control methods such as crop rotation.

Sample collection and isolation of the fungi

Experimental research and field studies on plants, including the collection of plant material, complied with relevant institutional, national, and international guidelines and legislation, and permission was obtained to collect the legume samples. From 2020 to 2021, 53 samples exhibiting wilt symptoms were collected from minor legumes such as kidney bean, adzuki bean, mung bean, and sword bean in 14 domestic legume plantations, in Hongseong, Boryeong, Seocheon, Yeosu, and other locations (Supplementary Fig. S4). To isolate the fungi from the samples, the discolored internal tissues of the root and stem were cut into small pieces (5 × 5 mm). The surface-sterilized sample pieces were placed on water agar (WA) and incubated at 25 ℃ in the dark. After 3-5 days of incubation, single spores were isolated by the single-spore isolation method 30. Pure fungal cultures were then transferred to PDA slants and stored at 10 ℃ until further use in the following assays.

Morphological identification and characterization of fungal isolates

The isolates were cultured on CLA media 31,32 at 25 ℃ for 14 days under near-ultraviolet (NUV)/dark (12 h/12 h) incubation conditions to investigate their morphological characteristics. Following incubation, morphological characteristics such as the shape and size of the microconidia and macroconidia and the presence or absence and color of sporodochia were investigated 33. To investigate the cultural characteristics, the isolates were inoculated on PDA and cultured at 25 ℃ in the dark for 7 days. Following incubation, the cultural characteristics, including colony growth rate, aerial mycelial color and texture, and colony pigmentation, were investigated 33.
DNA extraction

Genomic DNA was extracted from mycelial powder using the Maxwell® RSC PureFood GMO and Authentication Kit (Promega, Madison, WI, USA) according to the manufacturer's instructions. Each fungal isolate was individually inoculated by placing three to five pieces of PDA with mycelia into 20 ml of potato dextrose broth (Difco, Bergen, USA) and then incubated at 25 ℃ for 5-7 days. Following incubation, the growing fungal mycelia were filtered using a sterilized piece of miracloth. The harvested mycelia were completely dried by freeze-drying overnight and then ground using sterilized beads and a homogenizer to prepare the mycelial powder. The mycelial powder was vortexed with 20 µL RNase A and 30 µL proteinase K and then incubated in a heating block at 65 ℃ for 30 min. After incubation, the samples were centrifuged at 14,000 rpm for 5 min, and 400 µL of supernatant was recovered. The fungal genomic DNA was extracted from this supernatant using the Maxwell® RSC Kit and stored at −20 ℃ until further use in the subsequent assays.

DNA purification and sequence analysis

The final PCR products were examined by electrophoresis on a 1.4% agarose gel at 100 V for 30 min. When multiple bands were present, gel purification was performed; when a single band was present, PCR purification was performed. PCR and gel purification were conducted with the Wizard® SV Gel and PCR Clean-up System Kit (Promega, San Luis Obispo, CA, USA) according to the manufacturer's instructions. The purified PCR products were sequenced in both directions by Bionics Co., Ltd. (Seoul, Korea) using the EF1 and EF2 primers for TEF and the 5f2, 7cr, 7cF, and 11aR primers for RPB2 (Supplementary Table S7). The consensus sequences were then assembled.

Phylogenetic analysis

To identify the species of the isolates, sequence alignments of the TEF and RPB2 regions were conducted using the MUSCLE algorithm of the MEGA-X software 38,39 together with reference sequences of Fusarium spp. obtained from the NCBI GenBank. The phylogenetic trees were constructed based on the Maximum likelihood method and the Kimura 2-parameter model 40,41 and verified by 1,000 bootstrap replicates 31. The F. staphyleae strain NRRL 22316 was used as an outgroup. Information regarding the reference and outgroup strains is summarized in Supplementary Table S8.

Pathogenicity test

The pathogenicity test of the 29 isolates was conducted by the soil inoculation method, using cornmeal-sand inoculum applied to the original host (the host from which each isolate was collected) 42. The cornmeal-sand inoculum was prepared by mixing 240 g dry sand, 26 g cornmeal, and 65 ml distilled water in 500-mL Erlenmeyer flasks, autoclaving twice at 121 ℃ for 30 min, and adding 15 PDA disks (5-mm diameter) with pathogen mycelium. In the control treatment, pure PDA disks were added instead of inoculated disks. The inoculum was incubated at 25 ℃ for 4 weeks without shaking. Following incubation, the cornmeal-sand inoculum and autoclaved soil were mixed at a volume ratio of 3:7 and then divided into 200 mL portions for each pot (72 × 72 × 100 mm). Two germinated seeds were planted in each pot, and three replicate pots were used for each treatment. All plants were grown in a controlled plant growth room at 25-27 ℃ with a photoperiod of 12 h/day. Three weeks after sowing, the disease index was scored on a 0-4 scale for each host according to the degree of root damage (Supplementary Fig. S5).
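To illustrate how the 0-4 root-damage scale defined in the Table 1 footnote translates into the per-isolate means reported in the Results (where an average index of 3 or more was treated as highly virulent), here is a minimal sketch; the scoring function, the example root-loss values, and the plant layout are hypothetical and are not the authors' code.

```python
def disease_index(root_loss_pct, plant_dead=False):
    """Map observed root damage to the 0-4 disease index of Table 1 (footnote c)."""
    if plant_dead or root_loss_pct >= 100:
        return 4          # complete necrosis of root tissue, no roots, or plant death
    if root_loss_pct == 0:
        return 0          # no symptoms
    if root_loss_pct <= 30:
        return 1          # root necrosis and root loss < 30%
    if root_loss_pct <= 60:
        return 2          # root necrosis and root loss 31-60%
    return 3              # root loss > 61% and poor growth

# Hypothetical observations for one isolate: three replicate pots, two plants per pot.
root_loss = [70, 65, 100, 80, 55, 70]             # percent root loss per plant
scores = [disease_index(p) for p in root_loss]
mean_index = sum(scores) / len(scores)

print(f"mean disease index = {mean_index:.1f}")
print("highly virulent" if mean_index >= 3 else "weakly virulent or nonpathogenic")
```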
Investigation of host range

The host range investigation was conducted with isolates selected on the basis of their pathogenicity; hence, certain isolates with low virulence were also included. In total, 19 isolates were investigated, including 6 isolates collected from kidney beans, 5 from mung beans, 5 from adzuki beans, and 3 from sword beans. The host range assay was also conducted using the soil inoculation method with cornmeal-sand inoculum, similar to the pathogenicity test, but with some differences in the experimental procedures. In this assay, when preparing the cornmeal-sand inoculum, 450 mL dry sand, 26 g finely ground cornmeal (food grade), and 70 mL distilled water were mixed in a 1-L Erlenmeyer flask, and 30 PDA disks (5-mm diameter) inoculated with pathogens were added. After incubation for 4 weeks at 25 ℃, a mixture of cornmeal-sand inoculum and sterilized soil in a 2:8 volume ratio was placed in 100 × 40 mm plant culture dishes with holes in the bottom, and eight seeds were planted for each crop. Nine crop plants were used for this host range assay, comprising seven leguminous plants and two gramineous plants. The leguminous crops were kidney bean, mung bean, lentil, sword bean, soybean, adzuki bean, and cowpea; the gramineous plants were rice and corn. The disease index was evaluated 3 weeks after inoculation according to the degree of root damage, as in the pathogenicity test. Then, Duncan's multiple range test (DMRT) was performed at a 5% significance level using the R program to statistically confirm whether there was a significant difference in the incidence of the strains on each host.

Figure 1. Legumes showing typical wilt symptoms observed in 14 domestic legume plantations in Korea. (a-c) Kidney beans in Hongseong, (d-f) adzuki beans in Yeosu, and (g-h) sword beans in Hwasun.

Figure 3. Phylogenetic trees of Fusarium species obtained from wilted legume plants in Korea. The trees were generated using Maximum likelihood analysis of the translation elongation factor 1α (TEF) and RNA polymerase II second largest subunit (RPB2) gene nucleotide sequences. The number at each branch indicates the bootstrap value obtained after a bootstrap test with 1,000 replications. The scale bar represents 0.05 nucleotide substitutions per site.

Figure 4. Pathogenicity test on the original host plant. Control plants (left) and diseased plants inoculated with Fusarium isolates (right).

Table 2. List of the pathogenic Fusarium isolates that were selected for the host range investigation.

Table 3. Host range of the 19 selected Fusarium spp. isolates collected from the wilted minor legumes. (Columns: species complex, isolate, origin, and the mean disease index or occurrence of disease on kidney bean, sword bean, mung bean, adzuki bean, cowpea, soybean, lentil, rice, and corn.)
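The significance testing described in the host range methods above was done with Duncan's multiple range test in R; Duncan's test has no widely used implementation in the standard Python statistics stack, so the sketch below substitutes one-way ANOVA followed by Tukey's HSD as an illustrative alternative. The disease scores are hypothetical placeholders, not the published data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-plant disease indices (0-4) for three treatments on one host crop
# (eight seeds per culture dish, as in the host range assay).
scores = {
    "NC20-738": [4, 3, 4, 4, 3, 4, 4, 3],
    "NC20-772": [0, 1, 0, 0, 1, 0, 0, 0],
    "control":  [0, 0, 0, 0, 0, 0, 1, 0],
}

f_stat, p_value = f_oneway(*scores.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

values = np.concatenate(list(scores.values()))
labels = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
# Pairwise comparison at the 5% significance level used in the paper.
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```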
2023-10-23T10:39:18.121Z
2023-12-18T00:00:00.000
{ "year": 2023, "sha1": "f0cd73a567b7d35c854f22b9b2e80b16ca27e71a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "02a87b949c97c2dacd48237b314f8c080266c3ea", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
187498261
pes2o/s2orc
v3-fos-license
APPRAISEMENT OF THE GEOLOGIC FEATURES AS A GEO-HERITAGE IN ABU-ROASH AREA, CAIRO-EGYPT

Egypt contains geologic heritage that creates many opportunities to develop educational and recreational programs as well as tourism projects. Enhancement of geologic heritage and awareness of its importance is a great challenge. This paper focuses on a neglected area inside Cairo that is facing severe destruction from the people living there. The Abu-Roash archaeological site is located at 31° 02′ 42″ E longitude and 30° 02′ 42″ N latitude. It is one of the most important areas for education and scientific study inside Cairo. Although the area is not designated as a geo-heritage or even a protected area, it contains Cretaceous to upper Eocene sedimentary beds and fossils and a great variety of structural features. Not only are important geologic features found in the area, but an archaeological site is also present, which gives the area great scientific, cultural/historical, aesthetic, and social/economic value. These different criteria qualify the study area for a regional/provincial rank for its geo-heritage. The Abu Roash area possesses good geo-diversity, geo-abundance, and geo-richness, which provides a starting point for establishing a potential geo-heritage site that should be conserved. The area also needs to be recognized as a geological conservation site and should be designated as a protected area under heritage legislation that protects geo-heritage.

INTRODUCTION

In recent decades, a new trend has emerged for the protection and management of geologic features through international organizations. In 1972, the General Conference of UNESCO adopted the "Convention Concerning the Protection of the World Cultural and Natural Heritage" [1]. This convention provides the definition of two types of heritage, where natural heritage is defined as the complex of bio-ecological and geomorphological elements of nature deserving of protection. This twofold point of view is also recognized at the legislative level, notably in the first articles of the EU Directive 92/43, and at the scientific level in the efforts to integrate geomorphology and nature [2,3]. Geo-heritage evaluation has arisen in recent years with growing importance, leading to a place for geodiversity concepts alongside biodiversity [4][5][6][7][8][9][10][11]. Assessment studies of geoheritage are recent, but this type of study is fast growing and depends on quantitative methods [4,6]. Geological features present different contents displaying variable heritage values, depending on the meaning that we attribute to them. As has been pointed out, the diversity of contents and the different protection criteria lead to the existence of a great variety of legal regulations [12][13][14]. As a result, the geological heritage of the planet is irregularly protected all over the world, and objects with different contents may or may not be at risk, depending on a wide range of factors, most of them not related to their contents. The term geoheritage is not applied widely in Egypt, although there are many valuable geologic areas. The area of Abu-Roash represents a unique and easily accessible geologic feature. Unfortunately, the area is not currently monitored by a geologic organization as a geoheritage place, nor is it recorded as a protected area.
GEOLOGIC SETTING FOR ABU-ROASH AREA

Abu Roash constitutes a complex Cretaceous sedimentary succession with outstanding tectonic features. The area lies on the edge of the Western Desert, west of Cairo, Egypt (Figure 1), at a distance of 9 km north of the Great Pyramid of Giza. Its name is derived from the neighboring village of Abu-Roash. The Abu-Roash area is within the western end of the Syrian-arc fold belt, which extends from northern Egypt to Syria [13]. The northwestern desert of Egypt has undergone many different tectonic regimes since Paleozoic time, which caused the formation of many sub-basins, ridges, trenches, and platforms. The exposed lithostratigraphic sequence of the area includes Cretaceous, Middle and Upper Eocene, Oligocene, and Quaternary rock units. The units are, in ascending order: the Sandstone series, Rudista series, Limestone series, Acteonella series, Flint series, Plicatula series, Chalk, Maadi Formation, Sands and Basalt, and Gravel terraces and alluvial deposits. The Abu Roash Massif is also characterized by heterogeneous fold styles with different orientations [14][15][16][17]. The folds are plunging anticlines and synclines oriented in a NE-SW direction. The northeast-trending folds of the area resulted from a combination of compressional stresses initiated by wrenching, in addition to arching of the basement. These folds are believed to have developed during Late Cretaceous to Early Eocene time. The Cretaceous tectonics were severe to the degree that, in many parts of Egypt, they formed the present-day structurally controlled landforms [18]. Among the latter, some domal structures were selected by the petroleum industry for drill testing, such as those found in the Abu Roash area [19]. The major structural elements in the Abu Roash area are folds and faults. These structural elements reflect the structural pattern of the northern Western Desert that is hidden below the younger sediments. These structures developed during the late Cretaceous and are characterized by a compressional tectonic regime. Besides the folds, faults are extensively developed in specific directions: the E-W, ENE, and WNW trending faults are the master faults, with a mostly dextral sense of movement, while those of NW trend are normal faults. N-S, NNE, and NNW sinistral-slip faults and NE thrusts are subordinately developed [20][21][22][23]. The en echelon arrangement of both folds and faults, together with the restriction of deformation to certain narrow belts, the weak development of the conjugate sinistral-slip faults, and the conspicuous rotation of the structural elements, indicates a dextral shear couple. Such a regime principally prevailed, with little convergence along the ENE master faults and divergence along the E-W wrenches. The folds in Abu-Roash are the most important structural elements and played a major role in the deformational history of the area [24]. A series of anticlines and synclines is clearly recognized in the area; the folds range from 100 m to 0.5 km in width and from 300 m to 2.5 km in length. They are disturbed by longitudinal and reverse faults [25]. Some folds are open and form symmetrical structures, whereas others are rather complicated, asymmetric, and plunging. Besides the individual folds, there are domal structures (the El-Hassana dome and the El Ghigiga dome).

GEO-SITES IN ABU-ROASH

The Abu Roash area is one of the most interesting sites inside Cairo, with important geologic features that can be investigated easily [26,27].
This area has been used as the main field trip locale for university students (geology, geography, and archaeology) since the 1980s, which indicates how valuable the area is and that it serves scientific, educational, and archaeological purposes. Rarely found inside Cairo, it has otherwise been recorded in the Bahariya Oasis, 500 km from Cairo (Figures 11 & 12). f - Igneous type: Tertiary-age basalt is found in the Abu Roash area. g - Archaeological site: an old pyramid, sculpted in chalky limestone, known as the fourth, lost pyramid of Egypt (Figures 13 & 14).

DISCUSSION

The rank of the geologic heritage in the Abu Roash area was assessed according to the classification of Ruban and Kuo. The typology of the area is stratigraphical, paleontological, sedimentary, igneous, economical, structural, paleogeographical, geomorphological, and geohistorical [28], which indicates a diversity of geosites in this area and ranks the area from low to moderate in its geologic heritage. According to this typology, the area contains different facies corresponding to the geologic ages recorded there, from the Upper Cretaceous, represented by the chalk facies, to the Eocene, represented by shallow-marine facies (bivalves, nummulites) [29][30][31]. This is one of the rare cases in which archaeological sites are linked with geological sites, here through an old pyramid of the ancient Egyptians; the area needs to be well understood to support correct assessments of geological heritage value, geo-conservation, and geotourism planning. Despite the great importance of the area, it is treated with caution. After calculating the geodiversity index for the Abu-Roash area, the linear scale is 0.55, which indicates that the area ranks as regional/provincial in its geosite importance [32]. Two types of geomorphosite have been stated: "(i) a geomorphosite is a landform to which a value can be attributed; (ii) a geomorphological resource is a geomorphosite that can be used by society. The attributes that may confer value to a geomorphosite are: scenic, socioeconomic, cultural, and scientific. The scenic (aesthetic) criterion is, to a great extent, of an intuitive nature. In this case, the approach to Nature depends upon the individual contemplating it and his/her state of mind at the time. It is derived from feelings which, being personal perceptions, are highly subjective; it is therefore difficult to value and compare with the feelings and perceptions of others." The Abu-Roash area can be classified both as a type (i) geomorphosite and as a type (ii) geomorphological resource that can be used by society, since it contains unique landforms and is used for socio-economic, cultural, and scientific purposes.

CONCLUSION

The geologic heritage of Abu Roash is of regional/provincial rank, and the area shows geodiversity, geoabundance, and georichness in its geoheritage. The area is not currently recorded as a geoheritage site or protected area. The only place recorded as a protected area is the domal structure (the El-Hassana dome), while the aforementioned folds, fossils, and facies are not being monitored by the country; thus there is a high probability of losing Abu Roash as a geoheritage site. It would therefore be desirable to place this area under the control of a specialized organization to save its geologic heritage. Also, the area should be:
• Designated as a protected area under geo-heritage legislation that directly protects its geological and archaeological heritage.
• Subject to a vulnerability assessment that also identifies its tenure status.
• Supported by expert working groups that develop enhanced and practical protection approaches for the geosites.
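Returning to the geodiversity index quoted in the Discussion above: the paper reports a score of 0.55 without reproducing the formula behind it. As an illustration only, the sketch below evaluates one widely used geodiversity index (Gd = Eg·R / ln S, after Serrano and Ruiz-Flaño); the input values are invented, and this is not necessarily the index the authors used.

```python
import math

# Hypothetical inputs: number of distinct physical (geological/geomorphological)
# elements, a roughness/relief coefficient, and the surveyed area in km^2.
Eg = 14
R = 3
S = 40.0

Gd = Eg * R / math.log(S)
print(f"geodiversity index Gd = {Gd:.1f}")
# A normalized 0-1 score such as the 0.55 quoted in the text would additionally require
# rescaling Gd against the range of values in the reference data set used for ranking.
```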
2019-06-13T13:12:47.894Z
2017-09-10T00:00:00.000
{ "year": 2017, "sha1": "97e5aadc2e875d6d2279fea7227b975be0cd8ee2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.26480/mjg.02.2017.24.28", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f19732a8e3d82d6cd3f3e119d27efebd22b9bb1b", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
118896468
pes2o/s2orc
v3-fos-license
MHD simulations of ram pressure stripping of a disk galaxy The removal of the interstellar medium (ISM) of disk galaxies through ram pressure stripping (RPS) has been extensively studied in numerous simulations. Nevertheless, the role of magnetic fields (MF) on the gas dynamics in this process has been hardly studied, although the MF influence on the large-scale disk structure is well established. With this in mind, we present a 3D magnetohydrodynamic (MHD) simulation of face-on RPS of a disk galaxy to study the impact of the galactic MF in the gas stripping. The main effect of including a galactic MF is a flared disk. When the intracluster medium (ICM) wind hits this flared disk, oblique shocks are produced at the interaction interface, where the ISM is compressed, generating a gas inflow from large radii towards the central regions of the galaxy. This inflow is observed for $\sim 150$ Myr and may supply the central parts of the galaxy with material for star formation while the outskirts of the disk are being stripped of gas, thus the oblique shocks can induce and enhance the star formation in the remaining disk. We also observed that the MF alters the shape and structure of the swept gas, giving a smooth appearance in the magnetized case and clumpier and filamentary-like morphology in the hydro case. Finally, we estimated the truncation radius expected for our models using the Gunn-Gott criterion and found that is in agreement with the simulations. INTRODUCTION Lenticular galaxies (S0s) are objects that lie between the elliptical and spiral galaxies in the Hubble sequence. The S0s share properties with both types of galaxies, that is, an old stellar population like ellipticals and stellar disks like spirals. Lenticulars also have prominent bulges (Simien & de Vaucouleurs 1986), low gas content (Gallagher, Faber & Balick 1975) and some observations show that the last star formation episode took place at the bulge (Prochaska Chamberlain et al. 2011;Sil'chenko 2006;Sil'chenko et al. 2012;Bedregal 2012;Johnston et al. 2012Johnston et al. , 2014, but see Katkov et al. 2015). The well studied environmental density-galactic morphology relationship in clusters of galaxies (Dressler 1980) states that late-type galaxies (spirals) are more frequently found in the outskirts of clusters, while early-type galaxies (ellipticals and S0s) are more abundant in the central regions. In the case of groups of galaxies, a similar ⋆ E-mail: m.ramos@irya.unam.mx (MRM) trend has been observed (Postman & Geller 1984). Additionally, in cluster galaxies, the fraction of spirals increases with increasing redshift z, whilst the S0s fraction decreases (Dressler et al. 1997;Fasano et al. 2000). On the other hand, when properties of spiral galaxies in clusters and those in the field are compared (Boselli & Gavazzi 2006 and references therein), cluster spirals are HI deficient and such deficiency increases towards the cluster centre. Also, cluster galaxies show a lower star formation rate (SFR) associated with the lack of HI, and they are redder than field galaxies, which indicates the former form stars passively (Butcher & Oemler 1978). Late-type galaxies also follow more radially elongated orbits than early-type, suggesting they are free-falling into the cluster (Giraud 1986;Dressler 1986;Vollmer et al. 2001;Biviano & Katgert 2004;Aguerri et al. 2017). 
Lastly, cluster galaxies show an increase in radio-continuum emission, probably due to an enhancement in the magnetic field (MF) intensity, possibly caused by compression (Scodeggio & Gavazzi 1993;Rengarajan et al. 1997). These observations point to one or more mechanisms c 2017 The Authors that act in the environment of clusters and groups, stripping the galactic interstellar medium (ISM) from the disks or increasing its consumption rate so that the star formation shuts down and a change in disk colour is produced. Therefore, the idea that spirals are the progenitors of lenticular galaxies has been proposed, suggesting that the study of S0s may help us understand the impact of environment on the evolution of disk galaxies. In clusters, the main mechanisms proposed to explain the transformation of a spiral galaxy into an S0 are: • Ram pressure stripping (RPS, Gunn & Gott 1972): when a galaxy falls into the cluster centre, the hot intracluster medium (ICM) exerts an hydrodynamic pressure on the ISM of the galaxy and, if this pressure exceeds the gravitational force of the disk (Gunn-Gott criterion), then the ISM is stripped off the galaxy. • Galaxy harassment (Moore et al. 1996): close and frequent encounters between galaxies occurring at high velocities, at a rate of one encounter per 1 Gyr, may increase the SFR, rapidly exhausting the gas supply and eventually leading to a redder disk. These interactions will alter the galactic morphology by dynamically heating the disk. • Starvation (Larson et al. 1980): the galaxy loses the envelope of hot gas that supplies the disk's gas reservoir, so the ISM is consumed and the star formation shuts down. There are also other mechanisms that may act in groups of galaxies that can modify the galactic morphology, such as tidal interactions (Icke 1985), and major (Toomre & Toomre 1972;Borlaff et al. 2014) and minor mergers (Aguerri et al. 2001;Tapia et al. 2014). Nevertheless, these processes are not exclusive, that is, more than one might operate at the same time. Comparing these mechanisms, Boselli & Gavazzi (2006) conclude that RPS is the most appropriate to explain the differences observed in between spirals of clusters and those in the field, since RPS removes the gas from the galaxies producing a change in the SFR and colour. Also, RPS is efficient and inevitable near the cluster centre and may alter indirectly the morphology of the disks (if a galaxy loses its gas, the stellar disk is dynamically heated, leading to a thicker disk, Farouki & Shapiro 1980;Sellwood & Carlberg 1984;Fuchs & von Linden 1998;Bekki et al. 2002;Elmegreen et al. 2002). Multiwavelength observations have shown several cluster galaxies that are good candidates to be experiencing RPS (Koopmann & Kenney 2004;Chung et al. 2009;Yagi et al. 2010;Kenney et al. 2014;Boselli et al. 2014;Bekki 2014), since they show truncated gaseous disks and in some cases gas tails, while the stellar disk remains unperturbed. Cayatte et al. (1990) performed a survey of HI for spiral galaxies in the Virgo Cluster where they found that small HI disks lie almost exclusively in the cluster centre in galaxies with high velocities with respect to the cluster mean velocity, which make it possible they lost their gas through ram pressure stripping. Moreover, they observed that galaxies affected by RPS have shown nuclear activity. This could be since the gas pushed to the centre of the galaxy and the compression exerted by the ICM enhances the star formation. Also, Poggianti et al. 
(2016) presented an atlas of galaxies at low redshift that are being stripped of their ISM, with candidates found at all cluster centric distances that showed an enhanced SFR compared to non-candidates of the same mass. This points to the idea that RPS can induce and enhance the star formation. A good example of a galaxy subject to RPS is NGC 4522 in the Virgo cluster. This is the most studied case of a galaxy losing its ISM by this mechanism (Vollmer et al. 2000(Vollmer et al. , 2008Abramson & Kenney 2014;Abramson et al. 2016;Stein et al. 2017) and is possibly in the process of transforming into an S0, since it shows a truncated disk in HI with a 3 kpc radius and a ∼ 3 kpc-length gas tail observed in HI and Hα (Kenney & Koopmann 1999). Also, in the Abell 3627 cluster, the galaxy ESO 137-001 is stripped by the hot ICM (Sun et al. 2006(Sun et al. , 2007Sivanandam et al. 2010;Fumagalli et al. 2014;Jáchym et al. 2014;Fossati et al. 2016). ESO 137-001 presents an 80 kpc long, double X-ray gas tail (Sun et al. 2006), with some HII regions embedded within the tail (Sun et al. 2010), indicating that star formation can go on within the ISM stripped out of the galaxy. Later, in the same cluster, another X-ray gas tail was detected (ESO 137-002), with a double Hα tail (Zhang et al. 2013). The ICM-ISM interaction through the RPS has been studied extensively for years. A wide variety of models have been developed with different methods and techniques. The first models were performed under the assumption of a constant ICM wind, using smoothed particle hydrodynamics (SPH; Abadi et al. 1999;Schulz & Struck 2001) and grid codes (Quilis et al. 2000;Roediger & Hensler 2005;Roediger, Brüggen & Hoeft 2006). These models were in good agreement with the Gunn-Gott estimation for the disk truncation radius. Other simulations were done varying the inclination angle of the disk with respect to the wind direction Jáchym et al. 2009). Yet other models added a variable ICM wind, so the RPS mechanism is not constant (Roediger & Brüggen 2007Vollmer et al. 2001, with a sticky-particle code). Another extension to the RPS models included a multiphase gas disk (Quilis et al. 2000;Tonnesen & Bryan 2009, 2010, where the low-density gas is stripped more easily from the galaxy, but the mass loss of the ISM is not so different from homogeneous disk models. Some other works included star formation, which showed an increase in the star formation in central regions of the target galaxies (Schulz & Struck 2001;Vollmer et al. 2001) and sometimes stars were formed in the gas tails (Bekki & Couch 2003;Kronberger et al. 2008;Kapferer et al. 2008;Steinhauser et al. 2012;Tonnesen & Bryan 2012). Despite the huge variety of RPS models, there are very few including MF. MF have been observed in galaxies from polarized emission, mainly in radio frequencies, and Faraday rotation. MF in spirals have an ordered component, i.e. with a constant and coherent direction, and a random or turbulent component that has been amplified and tangled by turbulent gas flows (Beck 2005, Beck & Wielebinski 2013 and references therein). Combining information obtained with different techniques, it is possible to develop a model for the 3D structure of MF in galactic disks. In spirals, the average total field strength is ∼ 9 µG (Niklas 1995) and the regular field strength is 1 − 5 µG (Beck & Wielebinski 2013), in radio-faint galaxies like M31 and M33 the total field is 6 µG (Gießübel 2012;Tabatabaei et al. 
2008), in gas-rich spiral galaxies the total field is 20-30 µG (Fletcher et al. 2011; Frick et al. 2016), for bright galaxies ∼ 17 µG (Fletcher 2010), in blue compact dwarf galaxies 10-20 µG (Klein et al. 1991), and the strongest total fields are found in starburst and barred galaxies, with 50-100 µG (Adebahr et al. 2013; Chyży & Beck 2004; Beck et al. 2005). Since the degree of polarization is on average low in the spiral arms, the random field there is assumed to be stronger, up to five times the intensity of the ordered field, whilst in the interarm regions the degree of polarization is higher, hence the ordered field should dominate. Additionally, it has been observed that the ordered MF shows a spiral pattern that is offset from the spiral arms of gas and stars (Beck 2005). Ruszkowski et al. (2014) presented simulations of RPS with a magnetized ICM and found that the MF can affect the morphology of the stripped gas tail, since they observed narrower tails than in purely hydrodynamic (HD) simulations. Pfrommer & Dursi (2010) also presented magnetohydrodynamic (MHD) simulations in which the galaxies move through a magnetized ICM. The galaxies in their simulations sweep up the field lines, where polarized radiation is generated; this can be used to map the orientation of the MF in clusters, e.g. the Virgo cluster. In these cases, the MF has been implemented only in the ICM and not in the disks. Some examples of models with magnetized disks are Vollmer et al. (2006, 2007) and Soida et al. (2006), who used the method of Otmianowska-Mazur & Vollmer (2003), in which the MF is evolved via the induction equation using a grid code with the velocity field of the particles, so that the MF is advected with the gas. In these simulations of RPS, the dynamics is first computed with a sticky-particle code, and then a toroidal configuration of the MF is given to the galaxy. Even if the effect of the MF on the gas dynamics has not been taken into account, this method has been useful to explain the polarized radio emission observed in some galaxies that may be affected by RPS, as in the case of NGC 4522. Additionally, Tonnesen & Stone (2014) performed MHD simulations of RPS including a galactic MF, but with an unmagnetized ICM. They found that the MF does not dramatically change the stripping rate of the gas disk compared to pure HD simulations. Nevertheless, the MF has an impact on the mixing of gas throughout the tail: since it inhibits the mixing of the gas tail with the ICM, the unmixed gas survives at large distances from the disk. Besides, the RPS may help magnetize the ICM up to a few µG.

Here, we present MHD simulations of ram pressure stripping of a disk galaxy under the wind-tunnel approximation, for a face-on geometry. Additionally, we performed two purely HD runs to compare with the magnetized case and analyze the impact that the galactic MF has on the stripping of the disk. In §2 we present the initial setup for the simulations, in §3 we describe the resulting gas and MF distribution, and in §4 we discuss our conclusions.

MODEL

We set up a magnetized disk in rotational equilibrium in a fixed gravitational potential. We used the MHD code RAMSES (Teyssier 2002), which is an adaptive mesh refinement code, so we can have a higher refinement of cells in the desired regions, and which allows us to include MF in the simulations. The models were performed in 3D with 11 refinement levels, for a resolution equivalent to 2048³ cells, in a box of 120 kpc in each direction.

Table 1. Length scale and mass parameters of the gravitational potential, as adjusted to approximate M33's rotation curve. M1 and M2 represent the total masses of the bulge and disk, respectively, while M3 is a mass factor for the halo, whose total mass is obtained up to a cutoff radius.

Initial Conditions

The gravitational potential used for our galaxy is based on the model of Allen & Santillán (1991), which is an analytic and simple potential that can reproduce the rotation curve of the Milky Way and is composed of a spherical central bulge, a Miyamoto-Nagai disk, and a massive spherical halo. This potential model can be easily modified to approximate the rotation curves of other galaxies. For this work, we modeled an M33-like galaxy, which is a late-type, low-luminosity spiral galaxy. We modified the mass and scale parameters to the values shown in Table 1 to model the rotation curve of M33 (see Figure 1, solid line) as reported by Corbelli (2003). Nevertheless, for the simulations presented in this work, we removed the galactic bulge component of the potential (M1 = 0), since it generated a large potential gradient in the z-direction (perpendicular to the galactic disk) at small radii that caused problems for our initial setup procedure (described below). Regardless, this should have little impact on our conclusions, especially since the mass of the M33 bulge is small. The velocity profile used for the simulations, both with and without magnetic fields, is also presented in Figure 1.

For the initial conditions, we use a method similar to that of Gómez & Cox (2002). First, we define the radial density and velocity profiles in the galactic mid-plane, assuming that the gas disk is in rotational equilibrium with the gravitational force, the total pressure gradient, and the magnetic tension (eq. 1), where the total pressure P is the sum of the thermal (P_th = c_s² ρ(r, z), with c_s the sound speed) and magnetic (P_B) pressures. The magnetic pressure has two components, P_B = P_B,inner + P_B,outer (eqs. 2 and 3), where R = √(r² + z²), r_b = b1/3 (see Table 1), P_B0 = 1.75 × 10⁻¹³ dyn cm⁻², and n_c = 0.04 cm⁻³. With these expressions for the total pressure and eq. (1), a given midplane density (or velocity) profile uniquely defines the velocity (density) profile. In the bulge region, the rotation curve resembles that of a rigid body, and so we define the rotation velocity as increasing linearly with radius, where v_φ(b1, 0) is the circular velocity obtained from the gravitational potential at r = b1 and z = 0. The density profile in the midplane is then given by eq. (5), which is integrated from r = 0 to b1. For r > b1 we do the converse: we define the density profile as exponentially decreasing in the mid-plane, with h_r = 6 kpc and ρ0 the value found at r = b1 from eq. (5). Once the mid-plane density is calculated, the distribution away from z = 0 is found by assuming magnetohydrostatic equilibrium and an isothermal equation of state, where again P = P_th + P_B. By substituting the magnetic pressure components (eqs. 2 and 3) and the equation of state, one obtains a relation for the vertical density gradient, which is integrated along the z coordinate to obtain the vertical density profile at any radius r. The rotation velocity above the midplane is given by the corresponding relation of Gómez & Cox (2002), where v_A is the Alfvén velocity (v_A = √(2P_B/ρ)).
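As a concrete illustration of the vertical part of this setup, the following sketch integrates the isothermal magnetohydrostatic balance, c_s² dρ/dz + dP_B/dz = −ρ ∂Φ/∂z, upward from the midplane at a fixed radius. The paper's exact expressions for P_B (its eqs. 2-3) and the Table 1 parameters are not reproduced in the text above, so a simple P_B ∝ ρ closure and placeholder Miyamoto-Nagai disk plus spherical halo parameters are used here; the numbers are illustrative only.

```python
import numpy as np

G = 4.301e-6            # gravitational constant in kpc (km/s)^2 / Msun
c_s = 8.0               # km/s, isothermal sound speed (placeholder)
alpha_B = 0.5           # assumed closure P_B = alpha_B * c_s^2 * rho (not the paper's eqs. 2-3)

# Placeholder potential parameters; the values actually used live in the paper's Table 1.
M_disk, a_disk, b_disk = 5.0e10, 2.5, 0.4        # Msun, kpc, kpc (Miyamoto-Nagai disk)
M_halo, r_halo = 5.0e10, 10.0                    # Msun, kpc (Plummer-like spherical halo)

def phi(r, z):
    """Fixed gravitational potential: Miyamoto-Nagai disk plus a spherical halo."""
    zeta = a_disk + np.sqrt(z**2 + b_disk**2)
    return (-G * M_disk / np.sqrt(r**2 + zeta**2)
            - G * M_halo / np.sqrt(r**2 + z**2 + r_halo**2))

def vertical_profile(r, rho_mid, z_max=5.0, dz=0.01):
    """Integrate (1 + alpha_B) c_s^2 d(rho)/dz = -rho dPhi/dz from z = 0 upward."""
    z_grid = np.arange(0.0, z_max, dz)
    rho = np.empty_like(z_grid)
    rho[0] = rho_mid
    for i in range(1, z_grid.size):
        dphi_dz = (phi(r, z_grid[i]) - phi(r, z_grid[i - 1])) / dz
        rho[i] = max(rho[i - 1] * (1.0 - dphi_dz * dz / ((1.0 + alpha_B) * c_s**2)), 0.0)
    return z_grid, rho

z, rho = vertical_profile(r=8.0, rho_mid=1.0)    # density in arbitrary units
h = z[np.searchsorted(-rho, -rho[0] / np.e)]     # height where rho falls to rho(0)/e
print(f"approximate scale height at r = 8 kpc: {h:.2f} kpc")
```

The same integration, carried out with the paper's actual P_B profile and potential parameters, is what produces the thicker, flared magnetized disk described next.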
Figure 2 shows a density map for the initial condition of the disk at y = 0, both for a magnetized and a purely HD (PB0 = 0) cases. It can be seen that, for the magnetized case, the disk is thicker in the outskirts than in the central region, that is, the galactic disk flares in the presence of the MF. Additionally, the scale height of the MHD disk is larger than in HD one. When solving the equation of hydrostatic equilibrium, the MF changes the compressibility of the gas, thus increasing the surface density Σ for the gravitational potential and midplane density used, which results in a heavier disk compared to the HD model. For this reason, we performed another HD simulation with surface density similar to that in the magnetized disk model. We will refer to this as the heavy disk model. To obtain the density distribution of the heavy disk, we solve again the equations (5) and (7), increasing the initial value of ρ one order of magnitude over original HD model, thus increasing ρHD(z = 0) results in Σ heavy ∼ 1.5ΣMHD. The initial density distribution for the heavy model is also presented in figure 2. Galactic magnetic field In the setup described above for the MHD model, the MF has two components (eqs. 2 and 3). While the outer component (r > b1) is purely toroidal, the inner one is random. For the random inner component (r < b1), we defined the vector potential A with, where the angles φr and θr where obtained randomly and A0 is drawn from a normal distribution with dispersion equal to √ 8πPB0. The function f (z) = sech 2 (z/z h ), with z h = 150 pc, modulates the vector potential so its magnitude has the same scale height as the density in the bulge. Once the components for the vector potential are calculated, it is smoothed in order to avoid large fluctuations. Finally, the MF is calculated Binner = ∇ × A. For the rest of the disk (r > b1), the MF in the setup follows a toroidal configuration, with its strength given by the gas density (eq. 3). Figure 3 shows the initial intensity of the MF with arrows overlaid representing the field lines for the MHD model. ICM wind To simulate the ICM-ISM interaction, we worked under the wind-tunnel approximation, this is, we place the galaxy at rest and the ICM flows towards the disk face-on. The ICM wind is unmagnetized and has the same parameters for all models: the wind starts at z = −10 kpc and moves in the +z direction with density nICM = 10 −5 cm −3 and a velocity that increases linearly in time, from 300 km s −1 to 760 km s −1 at the end of the simulation, at 500 Myr. All the computational boundaries are outflowing, except at the bottom where the wind flows inward. Figure 4 shows the evolution of the models: the magnetized and both HD and heavy hydro models, in maps of projected density. The first row corresponds to a time t = 90 Myr. It can be seen that the wind is starting to interact with the disks. In the MHD run (left column), oblique shocks appear on the side of the disk that is facing the ICM wind (at z < 0), because our disk flares in the presence of the galactic MF, giving it a "bow tie" shape. The oblique shocks lead gas of the external parts of the disk towards the galactic centre and continues for another ∼ 150 Myr more until the ICM wind finally surpasses the gravitational force of the disk and starts to sweep the ISM (see §3.3). Since the HD disk (central column) does not flare as much as the MHD one, the ISM-ICM interaction is different. 
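Before continuing with the time evolution, the random inner-field construction described in the "Galactic magnetic field" subsection above can be illustrated with a short sketch: a randomly oriented vector potential, modulated by f(z) = sech²(z/z_h) and smoothed, is converted to B = ∇ × A with finite differences, which keeps ∇·B = 0 to discretization accuracy. The grid size, amplitude, and smoothing scale below are placeholders rather than the values used in the simulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n, box = 64, 4.0                       # cells per side and box size in kpc (placeholders)
dx = box / n
z = (np.arange(n) - n / 2 + 0.5) * dx
z_h = 0.15                             # kpc, scale height of f(z) quoted in the text

# Random orientation angles and a normally distributed amplitude for each cell
# (the paper draws the amplitude with dispersion sqrt(8*pi*P_B0); unit dispersion here).
theta = np.arccos(rng.uniform(-1.0, 1.0, (n, n, n)))
phi = rng.uniform(0.0, 2.0 * np.pi, (n, n, n))
A0 = rng.normal(0.0, 1.0, (n, n, n))

f_z = 1.0 / np.cosh(z / z_h) ** 2      # sech^2 modulation along z (last array axis)
A = np.stack([A0 * np.sin(theta) * np.cos(phi),
              A0 * np.sin(theta) * np.sin(phi),
              A0 * np.cos(theta)]) * f_z[None, None, None, :]

A = gaussian_filter(A, sigma=(0, 2, 2, 2))   # smooth each component to avoid large fluctuations

# B = curl A with centred differences; array axes are ordered (x, y, z).
dAx = np.gradient(A[0], dx, edge_order=2)
dAy = np.gradient(A[1], dx, edge_order=2)
dAz = np.gradient(A[2], dx, edge_order=2)
B = np.stack([dAz[1] - dAy[2],         # Bx = dAz/dy - dAy/dz
              dAx[2] - dAz[0],         # By = dAx/dz - dAz/dx
              dAy[0] - dAx[1]])        # Bz = dAy/dx - dAx/dy

print("rms |B| in arbitrary units:", np.sqrt((B ** 2).sum(axis=0)).mean())
```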
At 90 Myr, in the HD model the wind perturbs the gaseous disk and displaces it from the z = 0 midplane. The background gravitational force pulls back the gas to its original position, mainly in the inner region of the disk, whilst the outer parts of the disk are still being swept by the wind. In the heavy model, most of the gas disk initially at z < 0 is compressed and moved by the wind to a height z ∼ 0, changing the disk symmetry but to a smaller extent than in the HD case. Model evolution At t = 190 Myr (second row), for the MHD run, most of the gas that lied below the galactic midplane was swept by the wind which in turn starts to erode the disk at large radii, where the ram pressure exceeds the gravitational force of the galaxy (see §3.2). The inner region of the disk (r < 5 kpc) is slightly perturbed, with small variations in the z direction, since the gravitational force tries to keep the disk in the equilibrium position. As a result, the gas moves up and down. These fluctuations in z at small radii occur at earlier times in the HD model but it is basically the same behavior: gas at large r is swept by the wind whilst the gas located near the galactic centre remains bound to the disk. The displaced gas reaches a height of ∼ 5 kpc and ∼ 10 kpc above the galactic midplane, for the MHD and the HD case, respectively. The HD model shows a larger erosion than the MHD, with a gaseous disk of radius r < 5 kpc for the HD run, and r ∼ 20 kpc for the MHD one. The gas of the heavy disk has moved just a few kpc from the midplane, showing almost the same radial extension as the MHD, although the heavy disk is denser near the galactic midplane. After 310 Myr of evolution, the wind continues flowing and accelerating towards the disk and reaches vICM ∼ 550 km s −1 at z = 0. For the magnetized disk (third row, left), the gas at r > 10 kpc is ripped off of the galaxy, where the ram pressure is stronger than the galactic gravitational force. The swept gas has increased its height, reaching z ∼ 10 kpc. There are still some vertical motions in the midplane for r < 10 kpc because the gas in this position is adjusting to the balance between the pressure from the wind and the gravitational force in the disk. This process is also present in the HD run, but the oscillating gas is contained in a smaller radius (r < 7 kpc). Additionally, for the HD model, the gas that has been removed from the disk has reached a height of ∼ 20 kpc above the disk midplane. Compared to the magnetized case, the stripped gas has a more diffuse appearance in the HD run, that is, the gas mixes easier with the surrounding, and it is less extended in the radial direction than the MHD case. The HD disk is, at this time, truncated to a radius of ∼ 6 kpc. The heavy disk at t = 310 Myr shows a structure similar to the MHD one: the heavy disk has a radial extension of r ∼ 10 kpc in the midplane, while in the vertical direction the denser gas reaches a height of z ∼ 5 kpc, but the less dense gas is farther away the galactic midplane in the MHD model (z ∼ 10 kpc). At t = 500 Myr the wind has a velocity of ∼ 760 km s −1 at z = 0 for all cases. In the MHD model, the stripped gas reaches a height of ∼ 20 kpc above the galactic midplane. The disk has been truncated to a radius of ∼ 10 kpc, which is approximately half of its original size. 
The dimensions of the displaced gas for the MHD model resembles the one from the HD at t = 310 Myr, showing a similar longitude over the midplane, which suggests that the evolution of the MHD simulation is delayed with respect to the HD run, although differences remain in the morphology: in the MHD case, the swept gas has a smooth appearance and is denser at higher z than in the HD case, which indicates that the MF prevents the gas from mixing with the surroundings, similarly as seen in Tonnesen & Stone (2014). On the other hand, the HD model with 500 Myr of evolution shows a more filamentary and clumpier morphology in the stripped gas, contrary to the smooth appearance that the magnetized gas presents. The HD gas is extended over ∼ 40 kpc above z = 0, and the remaining disk has radius of ∼ 4 kpc, which indicates that this disk has reached a state of equilibrium with the ram pressure, since the gas was rapidly eroded in the first ∼ 200 Myr of evolution and the remaining gaseous disk (in z = 0) has the same radial extension until the end of the simulation. The heavy disk has a size similar to the MHD (r 10 kpc) in the midplane (z = 0), suggesting that the stripping rate for both disks is approximately the same, whilst the displaced gas for the heavy model has a lower z-height. Nevertheless, when the heavy and the HD simulations are compared, the displaced gas of the heavy model resembles the HD case, in that both have a clumpy and filamentary-like structure, with the difference that, in the heavy model, the swept gas is denser because of the initial condition of the gas disk, that is ρ heavy > ρHD (Σ heavy > ΣHD) as mentioned in §2.1, which also results in a slower erosion of the disk. Comparing the evolution of the three models, the MHD and the heavy model are left with a similar remnant disk, with radius r ∼ 10 − 12 kpc which is larger than the HD model (r ∼ 4 kpc) for the same time of evolution. Our results suggest that the stripping rate depends on the MF only through the surface density Σ of the disk: a heavier disk (high Σ) is more difficult to erode since the ICM has more material to sweep, even when the gas is farther away from the gravitational potential well, and thus less bound to the galaxy. This is similar to the results presented by Tonnesen & Stone (2014), where the MHD and the HD disks with the same initial mass do not show a significant difference in the stripping rate. Although the heavy model agrees quite well with the MHD in the rate at which the gaseous disk is removed and the truncation radius (see §3.2), the problem with the heavy disk is that it gives a higher and unrealistic volumetric density ρ in the galactic midplane, because in the absence of the MF, using the same potential to solve the hydrostatic requires a high value of ρ to obtain the same Σ of the magnetized case. Nevertheless this heavy model is useful to investigate the dependence of the stripping with the disk surface density. It is observed in our models that the MF has an impact in the morphology and shape of the swept gas. In the magnetized case, the swept gas shows a smooth structure with denser gas surviving at higher z, similar to the results of Tonnesen & Stone (2014); while in the two non-magnetized models, the gas located above the midplane has a clumpy and filamentary shape. The morphology of the swept gas in the HD and heavy models is due to the equation of state of the gas. 
In the case where an isothermal equation of state is implemented, like in the setup we presented, the gas is more compressible compared to an adiabatic or magnetized gas (with an adiabatic index γ > 1). In our isothermal models, when the wind hits the galaxy, the gas disk is compressed so that clump-like regions form, leading to the development of eddies due to instabilities in the gas and when the eddies are pushed upwards by the wind they generate a tail. This behavior of the gas is similar to the flow of the cigarette smoke, giving the filamentary and clumpy shape to the swept gas in our HD simulation. It is noticeable that, in some aspects, the swept gas in our MHD model resembles the HI distribution of the spiral galaxy NGC 4522, a galaxy considered a classic example of RPS (see the figure 2 from Kenney et al. 2004): the HI distribution is asymmetric with respect to the stellar disk, is cap shaped, the gas contours are compressed in the upstream side, and it is concave or curved to the downstream side. On the other hand, the stripped gas is not as far from the NGC 4522 disk as the gas distribution in our MHD model at t = 500 Myr. Gunn-Gott criterion The Gunn-Gott criterion (GG, Gunn & Gott 1972) estimates the radius at which a disk galaxy, experiencing the ram pressure face-on, will be truncated. This is determined by equalizing the ram pressure Pram = ρICMv 2 ICM exerted by the wind and the gravitational restoring force in the disk, which is the product of the gravitational force of the galaxy and the surface density of the gas disk F (r)Σ(r), that is, the truncation radius is defined by the position where Pram = F (r)Σ(r). Disk surface density In order to verify that our simulations satisfy the GG criterion, we estimate the truncation of our disks measuring the surface density in the z direction. We did these calculations over time also to study the differences in the stripping rate for the three models. Figure 5 shows the evolution of the disk surface density (Σ) over time obtained for |z| ≤ 5 kpc, for the MHD model, and |z| ≤ 3 kpc for the HD and heavy disks. The differences in the Σ integration range in the z-direction are due to the different thickness of the MHD with respect to the HD and the heavy ones. We define the truncation radius as the one where Σ decays abruptly. At t = 90 Myr, the disks are barely perturbed, as can be seen by comparing with figure 4, and their surface density distribution is similar to the initial condition: the surface density decreases slowly in r, and decays rapidly at r ∼ 21 − 22 kpc, except for the MHD disk where the decay is less abrupt. Given that in the MHD case Σ decreases approximately two orders of magnitude (10 −4 − 10 −6 gr cm −2 ) in r > 20 kpc, to obtain the truncation radius of the disk we took the midpoint for this range of densities, in the log-scale, and then we found the value of r where we have this density (Σ = 10 −5 gr cm −2 ), giving r ∼ 23 kpc. As mentioned before (see figure 2), the MHD disk has a higher surface density than the HD disk by approximately one order of magnitude due to the extra support that the MF provides. By construction, the heavy disk has a value of Σ heavy ∼ 1.5ΣMHD (see §2.1), but the decay of Σ heavy is more abrupt than the MHD case and lies between the range of r = 21 − 22 kpc, similarly to the HD model. 
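As a rough illustration of the two diagnostics used in this section, the following sketch computes an azimuthally averaged surface-density profile from a snapshot, a truncation radius defined by the threshold quoted above (10⁻⁵ g cm⁻²), and the Gunn-Gott radius where the ram pressure first exceeds the restoring force per unit area. Array layouts, units and binning are assumptions for illustration, not the analysis pipeline actually used in the paper.

```python
import numpy as np

def sigma_profile(rho, x, y, z, dz_cm, z_cut, r_bins):
    """Azimuthally averaged Sigma(r) from a density cube rho[ix, iy, iz] (g cm^-3)."""
    keep = np.abs(z) <= z_cut                     # e.g. |z| <= 5 kpc (MHD) or 3 kpc (HD/heavy)
    col  = (rho[:, :, keep] * dz_cm).sum(axis=2)  # column density of each (x, y) cell, g cm^-2
    R    = np.hypot(x[:, None], y[None, :])       # cylindrical radius of each column, kpc
    idx  = np.digitize(R.ravel(), r_bins)
    return np.array([col.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(r_bins))])

def truncation_radius(r, sigma, threshold=1e-5):
    """Outermost radius where Sigma still exceeds the chosen threshold (g cm^-2)."""
    above = np.where(sigma >= threshold)[0]
    return r[above[-1]] if above.size else np.nan

def gunn_gott_radius(r, restoring_force, n_icm_cm3, v_icm_kms):
    """
    Smallest radius where the ram pressure exceeds F(r)*Sigma(r) (both in dyn cm^-2);
    restoring_force would be built as np.abs(dphi_dz).max(axis=1) * sigma from the potential.
    """
    p_ram = (n_icm_cm3 * 1.6726e-24) * (v_icm_kms * 1e5) ** 2
    stripped = np.where(p_ram > restoring_force)[0]
    return r[stripped[0]] if stripped.size else np.inf

# r would be the bin centres, e.g. r = 0.5 * (r_bins[:-1] + r_bins[1:])
```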
At t = 190 Myr, the surface density in the inner MHD disk is still similar to the previous snapshot, but Σ decreases more rapidly with radius than at 90 Myr, resulting in a disk with r ∼ 19 kpc, which is a clear sign that the ICM has started to erode the gas of the disk. At this same time, for the HD run, the disk has been eroded more efficiently than the magnetized one, with ΣHD showing an abrupt drop at r ∼ 6 kpc. The heavy disk shows a truncation radius of r ∼ 20 kpc and is also evolving similarly to the MHD run. When the simulations have reached a time of 310 Myr, it is clear that the surface density profile has changed for the MHD and heavy disks, due to the accelerating wind that sweeps the gas of these galaxies. This is observed in the drop of the density and the smaller radial extension of the disks, which have been reduced considerably to r ∼ 15 kpc for the MHD and heavy models. The HD model evolves faster in time than the other models, as expected, since most of its disk was swept at earlier times (t = 190 Myr), presenting a gaseous disk with r ∼ 7 kpc.
Figure 5. Evolution of surface density for the MHD (top), the HD (middle) and the heavy (bottom) models as a function of galactocentric radius. The surface density is calculated up to z = ±5 kpc from the midplane for the MHD disk and z = ±3 kpc for the HD and heavy disks. The MHD and heavy disks (top and bottom) are eroded more slowly than the HD (middle).
The ICM wind keeps eroding the gaseous disks of the three models until the end of the simulation (t = 500 Myr), leaving a remnant disk with r ∼ 12 kpc and r = 10 − 12 kpc for the MHD and heavy models, respectively. The HD simulation was run longer, but the disk reaches an approximately steady truncation radius of ∼ 4 kpc at t = 500 Myr, showing that the erosion of this model was faster and more efficient than in the MHD and heavy disks, which lose their gas at a slower rate, as mentioned in §3.1, and whose disks are truncated at a larger radius. It is worth mentioning that the increase in Σ is related to the following numerical factors. First, there is the difficulty of modeling a cylindrical system on a Cartesian grid. The gas fluxes across grid boundaries in this mismatch lead to errors when the curvature of the circular orbits is large, generating spurious radial flux and a lack of proper rotational support. Second, there is the rapidly changing gravitational potential in the central regions of the galaxy. The HD disk has a scale height of ∼ 200 pc or even smaller at r = 0, and even with the best spatial resolution achieved, only a few grid points resolve the hydrostatic structure. We tested how much our models deviate from equilibrium by performing simulations of isolated MHD and HD disks and found that the ill-resolved hydrostatics and rotation generate a collapse of material in the centre of the galaxy, which leads to an increase in the surface density. In the isolated HD disk, the surface density increases by 1 to 2 orders of magnitude within r < 2 kpc from t = 0 to t = 500 Myr. There is also an infall of material in the isolated MHD model, but since this disk is more extended in the z-direction, the increase in the surface density is less than one order of magnitude for the same radii and time of evolution compared with the HD case; this is because the grid effects are smaller in the MHD model. When the wind is on, the increase of the surface density is lower than in the isolated cases, since the interaction with the wind diminishes this effect.
With this in mind, the surface density is not an adequate measure of the inflow of gas derived from the oblique shocks in our models (see §3.3).
Disk truncation
Figure 6 shows the gravitational force per unit area for the MHD (solid thick line), the HD (dashed thick line) and the heavy (dot-dashed thick line) disks, approximated as follows: using the gravitational potential of the background axisymmetric model, we obtained the maximum force in the z direction as a function of r and multiplied it by the surface density Σ(r), that is, Fz(r)Σ(r) with Fz(r) = |∂Φ(r, z)/∂z| evaluated at z = zmax, where zmax is the point where the gravitational force is maximal. Notice that the gravitational potential is the same for all models, but the differences in the restoring force are due to the different initial surface densities of the disks (see §2.1). This gravitational restoring force is compared with the ram pressure Pram = ρICM v²ICM exerted by the wind (represented by the horizontal lines in the figure). The gravitational force of the disk decreases with increasing radius so, for a given set of wind parameters, we expect that the disks are truncated at the radius where both forces are equal, that is, where the Pram line and the force lines cross each other. For the wind parameters, we have n = 10 −5 cm −3 and the velocity is taken from the simulation. Since it increases in time, we chose the value of vICM at z = 0, when it has reached the disk midplane. The lines for the ram pressure are labeled according to the time at which the wind velocity was calculated. Following the GG criterion, the truncation radius expected for the MHD, HD, and heavy disks is r ∼ 16 kpc, ∼ 8.5 kpc and ∼ 18 kpc, respectively, with the wind velocity measured at t = 90 Myr; these values are smaller than the radial cuts of our disks measured in the simulation (rMHD ∼ 23 kpc and rHD,heavy ∼ 21 − 22 kpc). At t = 190 Myr, the GG truncation radii are also smaller for the MHD and heavy disks compared to the ones calculated from the simulation: 14.5 kpc (19 kpc in the simulation) for the MHD and 16.5 kpc (20 kpc in the simulation) for the heavy model. For the HD disk, the truncation radius measured from the simulation is in better agreement with the one predicted by GG, r ∼ 6 − 7 kpc, which could be due to the fact that this model loses its gas faster than those with a higher initial Σ. At later times, from t = 310 to 500 Myr, the truncation radius from GG is more similar to that observed in the simulations, showing slight differences of 1 − 2 kpc in the two non-magnetized models. By the end of the simulation, t = 500 Myr, the radius of the MHD disk should be r ∼ 8 − 9 kpc according to GG, while the size measured is r ∼ 12 kpc. For the HD model, GG predicts r ∼ 2 kpc while we measure r ∼ 4 kpc in the simulation. Finally, in the heavy disk we have r ∼ 10 kpc and r ∼ 10 − 12 kpc with the GG criterion and measured in the simulation, respectively. The three models are a reasonable fit to the GG criterion, although the HD and heavy ones are marginally better. This could be due to the assumptions of GG: a zero-width disk (the HD model disk has a scale height of ∼ 200 pc) and no consideration of the effect of the MF in the gas dynamics. Still, even if the values for the truncation radius do not coincide exactly with the calculations from the simulations, the GG criterion yields a good approximation of how much a gaseous disk may be stripped due to ram pressure.
Figure 7. The oblique shocks are produced (due to the "bow tie" shape of the disk) at the disk-wind interface and move the gas from the outer galaxy towards the galactic centre.
Figure 8. Flux of disk mass integrated in z as a function of time. The radial flow, azimuthally averaged, is z-integrated within the range |z| ≤ 10 kpc. The colour bar shows the inward motions of the mass flux in blue and the outward motions in red. The oblique shocks appear at r = 5 − 10 kpc at t ∼ 100 Myr and increase outwards, driving gas to smaller radii. The shocks reach their maximum strength at t ≈ 250 Myr and after that they start to vanish from the outskirts of the disk (r > 10 kpc) when the ram pressure increases and instead generates an outward flow of gas at t > 300 Myr.
Oblique shocks
The MHD model has a flared disk (see §2.1), since the MF yields a less compressible gas layer. Therefore, when the ICM wind reaches the galaxy, an oblique shock is generated at the wind-disk interface. The shocks change the initial distribution of gas in the disk, as can be seen in the density contours in figure 7 (upper panel), and are present over most of the wind-facing side of the disk. The figure compares the initial distribution of the gas density of the disk (dotted line) with that at t = 90 Myr (solid line). The most diffuse gas, with n = 10 −4 cm −3 (see the −4 contour), is pushed up and compressed so that the density is more extended in the +z direction than in the −z direction. Conversely, gas with 10 −3 cm −3 (−3 contour) is more extended in the −z direction compared to the initial distribution because of the accumulation of material due to the compression. This compression advances to smaller radii, so that there is denser gas in the central regions of the galaxy, leading to an expansion of the inner part of the disk with n = 10 −2 cm −3 (−2 contour) in the radial direction and below the midplane in 5 kpc < r < 7 kpc. These oblique shocks lead to a radial inflow of gas toward the inner regions of the galaxy. The middle panel of figure 7 shows this (azimuthally averaged) mass flux at t = 90 Myr, with the gas density contours at t = 90 Myr from the upper panel overlaid. Notice that the inward flux coincides with the shocked gas, that is, the shocks funnel the gas from larger to smaller radii. The oblique shocks and the inward flux of mass they produce are present at all radii, being stronger in the inner region of the disk (2 kpc < r < 10 kpc) than in the external region. The inflow of gas can also be observed through the flux arrows shown in the bottom panel of figure 7, where they are overlaid on a density slice (as in figure 2) in the y = 0 plane, also at t = 90 Myr. The flux arrows show the motion of gas towards the centre of the galaxy produced by the oblique shocks, as mentioned earlier. The shocked gas at the interface of the disk and the wind (z < 0) is pushed up and redirected to smaller radii. These shocks and the inflow of gas from the outskirts (r > 10 kpc) may supply gas to the central regions of the disk and ignite star formation or nuclear activity, until the ram pressure increases and starts to sweep the gas from the galaxy. Figure 8 shows the evolution in time of the z-integrated radial gas flux. The gas flux is integrated over a height of |z| ≤ 10 kpc. Blue colour represents the radial inflow and red colour the outflow. As previously mentioned, the gas is compressed and funneled to the inner regions of the galaxy. Both the shocks and the flow appear at t = 90 − 100 Myr and have a radial extension of r = 6 − 8 kpc, where the flux is maximum at this time.
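The z-integrated, azimuthally averaged radial mass flux plotted in figure 8 could be extracted from a snapshot along the following lines. The |z| ≤ 10 kpc cut follows the description above, while the array layout, units and binning are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

def radial_mass_flux(rho, vx, vy, x, y, z, dz_cm, z_cut, r_bins):
    """
    Azimuthally averaged radial mass flux, integrated over |z| <= z_cut,
    as a function of cylindrical radius (negative values correspond to inflow).
    rho, vx, vy are cubes indexed [ix, iy, iz]; x, y, z are 1-D coordinate arrays.
    """
    keep = np.abs(z) <= z_cut
    R = np.maximum(np.hypot(x[:, None], y[None, :]), 1e-6)   # avoid division by zero at r = 0
    # Project the velocity onto the cylindrical radial direction
    vr = (vx * x[:, None, None] + vy * y[None, :, None]) / R[:, :, None]
    col_flux = (rho[:, :, keep] * vr[:, :, keep] * dz_cm).sum(axis=2)  # per-column flux
    idx = np.digitize(R.ravel(), r_bins)
    return np.array([col_flux.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(r_bins))])
```

Evaluating this profile at successive snapshots and stacking the results gives a map like figure 8, with inflow and outflow distinguished by sign.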
As the wind pushes a larger portion of the disk, the shocks and the inflow they produce increase in radius as time increases. For example, in 15 kpc < r < 20 kpc the inflow is active from t ∼ 150 Myr to t ∼ 300 − 350 Myr, which means that the oblique shocks can funnel the gas from the outskirts of the disk to smaller radii before the wind sweeps it out. The strongest inflow is generated in the time interval of ∼ 100 − 250 Myr after the shocks appear, that is, in the t ∼ 200 − 350 Myr mark in figure 8. After t = 300 Myr, the inflow from the outskirts, that is, the gas originally located in r > 10 kpc, becomes weaker until it starts to vanish at t ∼ 400 Myr. This happens when the wind accelerates and surpasses the galactic gravitational potential, generating an outward flow instead and finally removing the gas of the disk. The swept gas is represented by the red area near the end of the simulation, t > 400 Myr and in the radial range of r = 10 kpc to r = 17 kpc. Even though at times t >∼ 350 Myr for radii r < 10 kpc, there is still an inflow of gas to the centre of the disk, the motion in this region is more random or disordered due to the interaction with the highspeed wind (vICM > 700 km s −1 ), which is observed in the blue and red bands. Additionally, the initial flaring of the disk has almost vanished since the wind has compressed and displaced the gas below the midplane. It can be seen from figure 2 that the heavy disk is also flared but to a lesser extent than the MHD one. We also analyze the oblique shocks in the heavy disk model. We observed an increase in the density of the disk due to a compression of the gas in the wind-disk interaction zone, as in the MHD model. Nevertheless, the layer of the shocked gas is less prominent, with a thickness of ∼ 200 pc which is ≤ 0.4 of the shocked region in the MHD case, where the thickness of the compressed gas ranges from ∼ 500 pc − 1 kpc in some regions of the disk. The intensity of the azimuthally aver-aged flux (as a function of r and z) for the heavy disk has nearly the same maximum value reached in the MHD case at the time shown in figure 7 (t = 90 Myr), but the strongest flux in the heavy model is located near the centre of the disk, where our setup is not very reliable due to the grid. Studying the flux arrows for the heavy disk we also observed that the vertical motion of the gas in the +z direction dominates over the radial one, that is the wind mostly moves the gas upwards before funneling it to the centre of the galaxy. Addiotionally, the compressed layer of gas is closer to the galactic midplane, so the shock is less oblique. The inflow of gas as a function time for the heavy model is on average a factor of 0.5 lower than in the MHD since in the latter there is gas located at higher z and therefore, when the flux is integrated in the z direction, there is more gas moving towards the centre of the disk and the total flux as a function of time and r is higher. The strongest inflow in the heavy model is present between t ∼ 150 − 300 Myr, lasting ∼ 150 Myr. At t > 300 Myr the motion of gas in the heavy model is more disordered in the inner disk (r < 10 kpc) until it is finally removed by the wind. The inflowing gas driven by the oblique shocks raise the possibility of a strong star formation episode in the central part of the galactic disk while the outskirts of the galaxy are being stripped of gas. Observations suggest that S0 galaxies had their last star formation burst in their bulge (Prochaska Chamberlain et al. 
2011;Sil'chenko 2006;Sil'chenko et al. 2012;Bedregal 2012;Johnston et al. 2012Johnston et al. , 2014, but see Katkov et al. 2015), so this mechanism can provide the central regions with the gas necessary for that burst. Additionally, galaxies undergoing RPS have shown unusual nuclear activity, possibly because the gas is being pushed to the centre and also an enhanced star formation in the region where the gas is compressed by the ICM, that is, the star formation is induced and enhanced by the ram pressure (Cayatte et al. 1990;Poggianti et al. 2016). Poggianti et al. (2017), found a very high incidence of AGN (Seyfert 2) among jellyfish galaxies from MUSE data and they conclude that ram pressure triggers the AGN activity. There are several points that need to be kept in mind when comparing our simulations with the above quoted results. First, in this work we present only a generic model for a flared disk galaxy. More studies must be performed in order to verify the presence of these oblique shocks in galaxies. In our model, the flare is created by a magnetic field, but this is not the only mechanism to create such a disk (for example, a different equation of state for the gas as presented by Roediger & Hensler 2005). The second point to consider is that the central regions of the disk in our simulation are too idealized, and so it is hard to state how much of the inward flux created by the shock actually reaches the centre of the galaxy. Also, the perfectly face-on geometry of the interaction might have an influence of the shocked-gas galactic inflow. More numerical experiments, with less idealized conditions, will be presented in future contributions. Nevertheless, as long as the disk flares, oblique shocks should appear for a face-on ICM wind interaction and the presence of a magnetic field is a good mechanism to generate such a flare. Also, since a magnetized disk is less compressible than a pure HD one, the shocked layer in the MHD model will be more pressurized and will try to drain gas, either to the outskirts and/or to the central regions of the disk. CONCLUSIONS We performed MHD and HD simulations of a disk galaxy subject to RPS to analyze the impact of the MF in the dynamics of the gas during the stripping event. Both models were set up in hydrostatic equilibrium with the gravitational potential of an M33-like galaxy, without the galactic bulge component of the potential. We found that the galactic MF gives us a thicker gaseous disk than the HD one, which change the dynamics of the model, that is, we have gas farther from the galactic potential well (in the z direction) in the MHD, plus the surface density in z is higher than in an HD disk with the same midplane density. When the ICM wind hits the disks, at the beginning of the simulation, the MHD disk is hardly affected by the wind, since no significant changes were observed in the initial shape of the disk, only the compressed gas in the interaction interface. The HD disk is easily perturbed and pushed off the galactic midplane by the wind. Then the gravitational potential pulls back the material to disk, generating an infall of gas to the disk until the ram pressure exceeds the gravitational force and removes the gaseous disk. The evolution of both models continues as the wind velocity increases. Their ISM is removed of the disk, from the outside-in, and reaches higher z above the midplane. 
When the models have evolved for t = 500 Myr, the swept gas in the MHD case is denser, reaches a height of approximately z ∼ 20 kpc, and the disk has been truncated to r ∼ 10 kpc. In the HD run, the swept gas is farther away from the galactic midplane, z ∼ 40 kpc, and has a lower density than the MHD. The disk is also eroded to a smaller radius of r ∼ 4 − 5 kpc. These results show that the removal of the gas disk is less efficient in the MHD model than in the HD case with the same midplane density. The main differences found so far between the models are: • The HD disk is more easily eroded than the MHD one, because in the magnetized case we have a higher surface density Σ and the gas is less compressible than in the HD model. Since the surface density strongly affects the stripping rate, we developed an HD model with approximately the same Σ as the MHD, which shows a similar stripping rate. This "heavy" HD disk has a very high midplane volumetric density that makes it unrealistic. • The swept gas for the MHD model has a smooth appearance whilst for the HD models (both the regular and the heavy disks), the gas above the galaxy has a clumpier and filamentary-like morphology, that is, the MF mainly affects the shape and structure of the swept gas. Previous RPS simulations have obtained broader tails, that is the swept gas of the disks, compared with observations of jellyfish galaxies (galaxies undergoing RPS). It was expected that additional physical properties, such as MF, cooling, star formation, etc. may help to solve this problem, presenting narrower tails in the simulations. Ruszkowski et al. (2014) presented MHD simulations with radiative cooling and self-gravity for a magnetized ICM only, and showed that the MF can give narrower gas tails compared with HD models. Our runs show the opposite behavior, the swept gas from the disk in the MHD model is broader than the HD, but we do not have the same initial set-up as them. The differences in the tail width could be also accounted for the radiative cooling. HD simulations performed by Tonnesen & Bryan (2010) including radiative cooling showed narrower tails in better agreement with observations, compared to non-cooling models. On the other hand, the swept gas from our MHD model shows a smooth structure, while the HD models looks clumpier, similarly to tails observed in Ruszkowski et al. (2014). The differences observed in the shape and morphology of the swept gas in our models lie in the equation of state of the gas, that is an isothermal gas is more compressible than an adiabatic (e.g. Roediger & Hensler 2005;Roediger & Brüggen 2007Tonnesen & Bryan 2009, 2010 or a magnetized gas, and when the wind hits the galaxy clump-like regions appear in our HD simulations. When these regions are pushed and eroded by the wind, they generate tails in the swept gas, where the flow is similar to the cigarette smoke, giving the filamentary and clumpy shape to the swept gas in our HD simulation. A more detailed analysis of the morphology and structure of the gas tails will be presented in the near future (Ramos-Martínez et al. in preparation). Tonnesen & Stone (2014) also performed RPS models with galactic MF, with different configurations and intensities for the field, and they found that the MFs do not make a significant difference in the stripping rate of ISM, but the MF inhibits the mixing of the gas tail with the surrounding ICM and unmixed gas survives at larger distances from the disk. 
In our results we see a similar trend, since the swept gas in the MHD model also remains unmixed for longer time, despite the fact that the z-height is smaller compared to our HD run. The differences in the tail appearance and structure for their MHD and HD models is not so evident or dramatic. Since the approach of our models is not the same as Ruszkowski et al. (2014) and Tonnesen & Stone (2014) we cannot make an analytical comparison with their works. We consider that, in order to understand if MF can make a significant difference and its relevance in the interaction of the ICM-ISM, further investigation will be needed. Even when our HD simulation ran for 1 Gyr, the model reached equilibrium at t 500 Myr: the truncated disk remained with the same radius although the wind was still accelerating to a maximum velocity of 1000 km s −1 before the simulation ended. Therefore, we can assume that the MHD run has also reached equilibrium with the ram pressure, or is near to it. The remaining gaseous disk could be removed by other mechanism, like interactions or fly-by's with other galaxies (e.g. galaxy harassment), this should be taken into account because these objects are not completely isolated, specially in clusters. Interactions between galaxies can remove the gas or trigger star formation so the ISM is consumed or exhausted. It is well known that RPS works well removing the gas of the galaxies, but this process fails in reproduce other S0s features, like higher bulge-to-disc ratios than spirals, given that RPS has been proposed as a transformation mechanism of spirals to S0s. For our magnetized case, with inefficient RPS, we found an interesting behavior in the gas: there are motions of gas from large radii to the galactic cen-tre. This phenomenon occurs only in the early stages of the simulation, when the wind hits the disk, and it is produced by oblique shocks at the interface of the interaction. The oblique shocks appear because of our flared gas disk due to the MF presence and lead the gas to the centre of the disk, which may help to maintain a reservoir of gas available for star formation in the central region of the galaxy, which in consequence could produce a thicker bulge that may lead to a higher bulge-to-disk ratio. Studies have shown that the last star formation burst in S0s galaxies took place in the bulge (Prochaska Chamberlain et al. 2011;Sil'chenko 2006;Sil'chenko et al. 2012;Bedregal 2012;Johnston et al. 2012Johnston et al. , 2014, but see Katkov et al. 2015). Besides, if new stars are born from the remaining gas in the centre, their strong winds could expel the rest of the ISM from the galaxy. Other observations of galaxies affected by RPS have shown unusual nuclear activity, that is, the gas may be pushed to the centre and the compression produced by the ICM enhances star formation: the star formation is induced and enhanced by the ram pressure (Cayatte et al. 1990). Poggianti et al. (2016) showed an atlas of stripping candidates where most of their galaxies presented higher star formation compared to non-stripped galaxies. From this results, the oblique shocks can be seen as a mechanism that enhance the formation of new stars in the remaining disk or even trigger nuclear activity (e.g. an AGN). Also Poggianti et al. (2017), found a very high incidence of AGN (Seyfert 2) among jellyfish galaxies from MUSE data and the conclusion is that ram pressure triggers the AGN activity. 
Since the flux of gas driven by the oblique shocks in our MHD simulation lasted about ∼ 150 Myr from the time the wind hit the disk, it could be considered comparable with the duty cycle of AGNs, which has been estimated at 10 − 100 Myr (Haehnelt & Rees 1993). However, given that our simulation does not properly model the central regions of the galaxy, nor do we include a central black hole, we can only speculate that the oblique shocks would transport gas for long enough to ignite an AGN. Other tests need to be performed to better study the funneling of gas towards the central regions of the galaxy, such as different wind profiles and angles, and different disk surface densities and flare strengths. Additionally, it has been reported that star formation can continue in the tail of the stripped gas, as shown in observations of HII regions in the tails of galaxies subject to RPS (Kenney & Koopmann 1999; Boselli & Gavazzi 2006; Cortese et al. 2007; Yoshida et al. 2008; Hester et al. 2010; Sun et al. 2010; Yagi et al. 2010; Abramson et al. 2011; Kenney et al. 2014; Poggianti et al. 2016). Due to the limitations of our models (insufficient resolution and the lack of an appropriate equation of state to follow star formation), we cannot explore the possibility of new stars being born in the swept gas of our models, or in the centres of the disks from the gas motions driven by the oblique shocks. More on this subject, as well as an in-depth analysis of the swept gas for the MHD model, will be presented in future work (Ramos-Martínez et al. in preparation). We thank the referee for comments that helped improve this manuscript. We acknowledge financial support from UNAM-DGAPA PAPIIT grant IN100916, and CONACyT for support for MRM.
2018-02-15T19:17:27.000Z
2017-11-03T00:00:00.000
{ "year": 2018, "sha1": "7a6d9a1b4a1323a954816c708ff22983b02c7a65", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1711.01252", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "56e670a96cd9f5790b42dabb87a2e89595e6f66a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
226302249
pes2o/s2orc
v3-fos-license
Pioglitazone Is Associated with Lower Major Adverse Cardiovascular and Cerebrovascular Events than DPP4-Inhibitors in Diabetic Patients with End-Stage Renal Disease: A Taiwan Nationwide Cohort Study, 2006–2016
While pioglitazone effectively reduces insulin resistance and hepatic gluconeogenesis in patients with type 2 diabetes mellitus (T2DM), these benefits remain controversial in patients with end-stage renal disease (ESRD). We compared major adverse cardiac and cerebrovascular events (MACCEs) and mortality (overall, infection-related, and MACCE-related) under pioglitazone with those under dipeptidyl peptidase 4 inhibitors (DPP4-inhibitors) in patients with T2DM and ESRD. From Taiwan's National Health Insurance Research Database (NHIRD), 647 pioglitazone users and 6080 DPP4-inhibitor users between 1 April 2006 and 31 December 2016 were followed from the 91st day after the ESRD certification until the study outcomes (assessed independently), withdrawal from the NHI program, death, or 31 December 2017, whichever came first. After weighting, the risks of MACCEs (10.48% vs. 12.62% per person-years, hazard ratio (HR) 0.85, 95% CI 0.729–0.985) and all-cause mortality (12.86% vs. 13.22% per person-years, HR 0.88, 95% CI 0.771–0.995) were significantly lower in the pioglitazone group. Subgroup analysis found a lower MACCE risk in pioglitazone users without insulin therapy (6.44% vs. 10.04%, HR 0.59, 95% CI 0.42–0.82) and lower MACCE-related death in the pioglitazone group with dyslipidemia (2.76% vs. 3.84%, HR 0.61, 95% CI 0.40–0.95), when compared with DPP4-inhibitor users. Pioglitazone is associated with lower all-cause mortality and MACCEs in diabetic patients with ESRD, compared to DPP4-inhibitors. These benefits were even more significant in non-insulin users and in patients with dyslipidemia.
Introduction
Patients with end-stage renal disease (ESRD) have a poor prognosis, driven by the high mortality associated with atherosclerosis and infection [1][2][3][4]. Type 2 diabetes mellitus (T2DM) is the leading cause of end-stage renal disease (ESRD) worldwide, and especially in Taiwan [5,6]. Moreover, coexisting T2DM among patients undergoing maintenance dialysis strongly increases the risk of cardiovascular events, including myocardial infarction and cerebrovascular events, in comparison to ESRD patients without DM [1][2][3]. In other words, good control of T2DM may protect patients with ESRD from these fatal events [2,3]. In the ESRD population, several glucose-lowering agents are of concern. For instance, metformin is contraindicated in patients with advanced chronic kidney disease due to the risk of lactic acidosis [7,8]. Glipizide is the only sulfonylurea that can be prescribed in patients with ESRD, yet it may increase the risk of hypoglycemia and cardiovascular mortality [8,9]. Sodium-glucose transport protein 2 inhibitors (SGLT2 inhibitors) act as glucose-lowering agents via inhibition of glucose reabsorption and are not suggested for patients with an eGFR of less than 45 mL/min/1.73 m² [8,9]. In contrast, dipeptidyl peptidase 4 inhibitors (DPP-4 inhibitors) have demonstrated safety and efficacy as hypoglycemic agents for patients with ESRD [7,10,11,12]. DPP4-inhibitors can treat hyperglycemia by protecting incretins from inactivation, thereby promoting glucose metabolism, without the side effect of hypoglycemia [13].
DPP4-inhibitors have become the oral hypoglycemic agents (OHAs) with the fewest adverse effects and are frequently prescribed to T2DM patients with ESRD [14,15]. To the best of our knowledge, there is no direct evidence that pioglitazone reduces adverse events and mortality, in comparison with DPP4-inhibitors, in patients with T2DM and ESRD. Hence, we aimed to estimate the rates of major adverse cardiac and cerebrovascular events (MACCEs) and mortality (overall, infection-related, and MACCE-related) in diabetic patients with ESRD receiving pioglitazone in Taiwan. The control group was diabetic patients with ESRD receiving DPP4-inhibitors rather than pioglitazone.
Data Source
The primary data sources were the Taiwan National Health Insurance Research Database (NHIRD) and the Taiwan Death Registry (TDR). The Taiwan National Health Insurance program was founded in 1995 and has covered more than 99.6% of individuals since 1997 [33]. Registration data (year of birth, sex, income, place of residence, occupation, dates in and out of the NHI program) and original claims for reimbursement (dates of clinical visits, medical diagnoses, medical expenditure, details of prescriptions, examinations, and procedures) are stored in the NHIRD. The disease diagnoses were coded using the ICD-9-CM and were switched to the ICD-10-CM after 2016. The TDR contains the date of death and the cause of death (underlying and immediate) for deceased Taiwanese residents. The cause of death was also coded using the ICD-9-CM and was switched to the ICD-10-CM after 2008. Note that both the NHIRD and the TDR are available for research purposes after the identifying information has been encrypted. The two datasets can be linked because the same encryption algorithm is used. To further protect the privacy of the beneficiaries, the use of the NHIRD is restricted to the Health and Welfare Data Science Center, Ministry of Health and Welfare (HWDC-MHW), Taiwan, and its sub-centers, and only summary results are allowed to be carried out of the center. This study obtained approval from the Institutional Review Board of Chang Gung Medical Foundation (approval number: 201900840B0) and the National Health Insurance Administration, Department of Health and Welfare, the holder of the NHIRD.
Study Design
Using the NHIRD and TDR, we designed a nationwide retrospective cohort study of patients having T2DM and ESRD, divided into two study groups: pioglitazone and DPP4-inhibitors. The active control group of DPP4-inhibitors, including sitagliptin, saxagliptin, and linagliptin, allows us to reduce channeling bias (also called confounding by indication) [34]. The cohort was followed from the index date until the primary or secondary outcomes (each assessed independently), withdrawal from the NHI program, death, or 31 December 2017, whichever came first.
Patient Selection
The algorithm of patient selection in this study is shown in Figure 1. Patients older than 20 years with a first catastrophic certification of ESRD between 1 April 2006 and 31 December 2016 and with T2DM were identified as the new-onset ESRD cohort. The 91st day after the certification was defined as the index date. Patients with newly diagnosed T2DM after the index date, patients with malignancy before the index date, and patients with incomplete demographic data were excluded. Patients who died or had MACCEs within 90 days before the index date were excluded because such events were less likely to be due to the exposure to pioglitazone or DPP4-inhibitors.
Exposure
All participants in this cohort study were exposed to at least one OHA, either pioglitazone or a DPP4-inhibitor, between the ESRD certification date and the index date. Patients who did not receive either of these two drugs, or who took both drugs, were not enrolled.
Covariates and Outcomes
We considered the following covariates: (1) demographic characteristics (age, gender, income level, and place of residence), (2) comorbidities within one year before the index date (hypertension, dyslipidemia, liver cirrhosis, connective tissue disease, atrial fibrillation, and peripheral arterial disease), (3) hospitalization history (heart failure, myocardial infarction, stroke, and infection) within 3 years before the index date, and (4) medication within 90 days before the index date (ACEi or ARB, other anti-HTN, diuretics, aspirin or plavix, NSAIDs, insulin, sulfonylurea, acarbose, meglitinides, GLP-1, and anti-cholesterol). To reduce misclassification, all comorbidities required at least two outpatient visits or one hospitalization. Charlson's score, which is weighted based on 14 diseases, was also presented [35]. All-cause mortality, as well as MACCEs (including myocardial infarction, cardiogenic shock, new-onset heart failure, coronary revascularization, fulminant arrhythmia, and cerebrovascular events), were the two primary outcomes of this study. The secondary outcomes were infection-related death and MACCE-related death, the two leading causes of mortality in this population. Death due to MACCEs or infection was recognized by surveillance of the final diagnoses associated with hospitalizations or emergency room visits, or by the underlying cause of death in the TDR.
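A minimal sketch of how the index date and follow-up window described above could be constructed from claims data is given below. The column names are hypothetical, and the 90-day offset is one interpretation of "the 91st day after certification"; the actual NHIRD extraction is not reproduced here.

```python
import pandas as pd

# Hypothetical per-patient table assembled from the claims and death-registry data
df = pd.DataFrame({
    "esrd_cert_date":  pd.to_datetime(["2008-03-01", "2012-07-15"]),
    "outcome_date":    pd.to_datetime(["2010-01-20", pd.NaT]),
    "withdrawal_date": pd.to_datetime([pd.NaT, pd.NaT]),
    "death_date":      pd.to_datetime([pd.NaT, "2015-09-30"]),
})

study_end = pd.Timestamp("2017-12-31")
df["index_date"] = df["esrd_cert_date"] + pd.Timedelta(days=90)   # "the 91st day" after certification
# Follow-up ends at the earliest of outcome, withdrawal, death, or the end of the study period
df["followup_end"] = (df[["outcome_date", "withdrawal_date", "death_date"]]
                      .min(axis=1).fillna(study_end).clip(upper=study_end))
df["followup_years"] = (df["followup_end"] - df["index_date"]).dt.days / 365.25
```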
Please see the Supplementary Table S1 for the ICD-9-CM and ICD-10-CM for the study outcomes and covariates for this study. Statistical Analysis We used the propensity score method with stabilized weights (PSSWs) to balance the covariates at index date between the two drug groups [36]. The PSSWs provide an appropriate estimate of the main effect variance without compressing or magnifying the sample size of the original data, hence, the designated type I error was maintained. We included the covariates (except Charlson's score) at baseline (Table 1) in the generalized boosted model (GBM) to obtain PSSWs, because Charlson's score included some comorbidities and hospitalization history used in this study. The GBM is less affected by large weights and can achieve the optimal balance between the two drug groups, by automatically including interactions or polynomial terms of the covariates [37]. We used the absolute standardized mean difference (ASMD) to examine the balance of covariates at index date between the two drug groups, because balance is a property of the sample and not of an underlying population. The value of ASMD ≤ 0.1 indicated a negligible difference in covariates between the two study groups [38]. We computed the incidence rates as the total number of study outcomes during the follow-up period divided by person-years at risk. We assessed the hazard ratio (HR) of study outcomes for pioglitazone versus DPP4-inhibitors using survival analysis (Kaplan-Meier method and log-rank test for univariate analysis and Cox proportional hazards model for multivariate analysis). We also performed subgroup analysis and used forest plot to show whether the pioglitazone group had a consistent HR for pioglitazone when compared with the DPP4-inhibitors group in specific subgroups. To maintain a balance of varied covariates between the two drug groups, we re-conducted PSSWs for each subgroup analysis. The significant level of this study was 0.05. All statistical analyses were performed using SAS ver. 9.4 (SAS Institute, Cary, NC, USA). Patient Characteristics There were 28,497 patients with type 2 DM and newly diagnosed ESRD during 1 April 2006 to 31 December 2016 in Taiwan. After excluding those had first diagnosis of T2DM before index date (n = 800), age under 20 years old (n = 1), incomplete demographic data (n = 46), malignancy before index date (n = 874), patients who died (n = 0) or have MACCEs (n = 4553) within 90 days before the index date, took both pioglitazone and DPP4-inhibitors within 90 days before the index date (n = 652), did not take either pioglitazone or DPP4-inhibitors within 90 days before index date (n = 14,844), there were 647 patients in the pioglitazone group and 6080 patients in the DPP4-inhibitor group (Figure 1). Table 1 illustrated the demographic characteristics, comorbidities, hospitalization history, and use of medication between the two drug groups. Before PSSWs, there were more female, rural resident, hospitalization history of heart failure, use of angiotensin-converting enzyme inhibitors (ACEi) or angiotensin II receptor blockers (ARBs), and use of other oral hypoglycemic agent (OHA) in the pioglitazone group than the DPP4-inhibitor group. After PSSWs, all covariates were balanced between the two drug groups as ASMDs were less than 0.1, except the use of insulin. This may indicate pioglitazone was prescribed in combination with insulin more frequently and less OHA in comparison with DPP4-inhibitors in our study cohort. 
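The weighting, balance check and incidence-rate calculation described in the Statistical Analysis subsection can be illustrated as follows. This is only a sketch: scikit-learn's GradientBoostingClassifier stands in for the generalized boosted model used by the authors, the variable names are hypothetical, tuning of the boosting model is omitted, and the hazard ratios themselves would come from a weighted Cox model that is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def stabilized_weights(X, treated):
    """Propensity-score stabilized weights; treated is a 0/1 array (1 = pioglitazone)."""
    ps = GradientBoostingClassifier(random_state=0).fit(X, treated).predict_proba(X)[:, 1]
    p_marginal = treated.mean()
    return np.where(treated == 1, p_marginal / ps, (1.0 - p_marginal) / (1.0 - ps))

def asmd(x, treated, w):
    """Weighted absolute standardized mean difference for one covariate (1-D arrays)."""
    m1 = np.average(x[treated == 1], weights=w[treated == 1])
    m0 = np.average(x[treated == 0], weights=w[treated == 0])
    v1 = np.average((x[treated == 1] - m1) ** 2, weights=w[treated == 1])
    v0 = np.average((x[treated == 0] - m0) ** 2, weights=w[treated == 0])
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2.0)

def rate_per_100py(events, followup_years, w):
    """Weighted number of events per 100 person-years at risk."""
    return 100.0 * np.sum(w * events) / np.sum(w * followup_years)
```

Covariates with a weighted ASMD of 0.1 or less would be considered balanced, which is the criterion applied to Table 1 above.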
Outcomes
The incidence rates of the study outcomes in the two drug groups are shown in Table 2, and the cumulative incidence versus follow-up time is plotted in Figure 2 (after PSSWs) and Supplemental Figure S1.
Discussion
In this nationwide cohort study investigating patients with coexisting T2DM and ESRD, the pioglitazone group was associated with reduced MACCEs and all-cause mortality when compared to the DPP4-inhibitor group after PSSWs. These significant findings had not been shown clearly in previous real-world studies. Because saxagliptin has been reported to increase heart failure-related hospitalization [39], we also conducted an additional analysis for MACCEs and MACCE-related death that excluded saxagliptin users from the control group. The result (Supplementary Table S2) revealed that excluding saxagliptin users from the control group did not abolish the effect of pioglitazone in reducing MACCEs. There was no difference in MACCE-related death between these two groups. Another finding, from the subgroup analysis of insulin therapy, was that DM patients who were insulin-free were more likely to benefit from pioglitazone, with lower MACCEs. Furthermore, patients with underlying dyslipidemia had lower MACCE-related death in the pioglitazone group. Pioglitazone is a full PPAR-γ agonist with multi-system effects and the potential to promote health or reduce lethal consequences in patients with DM [16-18,25,27,29,40]. The advantages and mechanisms of PPAR-γ agonists include inhibition of cytokine production by macrophages, reduction of oxidative stress, improvement of insulin resistance, control of dyslipidemia through the regulation of adipogenesis, and lowering of blood pressure via vasodilation [22,24]. The aforementioned properties may then result in protective effects, mostly against ischemic stroke and cardiovascular events. Considering the high risk of cerebrovascular and cardiovascular events and related mortality, it is not surprising that pioglitazone could eventually decrease mortality among patients with diabetes and ESRD [16,17]. It is not entirely clear why patients without insulin therapy had a lower risk of MACCEs in the pioglitazone group than in the DPP4-inhibitor group. However, this is consistent with a previous large retrospective study in the ArMORR cohort [41]. In our opinion, insulin may increase the risk of hypoglycemia in dialysis patients, which affects compliance with combined OHA therapy [42]. Moreover, considering the phenomenon of "Burnt-Out Diabetes" [43] after progression into ESRD, the use of insulin in a dialysis cohort may represent a relatively higher HbA1c or a more fluctuating glycemic status. In addition, insulin per se could be responsible for the increased MACCE risk in dialysis patients, with or without the combination of other glycemic control agents [44]. All these factors could interfere with the effect of pioglitazone in reducing MACCEs. On the other hand, pioglitazone's lipid-lowering effect via regulation of adipogenesis remains intact in ESRD patients [16,29], which is supported by our observation that patients with underlying dyslipidemia benefit more from pioglitazone than from DPP4-inhibitors. There have been plenty of clinical studies designed to investigate the effectiveness and safety of pioglitazone, such as the PROactive trial [25,40], the CHICAGO trial [45], and the PERISCOPE trial [46], compared with either placebo or other oral glucose-lowering agents.
The PROactive trial investigated DM patients with prior macrovascular events and revealed a reduction of all-cause mortality, non-fatal myocardial infarction, and stroke in the intervention group [25,40]. The CHICAGO trial displayed the role of pioglitazone to slow progression of carotid artery intima-media thickness [45] while the PERISCOPE trial proved the effect of pioglitazone to lower coronary atherosclerosis in DM patients with a history of coronary artery disease [46]. None of these clinical trials discussed the effect and safety outcome in the setting of end stage renal disease. In contrast, some clinicians compared pioglitazone to placebo or other OHA in the population with ESRD [21]. However, these trials were either too short (most had mean follow up less than 1 year) or too small (participants were less than 100 in most studies) to provide solid evidence. One large retrospective study conducted by Brunelli et al. used the data extracted from the ArMORR cohort had been published in 2009 [41]. This study, comparing pioglitazone with placebo among patients receiving incident hemodialysis, consistently disclosed the effect of reduction in all-cause mortality in the group of non-insulin participants. This effect was contributed to non-CV mechanisms, explained by Brunelli and associates. Unlike the former research, the current study pointed out a significant reduction of both all-cause mortality and MACCEs. However, with similar design, our study offered more robust evidence through the setup of control group and a long follow-up duration. One of the obstacles to obtain diabetes control in the ESRD population is that there are few OHA that are safe and effective in these patients. As mentioned above, DPP4-inhibitors maintained their effectiveness in diabetic patients with renal impairment and even ESRD [7,10,47]. Some studies indicated that specific DPP4-inhibitors may lead to adverse cardiovascular events, such as recurrent myocardial infarction and hospitalization, due to heart failure in selected populations [39,48], while the others take a positive attitude among cardiovascular outcome in patients with ESRD [14,49]. Overall, DPP4-inhibitors are thought to be neutral with regard to the cardiovascular events in DM population [12,50]. Based on this feature, patients who received DPP4-inhibitors were enrolled as the control group in our study. The results of present study demonstrated a reduction in all-cause mortality and MACCEs in the pioglitazone group. This finding implied the safety of pioglitazone in patients with ESRD and may further point out the possible benefit of pioglitazone in the treatment of DM-ESRD patients beyond glycemic control. Unfortunately, pioglitazone had been reported to be associated with several adverse events. Among these side effects, edema, weight gain, and heart failure are of concerned to many clinicians, although there is some controversy [9,31,32]. In fact, these side effects were mostly reported by research in populations with normal renal function to mild renal impairment. For patients under maintenance dialysis, it is likely that the edema and weight gain can be controlled by adjustment of the dialysis modality. However, the possibility to increase burden of ultrafiltration, particularly in peritoneal dialysis population, may require further investigation. 
The true incidence and influence of these adverse events, especially edema, weight gain, and heart failure, in the ESRD cohort with pioglitazone therapy remained unclear and thus further randomized control trials are required in future. There were several limitations in our study. First of all, this cohort study was not assembled to compare the efficacy of the glucose lowering of these two different OHA. In other words, the present study did not examine the glucose control ability of these two agents. Second, the NHIRD contains most information which was required for the purpose of reimbursement. However, the absence of laboratory results (i.e., glycohemoglobin), examination findings (i.e., left ventricular ejection fraction), and lifestyle characteristics (i.e., body mass index and cigarette smoking) do not allow us to examine these particular risk factors. Third, this research is conducted retrospectively in an observational perspective, which means the indication, the dosage, and the compliance in each group were not standardized. To mitigate this shortcoming, PSSW was applied with as much covariates as available in this administrative database, including socioeconomic status, comorbidities, and medication use. Fourth, the side effects, as discussed previously, and minor events including hypoglycemia were hard to obtain in both groups. Therefore, the findings of this study should be interpreted with caution and a further prospective randomized control trial is warranted. In summary, the current study provided robust evidence to support that pioglitazone is associated with lower all-cause mortality and MACCEs in comparison to DPP4-inhibitors in diabetic patients with ESRD, especially in those non-insulin enrollees. Besides, patients with dyslipidemia are more likely to benefit from pioglitazone among MACCEs related death.
2020-10-02T19:03:17.539Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "bf0ba234beb2ab4eaeb316c3d23cbcd0e5dcb895", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/9/11/3578/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6deb981b7686d0e25daf2beb99fbdb365bf65cd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245441453
pes2o/s2orc
v3-fos-license
Transcriptional signature in microglia isolated from an Alzheimer's disease mouse model treated with scanning ultrasound Abstract Transcranial scanning ultrasound combined with intravenously injected microbubbles (SUS+MB) has been shown to transiently open the blood–brain barrier and reduce the amyloid‐β (Aβ) pathology in the APP23 mouse model of Alzheimer's disease (AD). This has been accomplished through the activation of microglial cells; however, their response to the SUS treatment is incompletely understood. Here, wild‐type (WT) and APP23 mice were subjected to SUS+MB, using nonsonicated mice as sham controls. After 48 h, the APP23 mice were injected with methoxy‐XO4 to label Aβ aggregates, followed by microglial isolation into XO4+ and XO4− populations using flow cytometry. Both XO4+ and XO4− cells were subjected to RNA sequencing and transcriptome profiling. The analysis of the microglial cells revealed a clear segregation depending on genotype (AD model vs. WT mice) and Aβ internalization (XO4+ vs. XO4− microglia), but interestingly, no differences were found between SUS+MB and sham in WT mice. Differential gene expression analysis in APP23 mice detected 278 genes that were significantly changed by SUS+MB in the XO4+ cells (248 up/30 down) and 242 in XO− cells (225 up/17 down). Pathway analysis highlighted differential expression of genes related to the phagosome pathway and marked upregulation of cell cycle‐related transcripts in XO4+ and XO4‐ microglia isolated from SUS+MB‐treated APP23 mice. Together, this highlights the complexity of the microglial response to transcranial ultrasound, with potential applications for the treatment of AD. | INTRODUCTION Alzheimer disease (AD) is the most common cause of dementia worldwide. The disease is characterized by progressive and irreversible neurodegeneration. However, given the complexity of the disease combined with a lack of knowledge on how to treat AD efficiently, there is an acute requirement to develop novel treatment strategies. 1 At a histopathological level, AD is characterized by the accumulation of extracellular amyloid-β (Aβ) plaques, intraneuronal tau deposits and increased microglial activation. 2 A broad range of studies have revealed how microglial cells assume both a protective role (through shielding, recognition, and removal of Aβ) and a detrimental role (through removal of synapses or the release of neurotoxic factors), driving the progression of AD. 3 Transcriptomic studies on microglia have advanced our understanding of the pathogenesis of AD at the level of transcriptional network dynamics, highlighting important molecular players depending on the different phases of the disease. [4][5][6] Microglia are known to phagocytose aggregated forms of Aβ, and it has been proposed that deficiencies in this process may contribute to late-onset AD 7 and metabolic labeling in humans indicated that clearance of Aβ is impaired in AD. 8 Recently, it has been shown that Aβcontaining microglia differ in their transcriptional signature in comparison to microglia that have not internalized the peptide. 9 An obstacle to treating AD is the blood-brain barrier (BBB), which prevents large molecules such as antibodies from entering the brain, with IgG having 0.1% transfer across the barrier. 10 Approaches to modify anti-Aβ antibodies to increase levels in the brain are in development, 11 along with other approaches to circumvent the BBB. 
FIGURE 1 (a) Scanning ultrasound (SUS+MB) or sham (no ultrasound) treatment was applied to APP23 transgenic and wild-type (WT) mice. Two days post-treatment, the mice received a single injection with methoxy-XO4 (that binds Aβ) 2 h before euthanasia and collection of brain tissue. The brains of the mice were harvested and homogenized to form a single-cell suspension, followed by FACS-based isolation of XO4+ and XO4− microglial cells. (b) In-house prepared microbubbles were used for scanning ultrasound (SUS+MB) and their size and concentration were measured using a Coulter Counter. (c) The gating strategy used to isolate microglial cells into XO4+ and XO4− populations via FACS, with CD11b and CD45 antibodies to isolate a pure population of microglia, and methoxy-XO4 fluorescence to isolate microglial cells that contain methoxy-XO4 bound to Aβ. (d) Methoxy-XO4 (blue) binds to Aβ plaques in the brains of APP23 mice, with Iba1-positive microglia in red. Scale bar: 50 μm.

Studies in animal models of AD have indicated that repeated transient BBB openings that are achieved throughout the entire brain using transcranial ultrasound in a scanning mode together with intravenously injected microbubbles (SUS+MB) significantly clear amyloid plaques. One study reported that plaque reduction can occur as fast as 48 h after BBB opening, 12 and we have shown that this process occurs through microglial phagocytosis. 13 Ultrasound-mediated bioeffects (including microglial activation) have also been demonstrated by specifically targeting the hippocampus, 14,15 but the therapeutic benefit seems to be most pronounced when the brain is treated more globally. 13 Of note, this clearing process requires BBB opening 16 and is even effective at reducing Aβ pathology in 22-month-old senescent mice. 17 Combination treatments with ultrasound for the delivery of anti-Aβ antibodies, such as Aducanumab, which has recently been approved by the Food and Drug Administration (FDA), 18 or an anti-pyroglutamylated Aβ antibody, 19 led to more effective plaque removal and behavioral improvements than those observed in mice that were treated with either ultrasound alone or antibodies alone. 18 Ultrasound-mediated BBB opening has also been achieved in a small safety trial that revealed tolerability in patients with mild AD when a small region of the frontal cortex was targeted. 20 A subsequent clinical study found that the BBB could be opened in parts of the hippocampus, 21 with a modest reduction in the amyloid PET signal following three treatments with ultrasound over a 6-month period. 22 A recent clinical trial opened the BBB in the frontal lobes bilaterally and resulted in a modest reduction in the amyloid PET signal and significant improvement in neuropsychiatric symptoms. 23 In all these studies, BBB opening by ultrasound was shown to be safe and reversible in that the BBB was fully restored after 24 h. Several mechanisms have been proposed to explain how BBB opening leads to amyloid plaque reduction, including the uptake of endogenous immunoglobulins 24 or albumin binding to amyloid, 13 followed by microglial phagocytosis of Aβ and lysosomal digestion. Here, to gain a better understanding of the effects of SUS+MB treatment, using a fluorescent dye to detect Aβ internalization within the microglia, we identified differences between the microglial cells from mice treated with or without ultrasound, as well as between cells that had internalized Aβ or not.

FIGURE 3 SUS+MB treatment leads to an increase in the number of differentially regulated genes in XO4+ microglia, when compared with sham-treated APP23 mice. (a) A Venn diagram depicting how the genes up-regulated by SUS+MB distribute between XO4+ and XO4− cells, with many genes up-regulated in both. (b) A larger number of genes were down-regulated in the XO4+ cells compared with XO4− cells following SUS+MB, with few genes down in both groups (adjusted p < 0.05).

2 | RESULTS

2.1 | XO4 and FACS-based isolation of Aβ-positive and Aβ-negative microglia

To understand the different effects of ultrasound-mediated BBB opening on plaque-phagocytic and non-phagocytic microglia in AD, we applied SUS+MB or sham (i.e., mice were anesthetized and injected with microbubbles but not exposed to ultrasound) to the brains of APP23 mice or WT littermate controls (Figure 1a,b). In addition, to be able to distinguish between microglial cells that had internalized Aβ and those that had not, we used the fluorescent Congo-red derivative methoxy-XO4 to stain Aβ within microglia when injected into live mice, as previously done. 9,25 This allowed us to use a fluorescence activated cell sorting (FACS)-based technique to separate and isolate XO4+ (Aβ phagocytic) and XO4− (non-phagocytic) microglia following both SUS+MB and sham treatment paradigms (Figure 1c,d). by Aβ uptake (XO4+ vs. XO4− cells), as well as treatment (SUS+MB vs. sham-treated animals), which were markedly accentuated in the APP23 samples. Of note, there was no effect of SUS+MB treatment in the microglial transcriptome of WT mice. Thus, we subsequently focused our analysis on the effects of ultrasound ± Aβ internalization in APP23-derived microglia only.

| SUS treatment induces an increased number of up-regulated genes in microglia

To gain insight into the response of APP23 microglia to the SUS treatment regime, we further analyzed the transcripts obtained from XO4+ and XO4− cells. Our analysis identified 397 differentially enriched genes (FDR ≤ 0.05), with 155 genes being specific for XO4+ cells, 199 genes specific for XO4− microglia, and 123 genes being independent of the Aβ signature. Analyzing the treatment-dependency patterns, we observed that most of the up-regulated genes were induced by SUS+MB, with a total of 353 enriched genes across all the Aβ internalization levels (Figure 3a), and only 44 genes that were down- and "DNA metabolic processes" (Table 3). KEGG pathway analysis revealed that the most enriched pathways included "DNA replication" and "cell cycle," as well as established pathways in relation to the role of microglia in AD, such as "phagosome" and the "complement and coagulation cascade" ( cycle" and "phagosome" pathways in a treatment- (SUS+MB versus sham) and Aβ internalization (XO4+ versus XO4−)-dependent manner revealed similar trends, with a stronger response found for the XO4+ microglia containing internalized Aβ (Figure 5a,b). More genes in the phagosome pathway are significantly altered by SUS+MB in XO4+ microglia (seven genes up-regulated and three down-regulated) than XO4− microglia (five genes up-regulated), with two of these genes up-regulated in both (Figure 5a). For the cell cycle pathway, there were also more genes up-regulated in XO4+ microglia (18 genes up-regulated) than XO4− microglia (11 genes up-regulated), with 9 of these genes up-regulated in both (Figure 5b).
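The gene-set overlaps behind the Venn diagrams in Figure 3 can be recomputed from standard differential-expression output. Below is a minimal sketch of that comparison, assuming hypothetical per-population result tables with gene identifiers, log2 fold changes and adjusted p-values; the file names and column names are illustrative placeholders, not the original analysis pipeline, while the 0.05 significance cut-off follows the text.

```python
import pandas as pd

# Hypothetical differential-expression tables (SUS+MB vs. sham), one per population.
# Assumed columns: gene, log2FoldChange, padj. Not the original study's files.
xo4_pos = pd.read_csv("de_xo4_positive.csv")
xo4_neg = pd.read_csv("de_xo4_negative.csv")

def split_by_direction(table: pd.DataFrame, alpha: float = 0.05):
    """Return sets of significantly up- and down-regulated genes."""
    sig = table[table["padj"] < alpha]
    up = set(sig.loc[sig["log2FoldChange"] > 0, "gene"])
    down = set(sig.loc[sig["log2FoldChange"] < 0, "gene"])
    return up, down

up_pos, down_pos = split_by_direction(xo4_pos)
up_neg, down_neg = split_by_direction(xo4_neg)

# Overlaps corresponding to the Venn diagrams in Figure 3a,b.
print("up-regulated in both populations:", len(up_pos & up_neg))
print("up-regulated only in XO4+:", len(up_pos - up_neg))
print("up-regulated only in XO4-:", len(up_neg - up_pos))
print("down-regulated in both populations:", len(down_pos & down_neg))
```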
| The magnitude of BBB opening after SUS treatment does not differ significantly between APP23 and WT mice While it was not the major focus of this work, it was surprising to find that there was no effect of SUS +MB treatment on the trans- | DISCUSSION In this study, we sought to investigate the changes to the microglial transcriptomic profile induced by the application of BBB opening achieved with therapeutic ultrasound in conjunction with intravenously injected microbubbles in a mouse model of AD. This profile Several previous studies have investigated the effect of ultrasound application to the brain by applying omics techniques to cell populations. One study investigating ultrasound-mediated delivery of plasmids to the brain of WT mice performed single-cell RNA sequencing and found an upregulation of lysosomal genes in microglia 48 h after ultrasound treatment. 31 In support of this, in a SWATH quantitative proteomics screen following a series of 6 weekly sessions of SUS +MB treatments in aged C57Bl/6 (WT) mice, 32 we identified an increase in two microglial proteins (LRBA and CAGP) that are involved in phagocytosis. 26 In our analysis performed a priori to our bioinformatic data mining, we evaluated whether Aβ load (XO4 + /XO4 À ) and treatment (SUS +MB /sham) are independent effects or interacting. Assessing the response of key pathways (phagocytosis and cell cycle), we conclude that the effects are independent and therefore can be analyzed in isolation. Our initial bioinformatics screen aimed to identify differences between microglia from WT and APP23 mice subjected to SUS +MB (with or without Aβ internalization) has revealed several interesting aspects related to the cellular response to the treatment. Thus, the WT microglia revealed the presence of a similar effect on the transcriptome in both sham-and SUS +MB -treated experimental groups, as revealed by both the PCA and heatmap analysis. This could be attributed to the fast resolution of microglial response to acute stimulation. 32 The WT transcriptome was found to cluster in the proximity of the transcriptome specific to XO4 À APP23 sham-treated microglia, reflecting a particular nonphagocytic cellular state, that is, most likely a nondisease associated microglial phenotype. We found microglia WT mice. 29 In addition, the response of microbubbles to ultrasound may differ between APP23 and WT mice because of differences in their cerebrovasculature, 33 or the fact that APP23 mice weigh less than their WT littermates. We performed recordings of acoustic emissions and found that APP23 mice had higher harmonics emissions than WT mice, but that ultraharmonic and broadband emissions were similar. Broadband emissions are associated with the largest magnitude and most violent cavitation activities, and these were mostly similar between WT and APP23 mice. The cause and significance of this difference in cavitation activity between WT and APP23 mice are unclear; however, it is conceivable that the increased cavitation recorded in APP23 mice might lead to an increased magnitude of transcriptomic changes at 48 h, which warrants further systematic studies, for instance, by using a cavitation controller. 27 If applied at an early stage of AD, boosting the Aβ phagocytic activity of microglia may present a promising therapeutic strategy by increasing the clearance of protein deposits. 
34 A previous attempt to investigate the microglial response following ultrasound treatment focused on investigating transcripts related to the downstream effects of the NFκB pathway and damage-associated molecules (DAMs) in bulk lysates from WT rodent brains, with most transcript levels returning to baseline after 24 h. 30 A subsequent study, however, reported no significant changes in the expression of any of the NFκBrelated genes when using a lower, more clinically relevant dose of microbubbles. 35 These opposing effects could be attributed to the specific ultrasound parameters that elicit a cavitation-modulated inflammatory response through the microbubbles present in the blood circulation. 27 In addition, the transcriptomic response to ultrasoundinduced BBB opening was found to be dependent on the type of anesthesia used during the procedure. 31 Of note, we used ultrasound settings that we have previously demonstrated to increase microglial phagocytosis, 13 with no damage to neurons, 36 have been observed to remove synapses in AD through a mechanism involving members of the complement system. 37,38 In addition, it has been proposed that the metabolism of microglia is impaired in AD, an effect that can be ameliorated by enhancing the cellular energetic and biosynthetic metabolism. 39 Increased microglial numbers in the proximity of plaques are associated with more compact plaques and reduced axonal dystrophy, 40 and we have previously reported increased microglial numbers around plaques following SUS +MB treatment. 17 Higher numbers of microglia around plaques may result from an increased proliferation or metabolic activity, as hinted at in the present study. Reactivation of the cell-cycle machinery in microglia following ultrasound treatment is of particular interest, as it has been recently reported that repopulating microglial cells following ablation are neuroprotective in AD. 41 | Animals In this study, we have used APP23 mice (harboring the AD Swedish | Acute isolation of microglia and FACS Two hours prior to brain harvest, mice were injected intraperitoneally with methoxy-X04 (2 mg/ml (Figure 1d).
2021-12-24T16:12:38.533Z
2021-12-21T00:00:00.000
{ "year": 2022, "sha1": "6dd12b6528e0aeb34c7d95e70ea1d3a1bc0bdff9", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/12/21/2021.12.20.473590.full.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "b16ab26cdaab699c69c4587de1425034959ffe79", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
73537989
pes2o/s2orc
v3-fos-license
An Optimization-LCA of a Prestressed Concrete Precast The construction sector is one of the most active sectors, with a high economic, environmental and social impact. For this reason, the sustainable design of structures and buildings is a trend that must be followed. Bridges are one of the most important structures in the construction sector, as their construction and maintenance are crucial to achieve and retain the best transport between different places. Nowadays, the choice of bridge design depends on the initial economic criterion but other criteria should be considered to assess the environmental and social aspects. Furthermore, for a correct choice, the influence of these criteria during the bridge life-cycle must be taken into account. This study aims to analyse the life-cycle environmental impact of efficient structures from the economic point of view. Life-cycle assessment process is used to obtain all the environmental information about bridges. In this paper, a prestressed concrete precast bridge is cost-optimized and afterwards, the life-cycle assessment is carried out to achieve the environmental information about the bridge. Introduction The basis for the definition of sustainable development lies in the Brundtland Commission's report [1], which describes it as "development that meets the needs of the present generation without compromising the needs of the future generation".This idea implies the consideration of different aspects of three main components: economic, environmental and social.Therefore, achieving sustainable development implies a consensus among these three main pillars, which usually have different goals.Wass et al. [2] stated that sustainable development implies that a decision-making strategy must be considered.Decision making is a process that can help to find a solution that provides a compromise between different aspects and therefore achieves a sustainable solution [3,4]. The construction sector is one of the most active sectors and one of the ones with a greater influence on the economic, environment and social aspects of the world.This indicates a need for a trend toward sustainability of buildings and structures.One of the most important structures in this sector is bridges.The construction and maintenance of bridges are crucial to generate and keep the best transport possible between different places.For this reason, the assessment of sustainable development during the whole life-cycle is necessary.Of the three main components of sustainable development, the social aspect is the least studied and there are more doubts about its assessment.On the contrary, the economic and environmental aspects have been studied more intensively and it is convenient to assume that their consideration is sufficient.Considering the evaluation of these two components to achieve sustainability of bridges, the objective is to design the bridge with the lowest cost and lowest environmental impact.Although these two pillars of sustainability have different goals, some works have stated that there is a relationship between the cost and CO 2 emissions of structures [5,6].Therefore, reducing the cost implies a reduction of CO 2 emissions. 
Obtaining the lowest cost or CO 2 emissions have been studied by several works.Optimization algorithms are most often used to reduce the cost or CO 2 emissions of structures.In some cases, this involves a mono-objective optimization of cost and CO 2 emissions [5][6][7], whereas other works carry out multi-objective optimization to achieve both objectives at the same time [8,9].Despite the relationship between cost and CO 2 emissions, the environmental impact cannot be assessed by taking into account CO 2 emissions alone [10].For this reason, the environmental impact assessment must achieve a complete environmental profile.This complete environmental profile can be obtained using the life-cycle assessment (LCA) process.LCA is one of the most important and accepted methods of assessing the environmental impacts [11][12][13][14][15][16], making it an excellent tool for assessing the environmental impact of a bridge. In this paper, a prestressed concrete precast 40 m bridge is selected as the subject of an optimization-LCA.The optimization of the cost will reduce the cost of the bridge directly and the associated CO 2 emissions indirectly.This process makes it possible to obtain a cost-optimized bridge with a low environmental impact.After finishing the optimization, all the features of the cost-optimized bridge will be known, including its cost but the environmental impact will not yet have been obtained.The LCA makes it possible to obtain a complete environmental profile of this cost-optimized bridge.With this methodology, a bridge whose costs have been optimized directly and whose environmental impact has been improved is obtained and finally the LCA for the whole life-time can be performed.For this purpose, a hybrid memetic algorithm is used to carry out the cost-optimization of the bridge.Then, the Ecoinvent database [17] and the ReCiPe method [18] are used to conduct the LCA process of the bridge. Optimization The optimization process is used to achieve the best solution to a problem.This process is a clear alternative to designs based on experience.Optimization methods can be categorized into exact methods and heuristic methods.On one hand, the exact methods are based on mathematical algorithms that make it possible to obtain the global optimal solution [19].On the other hand, the heuristic methods, which include a large number of algorithms [20], obtain an optimal solution starting from an initial solution.The exact methods are very useful in problems where there are a small number of variables, because the computing time becomes unworkable for a large number of variables.Structural optimization problems are defined for a large number of design variables and thus the heuristic method is the most useful for structural optimization.There are a large number of works that use heuristic algorithms for the optimization of different kinds of structures [8,9,21]. 
Life-Cycle Assessment Life-cycle assessment (LCA) is one of the most important and accepted methods of evaluating the environmental impact of a product, process, or service during its whole life-cycle, taking into account all the activities involved, which are defined as inputs and outputs.The limits defined for these inputs and outputs are the boundaries of the system and represent the scheme to be considered.The LCA must be complete and thus it should consider all the activities needed for the achievement of the product, process, or service.Therefore, focusing on the construction sector, a full LCA of structures must consider all the activities from the acquisition of the raw material to the end of life.These activities associated with the whole life-cycle of the structures are grouped into the manufacturing phase, construction phase, use and maintenance phase and end of life phase.The LCA makes it possible to carry out an environmental impact assessment of a set of activities associated with the different stages of a structure's life-cycle and the global environmental impact by adding these phases.For all that, the LCA is an excellent tool to evaluate the environmental impact of structures.ISO 14040:2006 [22] provides guidance on carrying out the LCA, divided into four steps: (1) definition of goal and scope; (2) inventory analysis; (3) impact assessment; and (4) interpretation. The first step defines all the specifications that will be considered in the LCA.This involves other features besides the definition of the goal and scope, such as the life-cycle inventory to be taken into account, the life-cycle assessment methodology considered, the functional unit and the assumptions and limitations that have been considered in the LCA.According to the guidance defined by ISO 14040:2006 [22], the characterization defines some assumptions and limitations of the LCA that condition the following life cycle inventory and life cycle assessment.Another important feature is the functional unit that represents the unit in which the assessment will be referred. The inventory analysis is the collection of the data needed to define the inputs and outputs that represent the system studied.This data can be obtained in different ways: from direct measurements, literature, or other sources such as databases.The most common way to obtain data is from databases. Once these first steps have been defined, the environmental impact assessment is used to evaluate the result of the inventory analysis to obtain a set of environmental indicators that represent the environmental profile of the product, process, or service.There are different methods of representing the environmental profile.These methods can be grouped into two different approaches: midpoint and endpoint assessments.The midpoint approach defines the environmental profile by means of a set of impact categories and the endpoint approach defines the environmental profile by means of a set of damage categories.There are three damage categories (human health, resource depletion and ecosystems) into which the impact categories are clustered.Therefore, although the midpoint approach provides a complete environmental profile, it is more difficult to interpret [23].Conversely, the endpoint approach does not provide a detailed environmental profile like the midpoint approach but is easier to understand. 
Finally, the information obtained must be interpreted. For this purpose, an analysis of the different stages of the life-cycle of the bridge is carried out. In addition, a study of the environmental impact of a product, process, or service can be made to improve the environmental impact associated with its activities. Case Study For the purpose of this work, a bridge is selected to carry out the optimization-LCA. First, a cost-optimization of the bridge will be carried out and then a LCA of the cost-optimized bridge will be applied to obtain a complete environmental profile. In the next points, a precise description of the bridge will be presented and then the cost-optimization and the LCA will be described in detail for the bridge described. Bridge Description The bridge studied is a single span prestressed concrete precast bridge of 40 m. The section of the bridge is formed by two prestressed concrete precast isostatic beams with a U-shaped cross-section. The cross-section integrates a 12 m upper reinforced concrete slab. Note that the substructure is not included in the analysis since it depends on the ground characteristics and the orography. Figure 1 shows a general view of the bridge. The bridge is located in the eastern coastal area of Spain and the environmental ambient corresponds to XC-4 according to EN 206-1 [24]. Thus, corrosion is mainly caused by carbonation. Optimization In this section, the cost-optimization of the prestressed concrete precast bridge will be explained. This optimization process consists in the minimization of the cost C, Equation (1), while the restrictions gj of Equation (2) are satisfied: C = f(x1, x2, …, xn) (1), with gj(x1, x2, …, xn) ≤ 0 (2). Note that x1, x2, …, xn are the design variables used for the optimization. The objective function C expresses the cost of the bridge and the restrictions gj are the serviceability limit states (SLS), the ultimate limit states (ULS), the durability limit states and the geometric and constructability constraints of the problem. There are 40 design variables, including eight variables that define the geometry of the section, two that define the concrete of the slab and the beam, four that define the prestressed steel and 26 that define the reinforcing steel. Furthermore, there are a set of parameters that have no influence on the optimization problem, such as the width, span and web inclination. Structural constraints have been considered according to the Spanish codes [25,26]. The ULSs verify if the ultimate resistance is greater than the ultimate load effect. Besides, the minimum amount of reinforcing steel for the stress requirements and the geometrical conditions are also considered. The SLSs examine different aspects. The cracking limit state requires compliance of the compression and tension cracks, as well as the decompression limit state in the area where the post-tensioned steel is located. Deflections are limited to 1/1000 of the free span length for the quasipermanent combination. In addition, the concrete and steel fatigue has been considered in this study. Table 1 summarizes the ULSs and SLSs considered. In this optimization, a hybrid memetic algorithm (MA) is applied. The MA is a population-based approach to stochastic optimization that combines the parallel search used by evolutionary algorithms with a local search of the solutions forming a population [27]. Regarding the local search used, a variable-depth neighbourhood search (VDNS) is used as a variant of the very large-scale neighbourhood search (VLSN) [28]. In this MA-VDNS, a set of 500 random solutions (n) is generated as the population. Then each of these solutions is improved by means of a VDNS search to reach a local optimum. To this end, the algorithm begins by changing only one variable and, when ten consecutive movements have been performed without improvement (no_imp), there will be an increase in the number of variables (var) that are changed simultaneously, up to eight. Then, with this new improved population, a genetic algorithm is applied. The genetic algorithm develops the population, which is subjected to random movements (mutations and crossovers), preserving the better adapted solutions. The cost assessment takes into account a penalty cost; nevertheless, the VDNS does not consider the penalty cost (only feasible solutions are accepted) in order to avoid the early divergence of the algorithm. The VDNS is applied to the new generation up to 150 generations. Figure 2 shows a flow chart of the hybrid memetic algorithm.
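A compact sketch of the MA-VDNS loop described above is given below, assuming a generic encoding of a solution as a list of numeric design variables and externally supplied cost, penalised-cost and feasibility functions (all hypothetical placeholders). The population size of 500, the ten-move no-improvement trigger, the maximum of eight simultaneously perturbed variables and the 150 generations follow the text; the perturbation magnitude and the crossover/mutation details are illustrative simplifications rather than the authors' exact operators.

```python
import random

POP_SIZE, GENERATIONS, NO_IMP_LIMIT, MAX_VARS = 500, 150, 10, 8

def vdns(solution, cost, is_feasible):
    """Variable-depth neighbourhood search: only feasible, improving moves are kept."""
    best, best_cost = solution[:], cost(solution)
    vars_to_change, stalled = 1, 0
    while vars_to_change <= MAX_VARS:
        candidate = best[:]
        for i in random.sample(range(len(candidate)), vars_to_change):
            candidate[i] += random.uniform(-1.0, 1.0)  # perturb the selected design variables
        if is_feasible(candidate) and cost(candidate) < best_cost:
            best, best_cost, stalled = candidate, cost(candidate), 0
        else:
            stalled += 1
            if stalled >= NO_IMP_LIMIT:  # ten moves without improvement: widen the neighbourhood
                vars_to_change, stalled = vars_to_change + 1, 0
    return best

def memetic_search(random_solution, cost, penalised_cost, is_feasible):
    # Improve an initial random population with the local search.
    population = [vdns(random_solution(), cost, is_feasible) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        mother, father = random.sample(population, 2)
        cut = random.randrange(1, len(mother))
        child = mother[:cut] + father[cut:]                                # crossover
        child[random.randrange(len(child))] += random.uniform(-1.0, 1.0)  # mutation
        child = vdns(child, cost, is_feasible)
        # Keep the better adapted solutions; selection uses the penalised cost.
        worst = max(range(POP_SIZE), key=lambda i: penalised_cost(population[i]))
        if penalised_cost(child) < penalised_cost(population[worst]):
            population[worst] = child
    return min(population, key=penalised_cost)
```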
The solution obtained for the 40 m-long prestressed concrete precast bridge has a total cost of 108,274.45 €. The geometry of this bridge is shown in Figure 3. The amount of beam concrete used is 0.1117 m3/m2, with a strength of 35 MPa, while the amount of slab concrete used is 0.1797 m3/m2, with a strength of 40 MPa. Furthermore, the precast concrete beams require 6163 kg (12.52 kg/m2) of reinforcing steel and 5184 kg (10.53 kg/m2) of prestressed steel, while the concrete slab is defined by 11,772 kg (23.92 kg/m2) of reinforcing steel. Life-Cycle Assessment In this section, the guidance defined by ISO 14040:2006 [22] will be applied to the bridge studied. For this purpose, the different steps will be particularized to the case of study, describing and taking into account the specific characteristics considered for this study. Figure 4 shows a general view of the LCA process carried out. Goal and Scope The LCA will be divided into the four main phases of the whole life-cycle of the bridge for a better understanding: (1) manufacturing; (2) construction; (3) use and maintenance; and (4) end of life. Each phase will be defined separately and thus each phase will be limited by its own system boundary. The functional unit will be 1 m of the length of the bridge. The final goal is to find the environmental impact of each phase and consequently the global environmental impact of the bridge by adding the environmental impacts of the different phases. Manufacturing The manufacturing phase includes the upstream processes of the products used in the bridge and the associated transport, from the acquisition of raw materials to materials that are ready to be used in the construction of the bridge. The prestressed concrete precast bridge has three main components: beams of precast concrete, fresh concrete and steel. Therefore, first it is necessary to delimit the activities associated with each product, including the transport.
On one hand, the manufacture of the beams of precast concrete takes into account all the activities from the extraction of raw materials to the finishing of the beams in the precast plant, while the manufacture of the fresh concrete for the slab takes into account the activities from the extraction of raw material to the point when the concrete is ready to be used in the construction place.In both cases, the distance considered between the quarry and the precast plant or concrete plant is 50 km, the distance considered in the cement transportation is 20 km and the distance between the concrete plant and the construction site is 50 km.Furthermore, the dosage of concrete is taken into account to achieve the strength required.On the other hand, the manufacture of the steel takes into account all the activities from the acquisition of the raw material to the point when the steel is ready to be used in the precast plant or construction site.Considering that the bridge is built in Spain, the analysis takes the Spanish steel production characteristics.This implies that 67% of the steel is produced in an electric arc furnace and the remaining 33% is produced in a basic oxygen furnace.This ratio generates a recycling rate of steel of 71%.The distance considered between the steel production plant and the precast plant or construction site is 100 km.Table 2 shows the amount of material needed for the beam and slab and the dosage of the concrete in both cases. Construction The construction phase includes all the materials and construction machinery necessary for the erection of the bridge.It includes the transportation and elevation of precast beams using special transport over 50 km.Furthermore, the bridge slab is considered to be cast in place.The construction machinery considered for the slab construction was obtained from the Bedec database [29].The concrete machinery consumes 123.42 MJ of energy and emits 32.24 kg of CO 2 per m 3 of concrete.The distance travelled considered by the construction machinery is 50 km.In addition, the formwork is made by wood and can be reused 3 times. Use and Maintenance The maintenance and use phase includes everything that happens in the service life of the bridge.It takes some activities and processes (considering its own maintenance activities and the traffic detour due to the closure of the bridge) and the fixed CO 2 .On one hand, the bridge needs one maintenance period of 2 days to satisfy with the regulations during its 120 years of service life.This maintenance activity considers that the concrete cover is replaced by a repair mortar.The maintenance action consists firstly of removing the concrete cover and providing a proper surface for the coating adhesion.Then, a bonding coat is applied between the old and new concrete.Finally, a repair mortar is placed to provide a new reinforcement corrosion protection [30].Note that the study considers that the quality on-site work is adequate to guarantee that the bridge does not have durability problems during the service life.Besides, it is important to highlight that other maintenance activities to repair or replace equipment elements may take place.However, they are not evaluated in this study. 
This study takes into account all the machinery necessary to repair the deterioration of the bridge including the transport to the bridge location and the increase in emissions generated due to the traffic detour [13,14].The traffic detour is considered taking into account the average daily traffic of 8500 vehicles/day, where trucks comprise 10% of vehicles and a detour distance of 2.9 km.On the other hand, the fixation of the CO 2 by the concrete is a widely studied fact [31,32] that has been considered in the bridge studied. End of Life The end-of-life phase includes everything that happens after the service life of the bridge.All the activities and processes associated with this phase are related with the demolition of the bridge and the treatment of the generated wastes.On one hand, demolition activities for the destruction or dismantling of the bridge will be necessary.These demolition activities take into account all the machinery necessary for this purpose.On the other hand, the treatment of generated wastes takes into account a greater set of activities depending on the purpose of the processing.In this case, the bridge will be destroyed, after which all the wastes will be transported to a sorting plant where the concrete and steel will be separated.The concrete will be crushed and transported to a landfill and in this way, the complete carbonation of the concrete [32] and thus a higher fixation of CO 2 is assured.Seventy-one per cent of the steel will be recycled and in this way, the life-cycle of the bridge ends. Inventory Analysis The major part of the information of the products or processes used to define the activities of the whole life-cycle of the bridge is obtained from Ecoinvent database [17].In the case of the information of the products or processes needed for the environmental impact assessment that do not exist in the Ecoinvent database, the data will be created by means of the data obtained from the literature or the Bedec database [29]. The Ecoinvent database is one of the most complete databases for the construction sector and has been created and grown thanks to the information obtained from different institutions.It was created in 2004 through the efforts of the several Swiss Federal Offices and research institutes.That implies that the major part of the information existing in the first versions of Ecoinvent was obtained from Swiss institutions but later, data from other countries were inserted.In this case, the bridge is located on the eastern coast of Spain.In the Ecoinvent database there is no information about this region and therefore it is necessary to consider information about the products or processes from other regions that do not coincide exactly with the products or processes used on the eastern coast of Spain.That means that there is inconsistency between the real data and the data from the Ecoinvent database.For this reason, uncertainty is applied to the Ecoinvent data.The uncertainty is divided into two parts: the first part concerns the type of product or process [33] and the second part concerns the differences between the real data and the data considered by means of the pedigree matrix [34]. 
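Because the background data do not match the Spanish context exactly, uncertainty is attached to the Ecoinvent records and propagated to the results (the coefficients of variation reported later for the midpoint categories). A minimal Monte Carlo sketch of such a propagation is shown here; the flows, the lognormal geometric standard deviations and the characterization factors are invented placeholders for illustration only, not values taken from Ecoinvent or the pedigree matrix.

```python
import math
import random
import statistics

# Hypothetical inventory per functional unit (1 m of bridge): (central value, lognormal GSD).
# The GSD stands in for the combined basic + pedigree-matrix uncertainty of each record.
inventory = {
    "cement_kg": (420.0, 1.10),
    "steel_kg": (575.0, 1.15),
    "transport_tkm": (95.0, 1.25),
}
# Hypothetical GWP characterization factors (kg CO2-eq per unit of each flow).
gwp_factor = {"cement_kg": 0.85, "steel_kg": 1.45, "transport_tkm": 0.11}

def sample_gwp() -> float:
    """Draw one Monte Carlo realisation of the GWP score for the functional unit."""
    total = 0.0
    for flow, (central, gsd) in inventory.items():
        amount = random.lognormvariate(math.log(central), math.log(gsd))
        total += amount * gwp_factor[flow]
    return total

runs = [sample_gwp() for _ in range(10_000)]
mean = statistics.mean(runs)
cov = statistics.stdev(runs) / mean  # coefficient of variation, as reported per phase
print(f"GWP: mean = {mean:.1f} kg CO2-eq per m, CoV = {cov:.1%}")
```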
Impact Assessment There are many works in which the environmental impact assessment is carried out taking into account a small number of indicators, of which the CO 2 emissions are the most popular [35,36].Despite the importance of the emission of CO 2 , a complete impact assessment must consider a set of indicators that represent a complete environmental profile.That implies the use of environmental impact assessment methods.These methods can be separated depending on the approach used: midpoint or endpoint.On one hand, the midpoint approach defines the environmental profile by means of a set of impact categories.One of the most popular methods that take into account the midpoint approach is the CML.On the other hand, the endpoint approach defines the environmental profile considering only a small set of damage categories.One of the most frequently used methods that consider the endpoint approach is the Eco-indicator.Both approaches are necessary to carry out a complete environmental interpretation of the bridge.On one hand, the midpoint approach can provide a more accurate and complete environmental profile.On the other hand, the endpoint approach can be easier to interpret.For these reasons, the environmental impact assessment method used in this work is the ReCiPe method [18], whose main objective is to provide a combination of the Eco-indicator and CML, considering the midpoint and endpoint approaches. Interpretation The results are obtained considering the descriptions presented in the preceding sections.As stated above, the ReCiPe method will be used to carry out the environmental impact assessment of the bridge.For this purpose, by means of the midpoint approach, 18 impact categories will be shown with the associated uncertainty.In addition, the contribution of the different processes of the bridge life-cycle for the most popular impact categories will be represented.In the endpoint approach, the three damage categories are studied.Both approaches allow a higher level of interpretation. Midpoint Approach The midpoint approach of the ReCiPe method provides a complete environmental profile of each stage of the bridge life-cycle represented by 18 impact categories: agricultural land occupation (ALO), climate change (GWP), fossil depletion (FD), freshwater ecotoxicity (FEPT), freshwater eutrophication (FEP), human toxicity (HTP), ionizing radiation (IRP), marine ecotoxicity (MEPT), marine eutrophication (MEP), metal depletion (MD), natural land transformation (NLT), ozone depletion (OD), particulate matter formation (PMF), photochemical oxidant formation (POFP), terrestrial acidification (TAP), terrestrial ecotoxicity (TEPT), urban land occupation (ULO) and water depletion (WD).This large amount of information makes the results difficult to interpret.Although it is difficult to achieve a global assessment of the environmental impact of the bridge with the information obtained by means of the midpoint approach, it is very helpful to obtain more accurate knowledge of the impact of each category and the contribution of each process to the different impact categories. 
As explained above, the data used for the environmental impact assessment do not correspond with the real data.This implies that the uncertainty associated with the different products or processes should be taken into account to obtain more realistic results.Table 3 shows the mean and coefficient of variance of each impact category for each bridge life-cycle phase.Although it is not possible to carry out a global assessment for each bridge life-cycle phase, it is possible to obtain information about the phase in which each impact category is the most significant and the variance of the information obtained.In this way, it can be observed that the manufacturing phase is the phase in which there are a higher number of impact categories with the highest contribution followed by the use and maintenance phase.The impact categories with the highest contribution in the manufacturing phase are ALO, GWP, FEPT, FEP, HTP, IRP, MEPT, MD, TETP, ULO and WD and the impact categories with the highest contributions to the use and maintenance phase are FD, MEP, NLT, ODP, PMFP, POFP and TAP.Neither the construction phase nor the end of life phase has impact categories with the highest contribution.All of this can be seen better in Figures 5 and 6, in which the bars represent the ratio of the contribution of each impact category to each life-cycle phase in relation to the highest contribution.In addition, Table 3 shows the variance of each result.In this way, although the GWP has the highest variance in the manufacturing phase, the manufacturing phase is the one in which more impact categories have the lowest variance, with a mean of 7.13%.The construction phase has the highest mean of variances (17.15%), followed by the end-of-life phase (13.16%) and the use and maintenance phase (10.58%).Furthermore, the impact category with the highest coefficient of variation is the ULO (17.28%) and the impact category with the lowest coefficient of variation is the ALO (8.04%). 
Another type of information that can be obtained by the midpoint approach is the contribution of the different products or processes to each impact category. For illustrative purposes, only three of the most popular impact categories (GWP, OD and PMF) will be studied more exhaustively, displaying the contribution of the different products or processes to each bridge life-cycle phase. Figures 7-10 show the contributions of the most important processes for each bridge life-cycle phase. Figure 7 corresponds to the manufacturing phase and it is possible to see that the most important associated processes are cement production, steel production and transport. Cement production makes the highest contribution to the GWP, namely 46.49% of the total, but in the PMF and OD categories steel production has the higher ratio, with percentages of 76.14% and 57.44%, respectively. Furthermore, it can be seen that, although the GWP has a low percentage of other processes (6.07%), cement production, steel production and transport represent the larger part of the environmental impact of this bridge life-cycle phase. Figure 8 corresponds to the construction phase, and the processes that lead to practically all the environmental impacts are those due to the manipulation of fresh concrete and the transport and elevation of the precast beams. Figures 9 and 10 show the use and maintenance phase and the end-of-life phase, in which the fixed CO2 is taken into account. In the GWP impact category, it can be seen that there is a positive impact. On one hand, in the use and maintenance phase, the amount of CO2 fixed is much lower than the CO2-eq produced by the maintenance activities and the traffic detour, because the concrete surface in contact with the environment represents a very low proportion of the total amount of concrete in the bridge. The percentage of the CO2 fixed is −3.84%, while the percentages of maintenance activities and traffic detour are 89.95% and 13.89%, respectively, adding up to 100% because the global GWP impact in this phase is positive. The ratio of the contribution of the maintenance activities and traffic detour can be modified considerably as a function of the features of the traffic diversion (distance, average daily traffic and percentage of trucks). On the other hand, in the end-of-life phase, the amount of CO2 fixed (−254.05%) is higher than the CO2-eq produced by the demolition activities (22.40%), the waste treatment (36.21%) and the associated transport (96.18%). The total contribution of the processes in the end-of-life phase is negative, adding up to −100%. In the other impact categories (PMF and OD), the maintenance activities and transport make the major contribution to each bridge life-cycle phase. Endpoint Approach Despite the large amount of information obtained by means of the midpoint approach, it is very difficult to obtain a global environmental impact assessment. For this purpose, the endpoint approach is more useful. This approach provides only three damage categories (human health, resources and ecosystem), which are easier to interpret.
Table 4 shows the mean and coefficient of variance of the three damage categories. Although the reference unit of the different damage categories remains different, carrying out the normalization and weighting of three categories is easier than doing so for 18 categories. In fact, ReCiPe allows the normalization of the three damage categories by converting the reference unit of each damage category into points. That makes it easier to interpret the global environmental assessment of the bridge. Figure 11 shows the normalized value of each damage category for the whole life-cycle of the bridge and Figure 12 displays the contribution of each phase considering that the different damage categories have the same importance. On one hand, Figure 11 shows that human health is the most important damage category, followed by resources and ecosystem. On the other hand, in Figure 12 the contribution of the different phases using the endpoint approach can be seen. The manufacturing phase is the phase with the highest contribution to the bridge life-cycle, followed by the use and maintenance phase, and both the construction phase and the end-of-life phase make very low contributions compared to the other two phases.
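Since the endpoint damage categories are expressed in different reference units, a single score is obtained by normalizing each category into points and then weighting them; in Figure 12 the categories are given the same importance. The sketch below illustrates this aggregation step; the damage values, normalization references and equal weights are invented placeholders rather than results or factors from the study.

```python
# Hypothetical endpoint results per life-cycle phase (units differ per category,
# e.g. DALY for human health, species.yr for ecosystems, monetary units for resources).
endpoint = {
    "manufacturing":        {"human_health": 2.1e-4, "ecosystems": 6.0e-7, "resources": 9.5},
    "construction":         {"human_health": 1.5e-5, "ecosystems": 4.0e-8, "resources": 0.7},
    "use_and_maintenance":  {"human_health": 9.0e-5, "ecosystems": 2.5e-7, "resources": 4.1},
    "end_of_life":          {"human_health": 8.0e-6, "ecosystems": 2.0e-8, "resources": 0.4},
}
# Illustrative normalization references and equal weights for the three categories.
normalization = {"human_health": 2.0e-2, "ecosystems": 1.5e-4, "resources": 3.0e3}
weights = {"human_health": 1 / 3, "ecosystems": 1 / 3, "resources": 1 / 3}

def single_score(damages):
    """Normalize each damage category into points and apply the weights."""
    return sum(weights[c] * damages[c] / normalization[c] for c in damages)

scores = {phase: single_score(d) for phase, d in endpoint.items()}
total = sum(scores.values())
for phase, pts in scores.items():
    print(f"{phase}: {pts:.4f} points ({pts / total:.1%} of the life-cycle total)")
```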
Conclusions Reduction of the environmental impact is a trend that must be taken into account due to the environmental problems that exist nowadays. In this respect, the construction sector has a large margin for improvement. The design of structures or buildings must consider the aspects of the three pillars of sustainability. The assessment of the environmental impact during the whole life is a factor that must be taken into account in the design of structures or buildings. Although CO2 emissions are not the only indicator to be considered in the environmental assessment, due to the relationship of this indicator with the cost, it is used to obtain a bridge with the lowest cost and a low environmental impact. Once this bridge has been obtained, a complete environmental assessment is carried out. For this purpose, a heuristic optimization by means of a hybrid memetic algorithm is used to obtain a cost-optimized prestressed concrete precast bridge and thus a low amount of associated CO2. Then, the midpoint and endpoint approaches of the ReCiPe method are used to obtain a complete environmental profile of the bridge. These different approaches make it possible to obtain complementary data that provide different information. While the midpoint approach provides detailed information, the endpoint approach provides more concentrated information, so it is possible to obtain only one score to assess all the impacts. Regarding the results of the midpoint approach, the manufacturing phase and the use and maintenance phase are the phases with the highest environmental impact. With this knowledge, it is interesting to determine the processes that make the biggest contributions in these phases to try to reduce the environmental impact. Cement production and steel production are the processes with the highest environmental impact in the manufacturing phase, while the maintenance activities have the most environmental impact in the use and maintenance phase. Therefore, the midpoint approach indicates the process with the highest contribution in each impact category and, in this way, it is possible to know which process to modify depending on the impact category to be improved. The midpoint approach provides detailed information but does not offer a single score that represents the global environmental impact of the bridge. For this purpose, the endpoint approach is used. As can be deduced from the midpoint approach, the manufacturing phase and the use and maintenance phase are the ones with the highest environmental impact.
After studying both the midpoint and endpoint approaches, the results show the need for a complete environmental profile to evaluate the environmental impact of the bridge. The midpoint approach provides information that makes it possible to identify the processes in which improvements should be carried out to improve specific impact categories of the bridge, but the endpoint approach provides a single score that is able to evaluate the global environmental impact of the bridge. Furthermore, although CO2 emissions are an important indicator in the environmental impact assessment, in some cases they are not sufficient to obtain an accurate environmental evaluation and it is necessary to take into account all the other impact categories. Figure 1. General view of the prestressed concrete precast bridge. Figure 5. Impact categories of manufacturing and construction stage. Figure 6. Impact categories of use and end of life stage. Figure 9. Use and maintenance phase. Figure 12. Contribution of bridge life-cycle phases. Table 1. Ultimate and serviceability limit states: torsion; torsion combined with flexure and shear; fatigue; crack width < 0.2 mm; compression and tension stress; decompression in post-tensioned steel depth; deflection for the quasipermanent combination < 1/1000. Table 2. Amount of materials. Table 3. Midpoint approach.
2018-12-31T09:55:32.904Z
2018-03-02T00:00:00.000
{ "year": 2018, "sha1": "61656e721b371c8205c7fc8b64fb57146d7c15fc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/10/3/685/pdf?version=1520001234", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "61656e721b371c8205c7fc8b64fb57146d7c15fc", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Economics" ] }
258418336
pes2o/s2orc
v3-fos-license
CED: Catalog Extraction from Documents Sentence-by-sentence information extraction from long documents is an exhausting and error-prone task. As the indicator of document skeleton, catalogs naturally chunk documents into segments and provide informative cascade semantics, which can help to reduce the search space. Despite their usefulness, catalogs are hard to be extracted without the assist from external knowledge. For documents that adhere to a specific template, regular expressions are practical to extract catalogs. However, handcrafted heuristics are not applicable when processing documents from different sources with diverse formats. To address this problem, we build a large manually annotated corpus, which is the first dataset for the Catalog Extraction from Documents (CED) task. Based on this corpus, we propose a transition-based framework for parsing documents into catalog trees. The experimental results demonstrate that our proposed method outperforms baseline systems and shows a good ability to transfer. We believe the CED task could fill the gap between raw text segments and information extraction tasks on extremely long documents. Data and code are available at \url{https://github.com/Spico197/CatalogExtraction} Introduction Information in long documents is usually sparsely distributed [13,21], so a preprocessing step that distills the structure is necessary to help reduce the search space for subsequent processes. Catalogs, as the skeleton of documents, can naturally locate coarse information by searching the leading section titles. As exemplified in Figure 1, the debt balance "474.860 billion yuan" appears in only one segment in the credit rating report that is 30 to 40 pages long. Taking the whole document into Information Extraction (IE) systems is not practical in this condition. By searching the catalog tree, this entity can be located in the "Government Debt Situation" section with prior knowledge. Unfortunately, most documents are in plain text and do not contain catalogs in an easily accessible format. Thus, we propose the Catalog Extraction from Documents (CED) task as a preliminary step to any extremely long document-level IE tasks. In this manner, fine-grained entities, relations, and events can be further extracted within paragraphs instead of the entire document, which is pragmatic in document-level entity relationship extraction [12,15,14] and document-level event extraction [1]. Designing handcrafted heuristics may be a partial solution to the automatic catalog extraction problem. However, the performance is limited due to three major challenges: 1) Section titles vary across documents, and there are almost no common rules. For documents that are in the same format or inherited from the same template, the patterns of section titles are relatively fixed. Therefore, it is common to use regular expression matching to obtain the whole catalog. However, such handcrafted heuristics are not reusable when the formats of documents change, and researchers have to design new patterns from scratch, making catalog extraction laborious. 2) Catalogs have deep hierarchies with five-to sixlevel section headings. As the level of section headings deepens, titles become increasingly complex, and simple rule systems usually cannot handle fine-grained deep section headings well. 3) A complete sentence may be cut into multiple segments due to mistakes in data acquisition tools. For example, Optical Character Recognition (OCR) systems are commonly used for obtaining document texts. 
However, these systems often make mistakes, and sentences may be incorrectly cut into several segments by line breaks. These challenges increase the difficulties of using handcrafted rules. To address the CED task, we first construct a corpus with a total of 650 manually annotated documents. The corpus includes bid announcements, financial announcements, and credit rating reports. These three types of documents vary in length and catalog complexity. This corpus is able to serve as a benchmark for the evaluation of CED systems. Among these three sources, bid announcements are the shortest in length with simple catalog structures, and financial announcements contain multifarious heading formats, while credit rating reports have deep and nested catalog structures. In addition, we collect documents from Wikipedia with catalog structures as a large-scale corpus for general model pretraining to enhance the transfer learning ability. These four types of data cover the first two challenges in catalog extraction. We also chunk sentences to simulate the incorrect segmentation problem observed in OCR systems, which covers the third challenge in CED. Based on the constructed dataset, we design a transition-based framework for the CED task. The catalog tree is formulated as a stack and texts are encased in an input queue. These two buffers are used to help make action predictions, where each action stands for a control signal that manipulates the composition of a catalog tree. By constantly comparing the top element of the catalog stack with one text piece from the input queue, the catalog tree is constructed while action predictions are obtained. The final experimental results show that our method achieves promising results and outperforms other baseline systems. Besides, the model pre-trained on Wikipedia data is able to transfer the learned information to other domains when training data are limited. Our contributions are summarized as follows: -We propose a new task to extract catalogs from long documents. -We build a manually annotated corpus for the CED task, together with a large-scale Wikipedia corpus with catalog structures for pre-training. The experimental results show the efficacy of low-resource transfer. -We design a transition-based framework for the task. To the best of our knowledge, this is the first system that extracts catalogs from plain text segments without handcrafted patterns. Related Work Since CED is a new task that has not been widely studied, in this section, we mainly introduce approaches applied to similar tasks below. Parsing Problems: Similar to other text-to-structure tasks, CED can be recognized as a parsing problem. A common practice to build syntactic parsers is biaffine-based frameworks with delicate decoding algorithms (e.g., CKY, Eisner, MST) to obtain global optima [4,19]. However, when the problem shifts from sentences to documents, former token-wise encoding and decoding methods become less applicable. As to documents, there are also many popular discourse parsing theories [8,11,6], which aim to extract the inner semantics among Elementary Discourse Units (EDU). However, the number of EDUs in current corpora is small. For instance, in the popular RST-DT corpus, the average number of EDU is only 55.6 per document [17]. When the number of EDUs grows larger, the transition-based method becomes a popular choice [7]. 
Our proposed CED task is based on naive catalog structures that are similar to syntactic structures, but some traditional parsing mechanisms are not suitable since one document may contain thousands of segments. To this end, we utilize the transition-based method to deal with the CED task.

Transition-based Applications: The transition-based method parses texts to structured trees in a bottom-up style, which is fast and applicable for extremely long documents. Beyond their successful applications in syntactic and discourse parsing [20,7,5], transition-based methods are also widely used in information extraction tasks with particular actions, such as Chinese word segmentation [18], discontinuous named entity recognition [3] and event extraction [16]. Considering all the characteristics of the CED task, we propose a transition-based method to parse documents into catalog trees.

Dataset Construction

In this section, we introduce our constructed dataset, the ChCatExt. Specifically, we first elaborate on the pre-processing, annotation and post-processing methods, then we provide detailed data statistics.

Processing & Annotation

We collect three types of documents to construct the proposed dataset, including bid announcements 3 , financial announcements 4 and credit rating reports 5 . We adopt Acrobat PDF reader to convert PDF files into docx format and use Office Word to make annotations. Annotators are required to: 1) remove running titles, footers (e.g., page numbers), tables and figures; 2) annotate all headings with different outline styles; and 3) merge mis-segmented paragraphs. To reduce the annotation bias, each document is assigned to two annotators, and an expert will check the annotations and make final decisions in case of any disagreement. Due to the length and structure variations, one document may take up to twenty minutes for an annotator to label. After the annotation process, we use pandoc 6 and additional scripts to parse these files into program-friendly JSON objects. We insert a pseudo root node before each object to ensure that every document object has only one root.

In real applications, documents are usually in PDF formats, which are immutable and often image-based. Using OCR tools to extract text contents from those files is a common practice. However, the OCR tools often split a natural sentence apart when a sentence is physically cut by line breaks or page turnings in PDF, as shown in Figure 1. To simulate real-world scenarios, we randomly sample some paragraphs with a probability of 50% and chunk them into segments. For heading strings, we chunk them into segments with lengths of 7 to 20 with jieba 7 assistance. This makes heading segmenting more natural; for example, "招标公告" will be split into "招标 (zhao biao)" and "公告 (gong gao)" instead of "招 (zhao)" and "标公告 (biao gong gao)". For other normal texts, we split them into random target lengths between 70 and 100, as illustrated in the sketch below. Since the workflow is rather complicated, we will open-source all the processing scripts to help further development.

Table 1. Data statistics. BidAnn refers to bid announcements, FinAnn is financial announcements and CreRat is credit rating reports. One node may contain multiple segments in its content, and we list the number of nodes here. Depth represents the depth of the document catalog tree (text nodes are also included). Length is obtained by counting the number of document characters.

In addition to the above manually annotated data, we collect 665,355 documents from Wikipedia 8 for model pre-training.
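The chunking simulation mentioned above can be sketched as follows. This is a minimal illustration rather than the released preprocessing script; the thresholds (7–20 characters for headings, 70–100 for normal paragraphs, 50% sampling probability) follow the text, while the exact way jieba is invoked here is an assumption.

```python
import random

# Minimal sketch of the OCR-style segment chunking described above.
# Assumes jieba is installed; thresholds follow the text; not the released script.
import jieba

def chunk_heading(text, lo=7, hi=20):
    """Split a heading into word-aligned segments of roughly lo..hi characters."""
    segments, buf = [], ""
    for word in jieba.cut(text):
        if len(buf) + len(word) > hi and len(buf) >= lo:
            segments.append(buf)
            buf = ""
        buf += word
    if buf:
        segments.append(buf)
    return segments

def chunk_paragraph(text, lo=70, hi=100):
    """Split a normal paragraph into segments of random target lengths."""
    segments, start = [], 0
    while start < len(text):
        target = random.randint(lo, hi)
        segments.append(text[start:start + target])
        start += target
    return segments

def simulate_ocr_chunking(node_text, is_heading, prob=0.5):
    """Chunk a node's text with probability `prob`, as done when building ChCatExt."""
    if random.random() > prob:
        return [node_text]
    return chunk_heading(node_text) if is_heading else chunk_paragraph(node_text)
```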
Most of these Wikipedia documents are shallow in catalog structures and short in text lengths. We keep documents with a catalog depth from 2 to 4 to reach higher data complexity, so that these documents are more similar to the manually annotated ones. After that, 214,989 documents are obtained. We chunk these documents in the same way as the manually annotated ones to simulate OCR segmentation.

Data Statistics

Table 1 lists the statistics of the whole dataset. Among the three types, BidAnn has the shortest length and the shallowest structure, and the headings are similar to each other. FinAnn is more complex in structure than BidAnn and contains more nodes. Moreover, there are many forms of headings in FinAnn without obvious patterns, which increases the difficulty of catalog extraction. CreRat is the most sophisticated one among all types of data. Its average length is 8.5 times longer than BidAnn while the average depth is 4.59. However, it contains fewer variations in headings, which may be easier for models to locate. Compared to manually annotated domain-specific data, Wiki is easy to obtain. The structure depth is similar to that of FinAnn while its length is 1.5k shorter. Because of the large size, Wiki is well suited for model pre-training and parameter initializing.

It is worth noting that leaf nodes can be headings or normal texts in catalog trees. Since normal texts cannot lead a section, all texts are leaf nodes in catalogs. However, headings could also be leaf nodes if the leading section has no children. Such a phenomenon appears in approximately 24% of documents. Therefore, one node cannot be recognized as a text node simply by the number of children, which makes the CED task more complicated.

Table 2. An example of transition-based catalog tree construction. Elements in red bold represent the current stack top s, and elements in blue underline represent the input text q. $ means the terminal of Q and the finale of action prediction. Columns: Step, Catalog Tree, Stack S, Input Queue Q, Predicted Action.

Transition-based Catalog Extraction

In this section, we introduce details of our proposed TRAnsition-based Catalog trEe constRuction method TRACER. We first describe the transition-based process, and then introduce the model architecture.

Actions & Transition Process

The transition-based method is designed for parsing trees from extremely long texts. Since the average length of our CreRat documents is approximately 15k Chinese characters, popular globally optimized tree algorithms are apparently too costly to be utilized here. Action design plays an important role in our transition-based method. There are two buffers here: 1) the input queue Q providing one text segment q at a time; and 2) the tree buffer S that records the final catalog tree, where the current stack top points to s. Actions are obtained by comparing s and q continuously, which results in the buffers changing. As the comparison process continues, actions compose a control sequence to build the target catalog tree simultaneously. To solve the mentioned challenges, actions are designed to distinguish between headings and texts. Our actions can also capture the difference between headings from adjacent depth levels. In this way, we construct the catalog tree without regard to its depth and complexity. Additionally, we propose an extra action for text segment concatenation.
Based on these facts, we design 4 actions as follows: -Sub-Heading: current input text q is a child heading node of s; -Sub-Text: current input text q is a child text node of s; -Concat: current input text q is the latter part of s and their contents should be concatenated; -Reduce: the level of q is above or at the same level as s, and s should be updated to its parent node. An example is provided in Table 2. To start the prediction, a Root node is given in advance. The first heading Credit Rating Report is regarded as a child of Root. Then, Debt Situation becomes another heading node. After that, the Sub-Text action suggests that The balance is the child node of Debt Situation as the body text. Action Concat concatenates two body text. Next, action Reduce leads to the second layer from the third one. We can eventually build a catalog tree with such a sequence of actions. Furthermore, we present two constraints to avoid illegal results. The first one is that the action between Root node and the first input q can only be Sub-Heading or Sub-Text; Another constraint restricts text nodes to be leaf nodes in the tree, and only Reduce and Concat actions are allowed when s is not a heading. If the predicted action is illegal, we take the second-best prediction as the final result. Model Architecture As Figure 2 shows, the given inputs s and q are encoded via a pre-trained language model (PLM). Here, we use a light version of Chinese whole word masking RoBERTa (RBT3) [2] to obtain encoded representations s and q. After concatenation, g = s||q is fed into Feed-Forward Networks (FFN). The FFN is composed of two linear transform layers with ReLU activation function and dropout. Finally we adopt the softmax function to obtain the predicted probabilities as shown in Equation 1. where A denotes all the action candidates. In this way, we can capture the implicit semantic relationship between two nodes. During prediction, we take the action with maximal probability p as the predicted result: where a i ∈ A is the predicted action. As discussed in § 4.1, we use two extra constraints to help force decoding legal action results. If a i is an illegal action, we sort the predicted probabilities in reverse order, and then find the legal result with the highest probability. As for training, we take cross entropy as the loss function to help update the model parameters: where I is the indicator function, y a is the gold action, and a i ∈ A is the predicted action. Datasets We further split the datasets into train, development, and test sets with a proportion of 8:1:1 for training. To fully utilize the scale advantage of the Wiki corpus, we use it to train the model for 40k steps and subsequently save the PLM parameters for transferring experiments. Evaluation Metrics We use the overall micro F1 score on predicted tree nodes to evaluate performances. Each node in a tree can be formulated as a tuple: (level, type, content), where level refers to the depth of the current node, type refers to the node type (either Heading or Text), and content refers to the string that the node carries. The F1 score can be obtained by comparing gold tuples and predicted tuples. where N r denotes the number of correctly matched tuples, N g represents the number of gold tuples and N p denotes the number of predicted tuples. Baselines Few studies focus on the catalog extraction task, thus we propose two baselines for objective comparisons. 
1) Classification Pipeline: The catalog extraction task can be formulated in two steps: segment concatenation and tree prediction. For the first step, we take the text pairs as input and adopt the [CLS] token representation to predict the concatenation results. Assuming the depth of a tree is limited, the depth level can be regarded as a classification task with MaxHeadingDepth+1 labels, where "1" stands for the text node label. We use PLM with TextCNN [9] to make level predictions. 2) Tagging: Inheriting the idea of two-step classification from above, the whole task can be formulated as a tagging task. The segment concatenation sub-task reflects the BIO tagging scheme, and the level depth and node type are tagging labels. We use PLM with LSTM and CRF to address this tagging task.

Experiment Settings

Experiments are conducted with an NVIDIA TitanXp GPU. We use RBT3 9 , a Chinese RoBERTa variation, as the PLM. We use AdamW [10] to optimize the model with a learning rate of 2e-5. Models are trained for 10 epochs. The training batch size is 20, and the dropout rate is 0.5. We take 5 trials with different random seeds for each experiment and report average results on the test set with the best model evaluated on the development set. For the classification pipeline and the tagging baselines, we set the maximal heading depth to 8.

Main Results

From Table 3, we find that our proposed TRACER outperforms the classification pipeline and tagging baselines by 5.305% and 4.121% overall F1 scores. The pipeline method requires two separate steps to reveal catalog trees, which may accumulate errors across modules and lead to an overall performance decline. Although the tagging method is a stronger baseline than the pipeline one, it still cannot match TRACER. The reason may be the granularities that these methods focus on. The pipeline and the tagging methods directly predict the depth level for each node, while TRACER pays attention to the structural relationships between each node pair. Besides, since the two baselines need a set of predefined node depth labels, TRACER is more flexible and can predict deeper and more complex structures. As discussed in § 4.1, we use two additional constraints to prevent TRACER from generating illegal trees. The significance of these constraints is presented in the last line of Table 3. If we remove them, the overall F1 score drops 0.794%. The decline is expected, but the variation is small, which shows the robustness of the TRACER model design. Interestingly, the PLM trained on the Wiki corpus does not bring performance improvements as expected. This may be due to the different data distributions between Wikipedia and our manually annotated ChCatExt. The following transferring analysis section § 5.6 contains more results with WikiBert.

Analysis of Transfer Ability

One of our motivations for building a model to solve the CED task is that we want to provide a general model that fits all kinds of documents. Therefore, we conduct transfer experiments, with results listed in Tables 4 to 7. We first train models on three separate source datasets and make direct predictions on target datasets. From the left part of Table 4, we can obtain a rough intuition of the data distribution. The model trained on BidAnn makes poor predictions on FinAnn & CreRat, and gets only 7.391% and 2.361% F1 scores, which also conforms with the former discussions in § 3.2. BidAnn is the easiest one among the three data sources, so a model trained on it generalizes less robustly. FinAnn is shallower in structure, but it contains more variations.
The model trained on FinAnn only obtains a 69.249% F1 score evaluated on FinAnn itself. However, it gets better results on BidAnn (25.557%) and CreRat (14.420%) than the others. The model trained on CreRat gets 92.790% on itself. However, it does not generalize well on the other two sources. We also provide the zero-shot cross-domain results from Wiki to the other three subsets. Although the results are poor under the zero-shot setting, the pre-trained WikiBert shows great transfer ability. Comparing results horizontally in Table 4, we find that the pre-trained WikiBert could provide good generalization and outperforms the vanilla TRACER among 6 out of 9 transferring data pairs. The other 3 pairs' results are very close and competitive. To further investigate the generalization ability of pre-training on the Wiki corpus, we take an extreme condition into consideration, where only a few documents are available to train a model. In this case, as shown in Table 5, we train models with only k source documents and calculate the final evaluation results on the whole target test set. Each model is evaluated on the original source development set to select the best model and then the best model makes final predictions on the target test set. TRACER w/ WikiBert outperforms vanilla TRACER among 23 out of 27 transferring pairs. There is no obvious upward trend when increasing k from 3 to 10, which is unexpected and suggests that the model may suffer from overfitting problems on such extremely small training sets. In most cases of real-world applications, a few target documents are available. Supposing we want to transfer models from source sets to target sets with k target documents available, there are two possible methods to utilize such data. The first one is to train on the source set, and then further train with k target documents; the other one is to concatenate the source set and k targets into a new train set. We conduct experiments under these two settings. The results are presented in Table 6 and 7. Comparing the vanilla TRACER model results, we find that concatenating has 10 out of 18 pairs that outperform the further training method. From k=3 to 10, there are 2, 3, and 5 pairs that show better results, indicating that the concatenation method is better as k increases. WikiBert has different effects under these two settings. In the further training method, WikiBert is more powerful (11 out of 18 pairs), while it is less useful in the concatenation method (8 out of 18 pairs). Overall, we find that: 1) WikiBert achieves good performances, especially when the training set is small; 2) If there are k target documents available besides the source set, WikiBert is not a must, and concatenating the source set with k targets to make a new train set may produce better results. Analysis on the Number of Training Data The left part of Figure 3 shows the average results on each separate dataset with different training data scales. Although BidAnn is the smallest data, the model still gets a 63.460% F1 score and surpasses the other datasets. Interestingly, a decline is observed in BidAnn when the number of training documents increases from 40 to 80. We take it as a normal fluctuation representing a performance saturation since the performance standard deviation is 4.950% when the training data scale is 40. Besides, we find that TRACER has good performance on Cr-eRat. This indicates that TRACER performs well in datasets with deeply nested documents if the catalog heading forms are less varied. 
In contrast, TRACER is lower in performance on FinAnn than BidAnn and CreRat, and it is more data-hungry than other data sources. For ChCatExt, the merged dataset, performance grows slowly with the increase of training data scale, and more data are needed to be fully trained. Comparing the overall F1 performance of 82.390% on the whole ChCatExt, the small scale of the training set may lead to a bad generalization. Analysis on Different Depth From the right bar plot of Figure 3, it is interesting to see the F1 scores are 0% in level 1 text and level 5 heading. This is mainly due to the golden data distribution characteristics that there are no text nodes in level 1, and there are few headings in deeper levels, leading to zero performances. The F1 score on level 2 text is only 43.938%, which is very low compared to the level 3 text result. Considering that there are only 6.092% of text nodes among all the level 2 nodes, this indicates that TRACER may be not robust enough. Combining the above factors, we find that the overall performance increases from level 1 to 2 and then decreases as the level grows deeper. To reduce the performance decline with deeper levels, additional historical information needs to be considered in future work. Conclusion and Future Discussion In this paper, we build a large dataset for automatic catalog extraction, including three domain-specific subsets with human annotations and large-scale Wikipedia documents with automatically annotated structures. Based on this dataset, we design a transition-based method to help address the task and get promising results. We pre-train our model on the Wikipedia documents and conduct experiments to evaluate the transfer learning ability. We expect that this task and new data could boost the development of Intelligent Document Processing. We also find some imperfections from the experimental results. Due to the distribution gaps, pre-training on Wikipedia documents does not bring performance improvements on the domain-specific subsets, although it is proven to be useful under the low-resource transferring settings. Besides, the current model only compares two single nodes each time and misses the global structural histories. Better encoding strategies may need to be discovered to help the model deal with deeper structure predictions. We leave these improvements to future work.
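As a compact recap of the transition process in Section 4, the sketch below builds a catalog tree from a queue of text segments using the four actions (Sub-Heading, Sub-Text, Concat, Reduce) and the two decoding constraints. The action scorer here is only a toy stand-in for the actual softmax(FFN(PLM(s) || PLM(q))) model, and all names are illustrative rather than the released implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

ACTIONS = ["Sub-Heading", "Sub-Text", "Concat", "Reduce"]

@dataclass
class Node:
    content: str
    is_heading: bool
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

def legal(action: str, top: "Node", is_first: bool) -> bool:
    """The two decoding constraints from Section 4.1."""
    if is_first:                        # first segment under Root must become a child
        return action in ("Sub-Heading", "Sub-Text")
    if not top.is_heading:              # text nodes are leaves: only Concat/Reduce
        return action in ("Concat", "Reduce")
    return True

def score_actions(top: "Node", segment: str) -> List[float]:
    """Toy stand-in for softmax(FFN(PLM(top) || PLM(segment)))."""
    return [0.7, 0.1, 0.1, 0.1] if len(segment) < 20 else [0.1, 0.6, 0.2, 0.1]

def build_catalog(segments: List[str]) -> "Node":
    root = Node("Root", is_heading=True)
    top, queue, first = root, list(segments), True
    while queue:
        q = queue[0]
        probs = score_actions(top, q)
        order = sorted(range(len(ACTIONS)), key=lambda i: -probs[i])
        # pick the best *legal* action, falling back to the next-best prediction
        action = next(ACTIONS[i] for i in order if legal(ACTIONS[i], top, first))
        if action == "Reduce":
            top = top.parent or root    # climb one level and re-compare q against the parent
            continue
        queue.pop(0)
        if action == "Concat":
            top.content += q            # q is the latter part of the current node
        else:
            child = Node(q, is_heading=(action == "Sub-Heading"), parent=top)
            top.children.append(child)
            top = child                 # the new node becomes the comparison target
        first = False
    return root
```

In the real system the scorer is the RBT3 encoder with a feed-forward head; the loop above corresponds to the repeated comparison between the stack top s and the queue front q described in § 4.1.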
2023-05-01T01:15:16.508Z
2023-04-28T00:00:00.000
{ "year": 2023, "sha1": "a54f6186bdad9e1ea8f0e7bec2deac6147b032c8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a54f6186bdad9e1ea8f0e7bec2deac6147b032c8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
8907202
pes2o/s2orc
v3-fos-license
Structured scale-dependence in the Lyapunov exponent of a Boolean chaotic map

We report on structures in a scale-dependent Lyapunov exponent of an experimental chaotic map that arise due to discontinuities in the map. The chaos is realized in an autonomous Boolean network which is constructed using asynchronous logic gates to form a map operator that outputs an unclocked pulse-train of varying widths. The map operator executes pulse-width stretching and folding and the operator's output is fed back to its input to continuously iterate the map. Using a simple model, we show that the structured scale-dependence in the system's Lyapunov exponent is the result of the discrete logic elements in the map operator's stretching function.

Understanding the distinct roles of noise and determinism is important for all experimental chaotic systems. We examine a scale-dependent Lyapunov exponent (SDLE) of a Boolean chaotic system that iterates the dynamics of a one-dimensional (1D) map. A SDLE was studied previously as a way to distinguish the entropy of microscopic noise from the entropy of macroscopic chaos [1][2][3]. This distinction is of increasing importance as physical random number generators that use chaotic systems as entropy sources continue to be developed [4][5][6][7][8]. Our experiment iterates a macroscopic tent map with microscopic discontinuities that are blurred by the noise of the system. Here, we demonstrate that scaling of these discontinuities manifests as a structured scale-dependence in the Lyapunov exponent of our Boolean chaotic system.

Boolean chaos is a term used to describe the phenomenon of deterministic dynamics in unclocked Boolean networks with an exponential divergence of neighboring trajectories. Originally, theories of continuous, ideal Boolean networks predicted non-repeating switching in certain networks, but without chaos [9]. Contrary to this prediction, recent experiments showed that physical logic gates can introduce non-ideal effects that give rise to such chaos [10]. Boolean chaos has been reported in both autonomous and driven networks [10,11], both of which yield complex, multi-dimensional dynamics. One-dimensional (1D) Boolean chaotic maps were theorized for specific non-ideal effects [12]. Here, we examine the first experimental 1D Boolean chaotic map and report its unique, multi-scaled features. Our experimental setup is influenced by recent studies of non-chaotic autonomous Boolean systems. In particular, asynchronous networks of logic gates were studied as excitable systems with synchronization patterns [13,14] and phase oscillators with chimera states [15,16]. One appealing feature of these Boolean systems is that they can be implemented entirely on a field-programmable gate array (FPGA), a common component in modern electronics.
This platform allows for large dynamical networks of asynchronous logic gates to easily be built [8,[13][14][15][16]. The implementation of our experiment is also facilitated by an FPGA, where we note that our system is not a simulation on a finite-state machine (see [17] and references therein); it is an unclocked system with non-zero entropy and a potential continuum of dynamical states that are subject to analog effects and experimental noise.

Our experimental system is shown in Fig. 1a, which is initialized by an input voltage pulse of initial pulse-width w_0. This pulse drives a map operator M which consists of two separate functions: a pulse-width folding-function f and a pulse-width gain-function g that approximately doubles a pulse's width (details provided later), where this combination of folding and stretching is a sufficient condition to see chaos [18]. The output voltage of M is labeled as v_out, and a delay line routes v_out back to the input of M, where the delay is long enough to ensure only one pulse is in this feedback loop at a time. We note that this is a time-delay system with an infinite-dimensional phase space, but neighboring pulses do not interact, allowing for 1D dynamics. This system remains in a stable steady state v_out = 0 V until we inject a pulse, and thus w_0 serves as its initial condition. After an initial pulse is injected, the system produces a self-sustaining pulse-train. In Figs. 1b-c, we plot the temporal evolution of v_out, which contains pulses that occur with non-repeating pulse-widths w_i and non-repeating spacings y_i. Thus, the transition times in v_out are the state variables of the chaos. This is different from analog chaotic circuits that iterate 1D maps in discrete or continuous time and use voltage or current as the state variables [19,20]. The power spectral density of v_out (not shown) is broadband with prominent frequency components at integer multiples of 1/T ∼ 13 MHz, where T = w̄ + ȳ ∼ 76 ns is the average pulse-repetition period and can be adjusted with the feedback delay.

To analyze the dynamics, we study w_i, which is plotted in Fig. 1d; as we will show later, y_i is a function of w_i and contains no new information about the dynamics. We construct the return map using (w_i, w_{i+1}) in Fig. 1e, which shows a 1D structure similar to a tent map [18]. Figure 1e also shows that the density of the return map is non-uniform, which differs from an ideal tent map. We fit Fig. 1e using the piecewise-linear function w_{i+1} = m w_i for w_i ≤ τ_n and w_{i+1} = m (2τ_n − w_i) for w_i > τ_n, (1) where m is the average slope and τ_n is the folding point. The fit yields m_fit = 1.95±0.01 and τ_{n,fit} = 11.7±0.01 ns, which shows that the map is not a tent-map of full height. We note that a tent map of full height in the experimental Boolean implementation can only show transient chaos before collapsing to the steady state v_out = 0 V due to short-pulse rejection (SPR) by the physical logic gates [10]. The slope m = 1.95 limits the grammar of the map and prevents pulse widths w_i ≲ 1 ns, allowing for non-transient chaos. Interestingly, we might approximate the Lyapunov exponent λ using m_fit such that λ ∼ ln(1.95)/T [18]. However, using this fit of the return map to estimate λ assumes a continuous map. As we will discuss later, the discrete logic gates in the system's design create discontinuities in the output of the operator M. To avoid assumptions about the experimental system, we instead compute a SDLE from w_i.
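Before turning to the SDLE, the fitted macroscopic map of Eq. (1) can be made concrete with the short sketch below. It simply iterates the piecewise-linear map with the fitted values m ≈ 1.95 and τ_n ≈ 11.7 ns; it is only an illustration of the macroscopic return map, not of the experimental pulse dynamics.

```python
import numpy as np

# Minimal sketch of the fitted piecewise-linear return map, Eq. (1):
#   w_{i+1} = m * w_i                for w_i <= tau_n
#   w_{i+1} = m * (2*tau_n - w_i)    for w_i  > tau_n
# Fitted values quoted in the text: m ~ 1.95, tau_n ~ 11.7 ns.

M_FIT = 1.95
TAU_N = 11.7  # ns

def return_map(w, m=M_FIT, tau_n=TAU_N):
    return m * w if w <= tau_n else m * (2.0 * tau_n - w)

def iterate(w0, n_steps=10_000, m=M_FIT, tau_n=TAU_N):
    """Iterate the macroscopic map from an initial pulse width w0 (in ns)."""
    w = np.empty(n_steps)
    w[0] = w0
    for i in range(n_steps - 1):
        w[i + 1] = return_map(w[i], m, tau_n)
    return w

orbit = iterate(w0=5.0)
print(f"orbit stays within [{orbit.min():.2f}, {orbit.max():.2f}] ns")
# For this piecewise-linear map |f'(w)| = m everywhere, so the naive
# (scale-independent) estimate is simply lambda*T = ln(m).
print(f"naive continuous-map estimate: ln({M_FIT}) = {np.log(M_FIT):.3f}")
```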
We define a SDLE as the divergence of neighboring trajectories, where neighbors (w_i, w_j) satisfy ε < |w_i − w_j| < ε + Δε. An example of neighboring (w_i, w_j) is shown in Fig. 2a. To calculate λ(ε), we average the separation of (w_i, w_j) over neighboring trajectories for a given ε. An example is shown in Fig. 2b, where a linear fit of the local divergence shows λ(ε)/T ∼ 0.7. The calculated λ(ε) for a scan of ε in 10 ps steps is shown in Fig. 2c. In the figure, we use a scaling reference of τ_1, which is the approximate delay time through a single logic element. We note that λ(ε)/T > 0 is an indicator of trajectory separation at an exponential rate. Figure 2c demonstrates that the experimental system exhibits exponential separation from both noise and chaos. Neighboring points on the return map with d_o < τ_1 have an initial separation that is dominated by noise, while for d_o > τ_1 the SDLE shows oscillations in the divergence rate at frequency ∼ 1/(2τ_1) near λ(ε)/T ∼ ln(2) (the value of the Lyapunov exponent for a continuous tent-map of the full height). Thus, the rate of divergence shows a structured scale-dependence about a global rate that is described by the macroscopic features of the return map. Understanding this phenomenon is important for exploiting/avoiding scale-dependent entropy sources.

In the remainder of this paper, we briefly outline the design of Fig. 1a and examine the map operator's functions to motivate a simple model that exhibits a similarly structured SDLE. Our experimental system exploits the propagation delays of pulses as they transmit through logic gates. In the simplest example, the feedback loop τ_N in Fig. 1a acts as a continuous delay line that routes pulses from the output of the map operator back to its input. This delay line is constructed using cascaded NOT gates [13], where even numbers of NOT gates are used to reduce asymmetries between rise and fall times of pulse edges that propagate [10] and preserve pulse widths. The number of NOT gates n sets the propagation delay τ_n. In the map operator, a folding function f is implemented with the circuit in Fig. 3a. In the figure, v_in and v_a are signals for input pulses of width w_in and output pulses of width w_a, respectively, such that w_a ∼ w_in for w_in ≤ τ_n and w_a ∼ (2τ_n − w_in) for τ_n < w_in ≤ 2τ_n. To illustrate this circuit's folding, in Fig. 3b we plot experimental examples of (v_in, v_a) of the folding circuit, and in Fig. 3c, we scan w_in and plot the respective w_a. We model w_a = f(w_in) from Eq. [1] with m = 1, and we note f(w_in) does not address y_i, but based on the folding circuit's operation, we derive y_i = τ_N + τ_n − w_i for w_i ≤ τ_n and y_i = τ_N otherwise.

The pulse-width gain function g of the map operator is shown in Fig. 4a. In the figure, an input pulse v_a is launched into a delay line of NOT gates, where AND gates compare the outputs of gates (k, 2k), where k is the index number of K total NOT gates such that 2k_max = K. The AND-gate outputs drive a multi-input OR-gate, which outputs a pulse v_out of width w_out for τ_K > w_out, where τ_K is the delay through K gates. To illustrate the pulse-width gain from this circuit, we plot examples of (v_a, v_out) and a scan of (w_a, w_out) in Figs. 4b-c, respectively. The resulting waveforms show approximate pulse-width doubling and the characterization of (w_a, w_out) has an average slope ∼ 2. However, the discrete nature of the AND-gate comparisons of the delay line in Fig.
4a creates regularly-spaced, small-scale discontinuities that are not resolved in Fig. 4c due to noise. Based on these discrete comparisons, we model the pulse-width gain as w_out = g(w_a) ∼ 2τ_1⌊w_a/τ_1⌋ + h(w_a − τ_1⌊w_a/τ_1⌋), (2) where τ_1⌊w_a/τ_1⌋ is a measure of w_a in single gate-delays, and h is a function that describes the width of an output pulse for a single gate. As w_in increases in Eq. [2], g is discontinuous and increases by steps of τ_1. We define h(0) = 0 such that, when w_in is an integer multiple of τ_1 (w_a − τ_1⌊w_a/τ_1⌋ = 0), the pulse-width gain is exactly 2. When w_a is not an integer multiple of τ_1, h provides a corrective term that describes the continuous growth of pulse widths, where the input to h resets at each multiple of τ_1. Thus, the function g has an average slope ∼ 2 with discontinuities spaced regularly by τ_1 and local slope(s) h′(w_in) in between each discontinuity.

For simplicity, we let h(w_a) = w_a such that the map w_{i+1} = M(w_i) = g(f(w_i)) is an example of a piecewise-linear system that exhibits noise-induced chaos. Noise-induced chaos occurs in chaotic systems that only show periodic or steady state dynamics without the presence of noise [1]. Similar models with fine-scaled discontinuities have been previously studied with the use of a SDLE, where with enough noise, these simulated chaotic maps exhibit characteristics of their macroscopic map structures [2]. A different choice for h, such as a nonlinear function, can yield chaos without noise, but we choose the simplest model with noise to demonstrate the experimental observations. Noise in the FPGA causes jitter in w_i, where we measure the jitter to be approximately Gaussian with standard deviation (STD) σ ∼ 90 ps. We simulate the map w_{i+1} = M(w_i) using Eqs. [1][2] with m = 1, τ_n = 12 ns, τ_1 = 0.3 ns, and additive white Gaussian noise at every iteration (STD = σ). The simulated return map is shown in Fig. 5a with a 1D structure similar to a tent map with slopes ∼ ±2. The probability density of the map is also non-uniform, where clustering occurs at evenly-spaced intervals. This differs from the experimental density because, in the model, we can guarantee that τ_n = Lτ_1, for integer L. Even though there are an integer number of logic gates in the experimental τ_n, heterogeneities in gate delays due to physical effects and FPGA routing cause τ_n = Lτ_1 ± τ, where τ is a cumulative timing difference. We implement a timing difference in the model (not shown) and note that clustering in the return map changes with τ.

We calculate the simulated SDLE λ_sim(ε) for the return map in Fig. 5a using the same method applied to the experimental data. The result is plotted in Fig. 5b, demonstrating that λ_sim(ε) also has microscopic features ∼ O(τ_1) that oscillate about the average divergence of the macroscopic map. Thus, the results from the simple model are quantitatively similar to those from the experimental Boolean system, where more agreement can likely be achieved by introducing individual gate-delay heterogeneities, using a macroscopic slope m = 1.95, and exploring nonlinear functions for h. Interestingly, the structures in the simulated SDLE become more (less) pronounced for lower (higher) levels of noise, and for sufficiently high noise levels, the structures are no longer detectable. This suggests that some physical systems may have underlying scale-dependent structures that may or may not be detected depending on noise levels. A minimal numerical sketch of this model simulation is given below.
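The sketch below is an illustrative reading of the simulated model described above (not the authors' code): Eq. (1) with m = 1 for the folding, Eq. (2) with h(x) = x for the gain, and Gaussian jitter added at every iteration, with the parameter values quoted in the text.

```python
import numpy as np

# Minimal sketch of the noise-driven map simulation: w_{i+1} = g(f(w_i)) + noise.
#   f(w) = w                for w <= tau_n   (folding, Eq. (1) with m = 1)
#   f(w) = 2*tau_n - w      for w  > tau_n
#   g(w) = 2*tau_1*floor(w/tau_1) + (w - tau_1*floor(w/tau_1))   (Eq. (2), h(x) = x)
# Parameters from the text: tau_n = 12 ns, tau_1 = 0.3 ns, jitter sigma ~ 90 ps.

TAU_N = 12.0   # ns
TAU_1 = 0.3    # ns
SIGMA = 0.09   # ns (90 ps)

def fold(w, tau_n=TAU_N):
    return w if w <= tau_n else 2.0 * tau_n - w

def gain(w, tau_1=TAU_1):
    steps = np.floor(w / tau_1)
    return 2.0 * tau_1 * steps + (w - tau_1 * steps)

def simulate(w0=5.0, n_steps=200_000, sigma=SIGMA, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.empty(n_steps)
    w[0] = w0
    for i in range(n_steps - 1):
        nxt = gain(fold(w[i])) + sigma * rng.standard_normal()
        w[i + 1] = min(max(nxt, 0.0), 2.0 * TAU_N)   # sketch-level guard to keep widths physical
    return w

w = simulate()
print(f"simulated pulse widths: mean {w.mean():.2f} ns, range [{w.min():.2f}, {w.max():.2f}] ns")
```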
Furthermore, introducing irregularly-spaced discontinuities in the model (not just at τ_1 but at τ_1, 2τ_1, etc.) also blurs these structures and suggests that heterogeneities may be a mechanism that removes structures in the SDLE. In our experiment, the spacings between discontinuities can be tuned by moving the AND-gate inputs in Fig. 4a, and thus our 1D Boolean chaotic system is a good candidate to begin exploring these concepts.

In summary, we present an experimental chaotic system with a macroscopic 1D return map and microscopic, regularly-spaced discontinuities that are apparent in the structure of the SDLE. These discontinuities are the result of the discrete nature of the logic gates in our design on the FPGA that stretches pulse widths as part of a 1D map operator. Using a physically-motivated, simple model, we reproduce a similarly structured scale-dependent Lyapunov exponent that warrants additional experimental and theoretical study.

S.D.C. acknowledges the financial support of the US Army Aviation and Missile Research Development and Engineering Center and is thankful for discussions with Dr. Ned Corron and Dr. Jonathan Blakely. S.D.C. also thanks Dr. David Rosin for his procedures in Ref. [21].
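For completeness, a minimal sketch of the scale-dependent Lyapunov estimate used in the analysis above: pairs (w_i, w_j) whose initial separation falls inside a small window [ε, ε + Δε] are collected, and the divergence rate is obtained from a fit to their mean log-separation over a few iterates. This is an illustrative reading of the procedure described in the text, not the authors' analysis code.

```python
import numpy as np

def sdle(w, eps, d_eps, horizon=5):
    """Scale-dependent Lyapunov estimate from a pulse-width series w.

    Pairs (i, j) with eps < |w_i - w_j| < eps + d_eps are treated as neighbouring
    trajectories; their mean log-separation over `horizon` iterates is fitted to a
    line, and the slope is the per-iteration divergence rate (lambda(eps)/T in the
    notation above). O(n^2) pair search for clarity; truncate long series first.
    """
    w = np.asarray(w, dtype=float)
    n = len(w) - horizon
    d0 = np.abs(w[:n, None] - w[None, :n])
    upper = np.triu(np.ones((n, n), dtype=bool), k=1)
    ii, jj = np.where(upper & (d0 > eps) & (d0 < eps + d_eps))
    if len(ii) == 0:
        return np.nan
    log_dist = [np.mean(np.log(np.abs(w[ii + k] - w[jj + k]) + 1e-12))
                for k in range(horizon + 1)]
    return np.polyfit(np.arange(horizon + 1), log_dist, 1)[0]

# Example usage: scan eps in small steps over a simulated or measured series `w`.
# eps_grid = np.arange(0.01, 1.0, 0.01)          # ns
# curve = [sdle(w[:5000], e, 0.01) for e in eps_grid]
```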
2015-04-16T12:41:49.000Z
2014-12-02T00:00:00.000
{ "year": 2015, "sha1": "75abfed14b65729b8ce89099701c1b1234f18d3f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1412.1036", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "75abfed14b65729b8ce89099701c1b1234f18d3f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Mathematics", "Physics" ] }