Optical Coherence Tomography Angiography of Combined Central Retinal Artery and Vein Occlusion

Optical coherence tomography angiography (OCTA) is a new, noninvasive technology that enables detailed evaluation of flow in the retinal and choroidal vasculature. The authors believe this to be the first report to describe the optical coherence tomography angiography findings associated with combined central retinal artery occlusion (CRAO) and central retinal vein occlusion (CRVO).

Introduction

Combined central retinal artery occlusion (CRAO) and central retinal vein occlusion (CRVO) is a rare vaso-occlusive entity that has been associated with multiple etiologies and can cause devastating vision loss [1][2][3][4][5][6][7][8]. In the population without age-related cardiovascular risk factors, the majority of combined cases have been attributed to rheological causes, including thrombophilia, vessel wall inflammation, and mechanical compression [3].

Optical coherence tomography angiography (OCTA) is a new, fast, noninvasive imaging modality that allows detection of blood flow through the retinal and choroidal plexuses without intravenous dye injection [9]. This depth-resolved imaging technique affords insight into various retinal and choroidal diseases that is not available through other diagnostic modalities, such as fluorescein angiography (FA) [10]. OCTA is rapidly becoming an indispensable tool for describing a spectrum of pathologies, including macular degeneration, diabetic retinopathy, glaucoma, and choroidal neovascularization [10,11]. Recently, it has been utilized as an adjunct tool to characterize retinal venous or arterial occlusion [10,12,13]. The authors believe this report to be the first to describe the OCTA findings associated with combined CRAO and CRVO. The commercially available Cirrus 5000 with AngioPlex (Zeiss, Jena, Germany) was used, without any subsequent image modification or processing.

Case Report

A healthy 69-year-old female presented to the Emergency Department with sudden, painless visual loss that started immediately following cataract surgery with retrobulbar anesthesia in the left eye (OS) nine days prior to presentation. The patient denied jaw claudication, temporal headache, scalp tenderness, or visual loss in the right eye (OD). Immediately following the event, the patient underwent a work-up which included a transthoracic echocardiogram (TTE), electrocardiogram (EKG), carotid ultrasound, erythrocyte sedimentation rate (ESR)/C-reactive protein (CRP), computed tomography (CT), and magnetic resonance imaging (MRI) of the head. All tests were within normal limits. A complete ophthalmologic exam was performed. Best corrected visual acuity was 20/40 OD and hand motion OS. Intraocular pressure measured by Tono-Pen XL (Reichert Technologies) was 18 mmHg OD and 19 mmHg OS. Full ductions were present without pain. Pupils were equally round with an afferent pupillary defect OS. Anterior segment examination in the right eye was significant for a nuclear sclerotic cataract, and examination of the left eye revealed corneal edema, trace cell, +1 flare, and a well-centered intraocular lens. Fundus examination by indirect ophthalmoscopy was unremarkable OD. Funduscopic exam OS demonstrated mild disc edema, macular edema, whitening of the macula, subtle tortuosity of vessels, and flame-shaped hemorrhages and cotton wool spots in all quadrants (Figure 1).
Spectral domain optical coherence tomography (SD-OCT) was performed OS and showed increased hyperreflectivity and edema of the inner retina with disruption of the ellipsoid zone (EZ) (Figure 2). OCTA revealed an absence of flow in the foveal and perifoveal area in the superficial and deep retinal capillary plexuses (Figures 3(a) and 3(b)). In contrast, there was minimal alteration in choriocapillaris and choroidal vascular flow (Figures 3(c) and 3(d)).

Discussion

A combined CRAO and CRVO is a rare entity, and the etiology is incompletely understood. Although cardiovascular diseases, hypercoagulopathy, and inflammatory diseases are potential risk factors, our patient presented with a combined occlusion, without any history of systemic disease, following cataract surgery with retrobulbar anesthesia [1][2][3][4]. Several studies have reported the occurrence of a combined CRAO and CRVO following retrobulbar injections, suggesting it can be a severe complication of periocular anesthesia [14][15][16][17][18][19][20].

The exact mechanism of combined CRAO and CRVO has not been elucidated, but multiple mechanisms have been proposed to explain the association with retrobulbar injection. Combined occlusion could result from optic nerve sheath hematoma secondary to needle penetration or direct injection into the optic nerve sheath [16,21]. Another potential mechanism is compromise of one circulation leading to occlusion of the other. Brown et al. described two patients who initially presented with a CRVO and then developed a subsequent CRAO, suggesting that increased venous pressure could cross the capillary bed to impede arterial flow and cause ischemia [18].

Combined CRAO and CRVO is an ophthalmological emergency that should be recognized as a serious postsurgical complication because of its poor outcome. Without timely intervention, combined occlusion can lead to rubeosis iridis, neovascular glaucoma, retinal necrosis, periphlebitis of the central vein, and, eventually, permanent vision loss [18,22]. Various treatment modalities, including triamcinolone, bevacizumab, and hyperbaric oxygen therapy, have been attempted to reverse the pathology, with limited success. However, Vallée et al. demonstrated that timely intervention with fibrinolytics may restore retinal perfusion with visual improvement [2,4,22].

In the current report, OCTA showed that vascular flow was interrupted in both the superficial and deep retinal plexuses OS (Figures 3(a) and 3(b)). In contrast, the choriocapillaris and choroidal vascular flow were minimally affected (Figures 3(c) and 3(d)). Together, these results suggest that the occlusion was limited to the retinal circulation without significant involvement of the choroidal circulation. Fluorescein angiography would have allowed assessment of the macular flow impairment; however, OCTA additionally enables visualization of the flow disruption in the superficial and deep retinal capillary plexuses. With the depth of vascular disruption as a new metric for assessing disease severity, OCTA can provide more information regarding visual prognosis for this condition and other retinal vascular diseases.

This case demonstrates the clinical features of combined CRAO and CRVO, imaged with OCTA, following retrobulbar anesthesia associated with cataract surgery. OCTA technology can facilitate diagnosis and assessment of the extent of combined CRAO and CRVO, as it enables discrimination between the superficial and deep retinal vasculature.
Additional advantages of OCTA compared to FA include faster image acquisition and no risk of systemic allergic reactions [9,23,24]. In conclusion, OCTA is a new, fast, noninvasive imaging technology that has enabled improved understanding of the pathophysiology of many retinal vascular diseases, including combined CRAO and CRVO. To the best of our knowledge, this is the first reported case that describes the OCTA findings associated with combined CRAO and CRVO. Future studies with OCTA will hopefully illuminate additional features of combined CRAO and CRVO and provide a better understanding of this complex disease.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.
Now into big strides: report on statutory status for the South African Library and Information Services sector

The road to acquiring statutory status for the Library and Information Services (LIS) sector in South Africa has been traversed numerous times over the last sixty to seventy years. In more recent years, there has been renewed vigour to explore the acquisition of statutory status for the sector in South Africa. As part of this process, a number of studies have been conducted. This paper examines the latest drive by the Library and Information Association of South Africa (LIASA) to solicit the views of a cross section of LIS personnel with regard to the sector acquiring statutory status. The acquisition of statutory status is earmarked as a priority in the recently developed Strategic Directions 2010-2014 document of LIASA. At the 2009 LIASA Conference, a clear mandate was given for a national survey to be conducted to solicit the views of personnel who work in the LIS sector with regard to the said issue. The authors administered a short questionnaire to a sample population representing all categories of staff, irrespective of whether they belonged to an association or not. The questionnaire was administered using Survey Monkey. This paper reports the results of that survey. Given the overwhelming support for the acquisition of statutory status, the authors examined significant elements that would need to be crafted into the governance structures of a statutory body for the sector.

Introduction and background

Attempts to acquire statutory status for the Library and Information Services (LIS) sector in South Africa date back to the first half of the twentieth century (Louw 1990; Raju 2006). Between then and the present there have been intermittent peaks of dedicated focus on acquiring statutory status. In 2004, there was a concerted effort by the Library and Information Association of South Africa (LIASA) to drive a new investigation into acquiring statutory status for the sector. In 2006, Raju was mandated by LIASA to lead this new investigation. From then to date, there have been a number of minor studies and report-backs at LIASA conferences and in publications in LIS journals on the investigation into the acquisition of statutory status for the LIS sector in South Africa. The current investigation into statutory status for the LIS profession builds on the research conducted by Raju (2006). The publication from this research effort gave, inter alia, a historical account of the predecessors of LIASA and their efforts to provide effective and efficient representation of the LIS sector. Further, the study presented three possible options for pursuing statutory status:

- acquiring statutory status via new legislation that is specific to the discipline;
- acquiring statutory status via existing legislation, namely the LRA 66 of 1995, which will result in unionisation; and
- acquiring statutory status via an independent legislation and the LRA (Raju 2006).
Since then (2006) to the present, the issue of statutory status has been on the agenda of LIASA and has become a priority. The principle of a professional association being relevant has always been the underpinning factor for representation of the LIS sector. The issue of relevance was tested in early 2008 when an Indaba was held. The Executive Committee and the Representative Council of LIASA, together with other leading role players within the LIS sector, met at this Indaba to determine, inter alia, the effectiveness and relevance of LIASA. At the end of the engagement, LIASA's relevance was confirmed. However, the issue of statutory status was by far the most critical issue that had to be addressed.

Taking its cue from the Indaba, the LIASA Executive Committee recommended to the 2009 Annual Conference the acquisition of statutory status via new legislation specific to the sector (LIS). It was further recommended that a referendum be conducted on the matter to confirm or reject the views of the Executive Committee. The rationale for the "referendum recommendation" was the reality that LIASA represented a small proportion of those who work in the sector, whereas the envisaged body would represent the interests of all who worked in the sector. It was acknowledged by the Executive Committee that a referendum was an enormous task, which included developing a register of all who worked in the sector and administering the referendum to those on the register. Given the lack of capacity to complete the task, the Committee explored the possibility of outsourcing the development of the register and the conducting of the referendum. Unfortunately, the quotation of R2m to conduct the entire referendum process was far beyond what LIASA could afford.

The desperate need to acquire statutory status is articulated by Raju (2011: 12) when he states that those who are brave enough to pull their heads out of the sand will admit that the Library and Information Services profession is in distress. The closing down of library schools, the appointment of unqualified staff to provide an information service and to engage communities in critical issues such as information literacy, the "greying" of the profession and the lack of holistic coordination of the profession all contribute to this distress. Raju (2011) goes on to state that the sector is in dire need of reform or revitalisation and a legislative process that would bring essential cohesion and "control" to the profession.

In 2010, LIASA explored the possibility of conducting a pilot study to determine support for the acquisition of statutory status. A sub-committee of the Executive Committee was formed with the mandate to conduct a pilot study. The Western Cape was mooted for the pilot study as the chair of the sub-committee was from the region. Preparations were made for the implementation of the pilot study. The sub-committee was midstream in developing the register and reported this progress to the 2010 LIASA Annual Conference. There was objection from the conference delegates to the pilot being restricted to the Western Cape. The recommendation of the Conference was that the sub-committee test the opinion of the sector at the national level.
This paper examines the necessity of acquiring statutory status for the LIS sector. It also briefly looks at professions that are governed by legislation. The authors report on the national survey, its methodology and findings. Given the support for a statutory body, the paper examines what are interpreted as significant elements that need to be crafted into the governance structure of a LIS statutory body.

2 Why the need for statutory status for the LIS sector?

The LIS sector in South Africa, like that in most other countries, is fragmented in terms of the clientele that it serves. There is further fragmentation by the categorisation of staff that it employs. These fragmentations are exacerbated by the diversity in terms of representation of staff. There are a number of trade unions and staff associations that represent the industrial interests of workers within the sector. Professional interests are addressed by the professional association, LIASA. However, LIASA is a voluntary organisation and its current low membership raises questions about its representivity of the LIS sector and allied professions. Despite having one of the most progressive constitutions for a professional association within the sector, LIASA cannot and does not claim to represent and/or protect the views, functionalities and interests of all within the sector.

LIASA has acknowledged and debated the fact that there is no single body representing the profession in its totality, that is, all individuals who are working in the sector and the services that the sector provides to the various communities. This lack of representivity has been debated at LIASA conferences. Given the significance of the issue, LIASA commissioned an investigation into the way forward with regard to representivity of the LIS sector. The investigation recommended the acquisition of statutory status for the LIS sector.

Given the lack of comprehensive representation of the sector, its fragmentation, the continuous erosion of the credibility of the profession and of the personnel who work within the sector, and other negative factors, it becomes clearly evident that the sector is in dire need of resurrection (Raju 2006). Therefore, it is imperative that there is a legislative process that significantly contributes to this resurrection or upswing of the profession. This upswing would be beneficial to the profession and the country as a whole, as information, which is the core business of libraries, is essential for all forms of development. It is important not to lose sight of the fact that statutory status is not only a significant issue for the profession; it is in fact, as indicated above, a national imperative (Raju 2006). Hence the acquisition of statutory status has been identified as a key strategic goal in the newly developed Strategic Directions 2010-2014 document (LIASA 2010). This quest for the acquisition of statutory status is also built on the recommendations of the Department of Arts and Culture and the National Council for Library and Information Services, who state that "LIASA should be registered as [a] statutory body in order to regulate and give professional status to the LIS sector" (2009: xxii).

What does a statutory body provide?
The assumption that a legislative process will contribute to an upswing in the profession is based on an examination of other professions that are governed by statutory bodies. For professions that have statutory status, the primary objective of the relevant legislation is to protect the interests of the public and regulate those who enter (for employment purposes) the profession. Other significant objectives common to statutory bodies representing a profession include:

- promoting the discipline;
- regulating the entrance of personnel into the sector;
- determining standards of professional education and training; and
- setting and maintaining excellent standards of ethical and professional practice.

Brief examination of professions governed by legislation

The education profession is an example of a profession that is governed by legislation: Act No. 31 of 2000, entitled the South African Council for Educators Act. The governing body of the education profession is the South African Council for Educators (SACE) (Republic of South Africa 2000a). The Act addresses the issues mentioned earlier (namely, promoting the discipline, regulating those entering the profession, standards of professional education and such). The authors found Section 21 of the Act compelling, as it deals with the issue of compulsory registration of educators. The section reads as follows:

21(1) A person who qualifies for registration in terms of this Act must register with the council prior to being appointed as an educator. (2) No person may be employed as an educator by any employer unless the person is registered with the council (Republic of South Africa 2000a: 16).

Such mandatory regulation would be a major paradigm shift and would, in the opinion of the authors, give new direction and take representation of the LIS sector to entirely different levels of effectiveness.

The authors examined a number of other professions that are governed by statutory bodies. The example of the engineering sector was considered to have significant synergy, in terms of structure, with the proposed statutory body for the LIS sector. The statutory body for the engineering sector is the Engineering Council of South Africa (ECSA), which was established in terms of the Engineering Profession Act (Act 46 of 2000) and derives its powers and responsibilities from this Act (Republic of South Africa 2000b). The mission of the Engineering Council, as set out in a document by the Council on Higher Education (2003: 8), is to ensure, through a cooperative process of quality assurance, that persons wishing to enter the profession are educated and trained according to widely accepted standards, so as to be able to render a professional service for the benefit of the public and the country as a whole.

The mandate of ECSA, as documented by the Council on Higher Education (2003: 8), is, inter alia, to:

- provide for separate categories of registration, i.e. "Professional Engineer", "Professional Engineering Technologist", "Professional Certificated Engineer", "Professional Engineering Technician" and other specific categories, respectively;
- make provision for the reservation of work exclusively for registered persons;
- draw up "Codes of Practice" in addition to the normal Code of Conduct;
- act in the public interest, beyond registered persons; and
- engage in accreditation, including accreditation of programmes offered by providers other than universities and universities of technology.
It is the opinion of the authors that the principles encapsulated in LIASA's mission and the mandate for the acquisition of statutory status are critical issues necessary for the upswing of the LIS sector. The provision of a professional service for the benefit of the public and the country underpins LIASA's Strategic Directions 2010-2014. It is these issues (that is, the benefit to the public and country) that have shaped the development of the current mission and vision of LIASA. It is envisaged that the "LIS statutory body" will be over-arching, with LIASA being the professional arm or a sub-body. Further, the aforementioned core principles are already the bedrock of the LIS sector and would have to be transferred to the envisaged statutory body as one of its building blocks.

Methodology

In implementing the mandate of the 2010 LIASA Conference (that is, to test the views of the sector), the sub-committee investigating the acquisition of statutory status decided to conduct the survey at a national level. As indicated earlier, LIASA does not have the capacity or the funds to solicit the opinion of all who work in the LIS sector with regard to the issue of acquiring statutory status. Therefore, and as supported by Strydom (2005: 194), it was decided to solicit the opinion of a "small portion of the total set". Strydom (2005) goes on to state that the major reason for sampling is feasibility, as complete coverage of the total population is seldom possible. Kaniki (2006) asserts that the main concern in sampling is its representativeness. He goes on to point out that "the aim is to select a sample that will be representative of the population about which the researcher aims to draw conclusions" (Kaniki 2006: 49). Taking the cue from Strydom (2005) and Kaniki (2006), the authors identified a sample population that was representative of the LIS sector as a whole. The population identified for this survey included staff from the academic library sector including the library schools, the national library (both campuses: Pretoria and Cape Town), six metropolitan libraries and a random sample of special libraries.

The authors are of the opinion that this sample population provided a balanced cross section of the different levels or categories of staff employed in the sector. Further, the development of a list of contact details (email addresses) for this sample population was thought to be realistic and manageable. However, the actual process of acquiring the email addresses of staff from this sample was more complicated than anticipated, primarily because of the lack of cooperation from institutions. Notwithstanding this, the authors did accumulate more than 2 000 email addresses.
5.1 The survey method

Mangione and Van Ness (2009: 476) point out that a mail survey can be especially good when (1) the researcher has limited resources to help conduct the survey, (2) the questions can be written in a closed-ended style, (3) the research sample has a moderate to high interest in the topic, and (4) the research objectives are modest in length. As indicated earlier, the authors had very limited resources and therefore had to rely on a survey to solicit the views of the respondents. In keeping with the aforementioned guidelines of Mangione and Van Ness (2009), the research had a very simple objective: to test the views of the respondents with regard to the LIS sector acquiring statutory status. The core item in the questionnaire was "Do you support statutory status? Yes or no". Further, all of the items in this short questionnaire were closed-ended, again conforming to the aforementioned guidelines. In terms of the interest of the respondents in the topic, the authors were convinced that all within the sector would have a very high interest in the topic.

The questionnaire was administered electronically using Survey Monkey, with an explanation of what statutory status is, together with five items (see Appendix 1). The survey was administered from 5 August to 12 September 2011, with an extension to 19 September 2011. A notice of the extension, as well as reminders, was sent to the respondents.

The response rate

An examination of the IP addresses of the responses received revealed 35 unique sets of IP addresses. Two thousand and thirty-six emails were sent out. Although most of the emails were "delivered", a number of institutions "complained" about not receiving the questionnaire. Investigation into the "non-delivery" of the email to the intended recipients revealed that some institutions have a policy of automatic deletion of "suspicious" emails. Therefore, it is difficult to determine the exact number of emails that were delivered. Hence, it is not possible to present the percentage of responses in relation to the number of emails sent. The total number of responses received was 550.

Survey findings and discussion

It was important for the authors to test the opinion of both professional and support staff, as there is a large cohort of support staff within the LIS profession in South Africa. Figure 1 shows the distribution of respondents: professional staff, support staff and staff in library schools. Although the number of professional respondents was substantial (64.5% or 355 respondents), there was a significant number of responses from support staff (30.7% or 169 respondents). The significance of these 169 responses is that they provide a perspective from the support staff, who constitute a substantial proportion of personnel working in the sector. In the event of the sector acquiring statutory status, the support staff would be an important group contributing to the effective functioning of the sector. Further, the current cohort of support staff will serve as a critical mass for growth in the number of professional librarians in the sector.
Figure 1: Distribution of respondents (N=550)

There were four other items in the questionnaire that was distributed. These items comprised the core question and three others. Listed in Tables 1, 2 and 3 are the responses to three of the four items. With regard to the item relating to the sector in which the respondents were employed, the responses are captured in Table 1. The third item was "Are you a member of a Professional Association?"; the responses are captured in Table 3. The fourth item, "Do you support the acquisition of statutory status?" (which was the core item in the questionnaire), solicited an overwhelmingly positive response (see Figure 2).

The authors did a number of cross tabulations to interrogate the responses. In manipulating the data, the authors found that the larger proportion of the respondents did not belong to any professional association (see Figure 3). This revelation bodes well for the continued quest for statutory status, as a significant view comes from a cohort that does not belong to any professional association. The authors infer from this response that those who do not belong to an association, in the main, do not see the need for one. However, they do see the need for a statutory body. Of the 280 respondents (Figure 3) who did not belong to any association, 86.8% (243 respondents) supported the quest for statutory status. Only 13.2% (37 respondents) did not support the acquisition of statutory status. This overwhelming response reinforces the support for a statutory body for the LIS sector.

In terms of research conducted (Raju 2006; Khomo 2007), personnel in the LIS sector preferred a representative body that addressed their industrial concerns. Hence, in terms of these studies, the preference of LIS personnel is to belong to trade unions as opposed to a professional association. However, the respondents in the Raju (2006) study revealed that they would prefer a discipline-specific organisation to represent their industrial concerns. Given this finding, it was important to solicit the views of those who belonged to a trade union with regard to the acquisition of statutory status for the sector, despite the fact that they already belong to a statutory body, namely a trade union. As can be seen from Figure 4, 69.8% (or 389 respondents) of the respondents belonged to trade unions. Of these 389 respondents, 92.4% supported the acquisition of statutory status; only 7.6% of the 389 "trade union respondents" did not support it. Therefore, it can be inferred that those who belonged to a trade union would be amenable to belonging to a second statutory body, as is the case with educators who belong to SACE and to a registered trade union.

Despite the overwhelming support for the acquisition of statutory status, it was deemed necessary to have an aggregate view of the distribution of staff who did not support the acquisition of statutory status (47 or 8.5% of 550 respondents) to identify any noteworthy trends (see Figure 5). The most significant trend was the number of respondents who currently were not members of an association and who did not support the acquisition of statutory status. This issue has already been addressed. To reiterate, 91.5% of the respondents supported the acquisition of statutory status. This positive response is interpreted as a mandate to proceed with the second phase of the process, namely addressing the logistical issues relating to the registration of a statutory body.
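As a quick illustration of the cross-tabulation arithmetic reported above, the short Python sketch below recomputes the quoted percentages from the counts published in this paper. It uses only figures stated in the text and is not part of the original analysis.

```python
# Recompute the cross-tabulated percentages quoted in the findings.
# All counts are taken directly from the paper's reported figures.

def pct(part: int, whole: int) -> float:
    """Percentage of `part` in `whole`, rounded to one decimal place."""
    return round(100.0 * part / whole, 1)

total_responses = 550

# Respondents who did not belong to any professional association (Figure 3)
non_members = 280
non_members_for = 243      # supported statutory status
non_members_against = 37   # did not support statutory status

print(pct(non_members_for, non_members))      # 86.8
print(pct(non_members_against, non_members))  # 13.2

# Overall support for the acquisition of statutory status (Figures 2 and 5)
against_total = 47
print(pct(against_total, total_responses))                     # 8.5
print(pct(total_responses - against_total, total_responses))   # 91.5
```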
The way forward with statutory status

The authors examined a number of statutory bodies representing various professions with the intention of identifying a statutory body or bodies considered to bear the closest resemblance to the envisaged statutory body for the LIS sector, especially in terms of representing varying categories of staff within the sector and having similar training and development requirements. Using the identified statutory bodies (that is, the statutory bodies for the engineering profession and the education profession) as a framework, the authors gleaned strengths from these two statutory bodies in developing a mock structure for a statutory body for the LIS sector. For the purpose of discussion, this mock statutory body is referred to as the South African Council for Library and Information Services (SACLIS).

In terms of the Directory of ETQAS and professional bodies, there are certain characteristics or elements that are consistent across statutory bodies. Some of these characteristics include the mission of the statutory body, objectives, mandate, functions, and education and training (Council on Higher Education 2003). Characteristics such as the powers and duties of a council, and the composition of the council, have been gleaned from the relevant legislation.

Mock structure for the LIS statutory body

For the purposes of developing a mock structure for the LIS sector, the authors extracted characteristics from the Directory of ETQAS and professional bodies document and the respective legislation and applied these to the mock structure. The authors took the view that a mission and vision are more intimately defining and should be debated and derived by a representative group. However, characteristics such as objectives, mandate, functions, powers and duties of the council, and the composition of the council were extracted and applied to SACLIS.

Objectives of SACLIS

i. to provide for the registration of LIS personnel;
ii. to promote the professional development of LIS personnel; and
iii. to set, maintain and protect ethical and professional standards for LIS personnel.

Powers and duties of SACLIS

i. with regard to the registration of LIS personnel:
(a) provide for separate categories of registration, i.e. "Professional Librarian", "Professional Library Technician", "Certificated Library Assistant" and other specific categories, respectively. A person may not practise in any of the categories contemplated unless he or she is registered in that category;
(b) make provision for the reservation of work exclusively for registered persons;
(c) determine minimum criteria and procedures for registration or provisional registration;
(d) consider and decide on any application for registration or provisional registration;
(e) keep a register of the names of all persons who are registered or provisionally registered; and
(f) prescribe the period of validity of the registration or provisional registration.

ii. with regard to the promotion and development of the LIS profession:
(a) must advise the Minister on matters relating to the education and training of LIS personnel, including but not limited to (see Note 1):
(1) the minimum requirements for entry to all the levels of the profession;
(2) the standards of programmes of pre-service and in-service LIS education; and
(3) the requirements for promotion within the LIS system.
(b) research and develop a professional development policy;
(c) promote in-service training of LIS personnel;
(d) develop resource materials to initiate and run, in consultation with employers, training programmes, workshops, seminars and short courses that are designed to enhance the profession;
(e) compile, print and distribute a professional journal and other publications;
(f) set and audit academic standards for purposes of registration through a process of accreditation of LIS programmes at universities and universities of technology:
(f.1) accreditation powers are extended to include accreditation visits and the accreditation of programmes offered by providers other than universities and universities of technology;
(g) set and audit professional development standards through the provision of guidelines which set out post-qualification requirements for registration in the professional categories of registration;
(h) determine exit levels and education and training outcomes (outcomes-based competence);
(i) determine essential modules within curricula to address national imperatives;
(j) draw up "Codes of Practice" in addition to the normal Code of Conduct;
(k) have jurisdiction to act in the public interest, extended beyond registered persons; and
(l) determine the fees and increments regarding fees.

Composition of council

The council consists of members, appointed by the Minister, taking into account, inter alia, the principles of transparency and representivity (including race, gender and disability):
(i) ten registered persons, of whom at least
(ii) ten persons to be elected via regional and provincial structures to the national council:
(a) membership representation will be via election by the registered members;
(b) structure of member representation:

1. Regional representation

*Prescription: there must be a minimum of two (2) and a maximum of four (4) support staff on the regional sub-council. The same structure would be applicable to the provincial structure. There would be a maximum of five regions per province. A caucus of the aggregate of the regional structure would elect the provincial sub-council. The nine provincial sub-councils will nominate and elect the ten membership representatives on the national council. The national council will be much larger.

2. National representation

At the national level, nominated and elected by the provincial sub-councils, there would be two national sub-councils, namely the support staff sub-council and the professional librarian sub-council. The national support staff sub-council would be nominated and elected by the support staff on the provincial sub-councils. It would be constituted of sixteen (16) members who would address issues unique to support staff. The national council would have a minimum of two (2) and a maximum of four (4) support staff. By the same token, the national professional sub-council would be nominated and elected by the provincial sub-councils. The professional sub-council would address the professional issues of the sector and those which may arise from discussions at the national sub-council level.
Conclusion

It is almost a decade since the LIS profession embarked on the path of revitalising the profession through the acquisition of statutory status. The stimulus for this action was the motion adopted at the 2004 LIASA Annual General Meeting. Since then, the "quest for statutory status wheel" chugged along slowly for the following five years or so. The growing downward spiral of the profession seems to have cajoled the professional body and its membership into action: the profession is now taking big strides towards acquiring statutory status. The first port of call for the professional body was to seek the views of the rank and file with regard to the necessity of acquiring statutory status for the LIS sector.

It is clear from the survey conducted that the rank and file would prefer to belong to a statutory body that would govern the activities and functioning of the LIS sector. Given the findings of this survey, the mandate of LIASA annual general meetings, the vision of the LIASA Executive and the leadership of the organisation in general, there needs to be swift action by the LIASA sub-committee to lobby the relevant authorities for the development of appropriate legislation for the creation of a statutory body to govern the LIS profession. At no stage in all the attempts to acquire statutory status has so much ground been covered as by the current process. Therefore, it is absolutely imperative that the issue is driven to a conclusion.

Notes

1. The authors would like to draw a distinction between the statutory body NCLIS and the proposed SACLIS. The object of NCLIS is to advise the respective ministers on policy issues relating to information provision; the emphasis is on the end users. SACLIS will advise the minister on matters relating to the providers of information; the emphasis is on LIS employees.

Table 1: Distribution of respondents
Table 2: Membership of a trade union
Table 3: Membership of a professional association
Pale Western Cutworm Control on Wheat

L.J. DePew, Research Entomologist

Copyright 1986 Kansas State University Agricultural Experiment Station and Cooperative Extension Service.

The pale western cutworm (Agrotis orthogonia Morr.) is a pest of considerable importance on wheat and other cereal crops throughout most of the Great Plains region of the United States. It was virtually unknown before 1911 and apparently did not become a pest until the cultivation of range lands and the growing of grain crops became widespread on the prairies. In Kansas, a severe outbreak developed in 1936, and since that time outbreaks have occurred at various intervals. It is a sporadic pest that occurs in large numbers when conditions are favorable.

Generally, pale western cutworm infestations are limited to the extreme western tier of counties in Kansas. It is a typical dryland cutworm, preferring semiarid areas. All larval stages are subterranean in habit, feeding on the plant below the soil surface. As cutworms mature, they move along drill rows, cutting off tillers at or just above the crown. The greatest amount of damage occurs from April through June. Even when total damage is comparatively slight on a statewide basis, the pale western cutworm may be highly important to an individual grower whose crops are threatened.

The pale western cutworm is a univoltine species, having but one generation annually. Eggs are deposited in the soil during late September and early October. They hatch the following spring (February) and the larvae reach their full growth in late May and early June. At this time they form earthen cells, shrivel up and remain dormant for the summer. In early August they transform into the pupal stage and emerge as adult moths in late September or early October. After mating, females begin depositing eggs in the soil, thus completing the life cycle.

Natural enemies (predators, parasites, diseases) are known to attack cutworm larvae, but they appear to be of little value in Kansas. Consequently, growers must rely on chemical control to prevent economic crop losses. A test was conducted in western Kansas to evaluate several insecticides for pale western cutworm control on wheat.

Procedure

Four insecticides were applied to Eagle wheat in a randomized complete block design with three replications. Each plot was 33 ft. x 33 ft. (1/40 acre). Insecticides were applied May 9, 1980, at the rate of 15.5 gallons total spray per acre. Posttreatment larval counts were made 3 and 7 days following application by examining four 1-ft² samples in each plot. Plots were harvested and yields calculated in bushels per acre.
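As a sanity check on the stated plot dimensions and application rate, the short Python sketch below verifies that a 33 ft. x 33 ft. plot is indeed 1/40 acre and derives the approximate spray volume per plot. The unit constant (43,560 sq ft per acre) is standard and is not taken from the report.

```python
# Verify the plot-size arithmetic stated in the procedure.

SQFT_PER_ACRE = 43_560          # standard definition of an acre in sq ft

plot_side_ft = 33
plot_area_sqft = plot_side_ft ** 2        # 1,089 sq ft

print(SQFT_PER_ACRE / plot_area_sqft)     # 40.0 -> each plot is 1/40 acre

# Spray volume per plot at the stated 15.5 gal/acre application rate
gal_per_acre = 15.5
print(gal_per_acre * plot_area_sqft / SQFT_PER_ACRE)  # 0.3875 gal per plot
```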
Results

All insecticides at the indicated rates (Table 1) significantly reduced cutworms at 3 days posttreatment. Plots treated with Ambush at 0.20 lb a.i./acre had the fewest cutworms, but it was not significantly better than the other insecticidal treatments. Seven days after treatment, all treated plots had significantly fewer cutworms than the untreated check. No larvae were found in plots treated with Ambush at 0.20 lb a.i./acre. The other insecticides were less effective, but not significantly so. Insecticidal treatments gave slight numerical increases in grain yields over the untreated check, but the differences were not significant. Yields varied from 41.5 bu/acre for the check plot to 48.7 bu/acre for the 0.10 lb Pounce plots. Low cutworm numbers probably were influential in preventing greater yield differences. No apparent phytotoxicity was observed as a result of any of the chemical treatments.

New Controls Approved

Until recently, endrin was the only insecticide registered for pale western cutworm control. In 1983, however, the Environmental Protection Agency (EPA) granted a specific exemption under the provisions of Section 18 (FIFRA) for the use of permethrin (Ambush and Pounce) and chlorpyrifos (Lorsban) to control pale western cutworm on wheat in Kansas. All applicable directions, precautions and restrictions on the EPA-registered product label must be followed. The specific exemption for permethrin and chlorpyrifos use on wheat expires June 1, 1983.

Table 1. Evaluation of insecticides applied to wheat for pale western cutworm control and grain yields.
Understanding processes of risk and protection that shape the sexual and reproductive health of young women affected by conflict: the price of protection

Background: It is assumed that knowing what puts young women at risk of poor sexual health outcomes and, in turn, what protects them against these outcomes, will enable greater targeted protection as well as help in designing more effective programmes. Accordingly, efforts have been directed towards mapping risk and protective factors onto general ecological frameworks, but these currently do not take into account the context of modern armed conflict. A literature overview approach was used to identify SRH-related risk and protective factors specifically for young women affected by modern armed conflict.

Processes of risk and protection: A range of keywords were used to identify academic articles which explored the sexual and reproductive health needs of young women affected by modern armed conflict. Selected articles were read to identify risk and protective factors in relation to sexual and reproductive health. While no articles explicitly identified 'risk' or 'protective' factors, we were able to extrapolate these through a thorough engagement with the text. However, we found that it was difficult to classify factors as either 'risky' or 'protective', with many having the capacity to be both (e.g. refugee camps or family). Therefore, using an ecological model, six environments that impact upon young women's lives in contexts of modern armed conflict are used to illustrate the dynamic and complex operation of risk and protection, highlighting processes of protection and the 'trade-offs' between risks.

Conclusion: We conclude that there are no simple formulaic risk/protection patterns to be applied in every conflict and post-conflict context. Instead, there needs to be greater recognition of the 'processes' of protection, including the role of 'trade-offs' (what we term 'protection at a price'), in order to further effective policy and practical responses to improve sexual and reproductive health outcomes during or following armed conflict. Focus on specific 'factors' (such as 'female-headed household') takes attention away from the processes through which factors manifest themselves and which often determine whether a factor will later be considered 'risk inducing' or protective.

Background

Blum and Mmari (2005) propose a conceptual framework which locates risk and protective factors associated with the sexual and reproductive health (SRH) of young people in developing countries in an ecological model [1]. This assumes that knowing what factors are likely to increase poor SRH outcomes for young women (including early first sex or early first birth), and how they operate, will help target young women at risk of negative health outcomes and help to design more effective programmes. The framework views young people as living in multiple milieus (macro/institutional, community, school, family, peers and individual), each of which may be a source of both risk and protection. This model is presented as functioning across developing contexts. Predominantly, work on SRH has taken a population approach, with limited consideration of its suitability for application to specific contexts, such as conflict. However, a small body of emerging evidence suggests that processes of risk and protection are, to a large extent, contextually determined and need to be understood in relation to distinct groups of people in specific contexts [2][3][4].
While there has been some exploration of specific risk and protective factors in relation to conflict and its short- or long-term impacts upon some aspects of health [5][6][7], based on this literature overview there appears to be little that is specific to SRH and young women. The inter-agency field manual on reproductive health in humanitarian settings highlights the importance of identifying protective factors within initial assessments; however, little is known as to how conflict may undermine protective processes or increase the 'trade-off' between risks that may be undertaken to increase protection [8]. Conflict is likely to have a powerful impact on the ecology of young people, and an ecological model enables a more comprehensive exploration of protective processes, with socio-cultural contexts becoming the focus of attention rather than individual attributes [7].

Risk and risk factors are often used and understood as notions of statistical risk, commonly associated with an increased likelihood of negative outcomes or problem behaviours. Protective factors operate in the context of risk and may be understood as the resources that support and assist an individual, family or community to manage, restrict and/or overcome difficulties and adversity, and reduce risk [9]. Such conceptualisations suggest that risk and protective factors are static and generalisable. However, the dynamic nature of a factor, and whether or how it serves to protect or increase risk, can only be understood when wider processes of risk and protection are identified (i.e. how it came to be that certain choices were made or that a particular context occurred). This may involve several dynamic factors, as well as combinations of risk and protective factors (or a trade-off between them), each of which can produce different outcomes for any individual.

Adopting this position, and building upon Blum and Mmari's (2005) ecological framework [1], an overview of the risk and protective factors highlighted by the literature on young women's SRH in conflict is presented. Yet rather than describing a neat set of risk and protective factors that can be used to underpin policy and practice responses, we present the complex processes of protection that often dictate whether a context or choice is risk inducing or protective. Through this work, we argue that we need to better understand and pay more attention to these processes, the trade-offs which occur and the price often paid by young women through them.

Methods

This paper adopts a literature overview approach (Grant and Booth 2009), providing a narrative of the relevant literature [10]. A search of the literature was performed using Web of Knowledge, limited to the title and abstract. The search period was from 2000 to 2013, and only papers in English were included. The search was driven by the overarching question 'What are the risk and protective factors associated with the sexual and reproductive health of young women in contexts of armed conflict?' The search strategy combined terms according to four broad themes. The search also involved hand searches of relevant journals, and the following up of citations, appropriate grey literature and key authors. Different types of literature (quantitative, qualitative, conceptual and discussion pieces, and grey literature) were included. The initial inclusion criteria focused on literature that explicitly discussed conflict, and sexual and reproductive health in relation to it.
In addition, we restricted our focus to literature concerned with females and included those of any age between 8 and 18 years. Where age ranges were not clarified, articles that made reference to adolescents, girls or youth were retained. Identified literature was read to ascertain risk and protective factors associated with various poor sexual and reproductive health outcomes, such as unmet need for contraception, early age at first birth, maternal mortality, infant mortality and sexually transmitted diseases. No explicit appraisal of the methodological quality of each piece was undertaken. Whilst we found little research that explicitly identified 'risk' and 'protective' factors, these were extrapolated by a thorough reading of the text. However, we found that these extrapolated factors were not clear cut, and many factors were found to be both protective and risky; for example, refugee camps offer protection against some poor sexual health outcomes through access to services, but they can also be risky environments for SRH, especially in relation to sexual violence.

An ecological framework of risk and protective factors, based on the pre-populated model used by Blum and Mmari (2005) (Figure 1), was used to analyse the literature thematically and present the results [1]. Identified factors were mapped according to the six environments also used by Blum and Mmari: macro/institutional, community, school, family, peer and individual. In a second stage, further literature was identified to populate certain levels of the framework which were under-represented in the literature accessed thus far. For the macro/institutional level, for example, we draw on literature relating to sexual and reproductive health in conflict more generally, and the consequences of conflict on health systems where these could also impact on young women. Literature describing the impact of conflict on education was used at the school level. Finally, literature concerned with the consequences of conflict on peer relationships and the civic participation of young people was included at the peer level.

Macro/institutional level environment

Unstable governance

Armed conflict can dramatically change the way young women access and benefit from (in theory, at least) structures such as legislative justice mechanisms, stable governance and policing, which protect them from sexual violence or coercion, as well as processes for participation and demonstration which allow young women to voice their concerns. Issues of insecurity and fear of reprisal and attack can therefore limit access to health services [11][12][13]. Similarly, progressive social policy for SRH, which facilitates sexual education and access to family planning methods, can be curtailed, as can livelihood safety nets that prevent destitution.

Poor infrastructure

Important health and education infrastructures that facilitate access to good quality SRH services can be severely affected during times of conflict. Verley (2010) reports how, during Shia-Sunni hostilities in Gilgit Town, Pakistan, obstetric service access and provision were severely reduced following the targeting, killing and exclusion of particular faith-based groups in hospitals and clinics, resulting in increased maternal morbidity and mortality [14]. During the Rwandan genocide an estimated 80% of health professionals were killed or fled the country, and medical supplies and equipment were heavily looted and destroyed [15].
More recently, attacks on professionals have caused a shortage of healthcare providers in Iraq, as many have left the country, causing disruption to services (Mowafi 2010) [16]. Similarly, during the civil war in Mozambique, Renamo specifically targeted health and education facilities in an effort to destabilise the country [17].

Institutional settings

The institutional setting of SRH services can change, as opposed to disappearing altogether, during conflict. For some young women, there is better access to SRH in refugee camps, or other displacement contexts, compared to their usual homes [18]. The Reproductive Health Response in Conflict Consortium and the Inter-agency Working Group on Reproductive Health in Crisis, for example, have spent recent years increasing the priority of reproductive health in crises and have developed a wide range of responses for organisations responding to humanitarian disasters [8]. Nonetheless, despite humanitarian efforts, young people's SRH can be neglected at the institutional level in contexts of displacement. Abdelmoneium (2010) and Wayte et al. (2008) both found that a focus on safe motherhood in Sudan and Timor-Leste, respectively, resulted in other aspects of SRH being side-lined because of limited resources and services, and the prioritisation of life-saving and emergency services [19,20]. New institutional settings like refugee camps can also increase the risk of Sexual and Gender Based Violence (SGBV); Stavrou's (2004) fieldwork in Angola identified the location of Internally Displaced Persons (IDP) camps close to military encampments as a contributory factor in the harassment faced by females [21]. Threats to sexual safety can also come from within camps as a result of the breakdown of social norms and deficient security [22,23], and perpetrators can include humanitarian staff. Fear of sexual attacks and harassment can place restrictions on female mobility and, in the example of displaced Syrian women and girls, there are reports of greater limits to their freedom and mobility in their host countries, with concern about attacks being greatest for those unmarried [24].

Livelihoods

Due to the impact of conflict on the macro-economic context, conflict can also profoundly change the livelihood strategies of households, which can put young women at risk of poor SRH outcomes. In rural Nepal, displacement caused by Maoists and threats from security forces disrupted and destroyed agricultural livelihoods [25]. As a result, a substantial proportion of conflict-affected girls reported themselves to be working in contexts (for example, hotels and wine shops) where they feared sexual abuse and exploitation. Conflict in Northern Uganda and Southern Sudan also resulted in the engagement of young women in transactional sex as a means of family survival where access to farming was restricted [26]. It should be noted that, in relatively rare circumstances, the engagement of women in new livelihood strategies, such as trading, can support a sense of empowerment and autonomy due to their increased economic importance in the household, although this empowerment is rarely translated into greater representation at the community level [26]. At the macro level, the evidence highlights the ways in which SRH services are undermined.
However, institutional adaptation to refocus and prioritise SRH provision within refugee camps and displacement centres has resulted in improved SRH access for some women, though such gains can occur in the context of increased sexual threat.

Community level environment

Each community has its own norms, beliefs and attitudes that determine how much autonomy and mobility a girl has, how easily she is able to enjoy and exercise her rights, whether she is safe from violence, whether she is forced into marriage, how likely she is to become pregnant, or whether she can resume her education after having had a child. (United Nations Population Fund (UNFPA), 2013, p36) [27]

The extract above describes the complex, and often contradictory, impact of cultural norms and values on the SRH of young women. During times of conflict the breakdown of social cohesion and norms in a society can increase the risk to young women of negative sexual outcomes [28], particularly when protective mechanisms located in family and community structures are disrupted. The normalisation of sexual violence, such as rape, is reflected in the identification of perpetrators as civilians and in its continuation into the post-conflict period [29]. Kalisya et al.'s (2011) analysis of HEAL Africa's hospital records in the Democratic Republic of Congo (DRC) over the period 2006-2008 (post-conflict) found that in the majority of sexual assaults of presenting child victims the offender was a civilian, and in 74% of cases was known to the victim [30]. A similar pattern was found for child survivors presenting at the Panzi Hospital in Eastern DRC [31].

Community level protection

When considering the community environment as a sphere of influence in relation to sexual violence, War Child (2013) suggests that local structures are at the centre of solutions to protect young women from sexual violence, and there are examples of communities coming together to provide protection for young people [29]. Kottegoda et al. (2008), for example, drew attention to the protective nature of traditional midwives in contexts of conflict when access to formal medical care and support was reduced [32]. Footer et al. (2014) found that health workers, community/village leaders and local health organisations in Eastern Burma were active in devising strategies to maintain the provision of health services, despite attacks [13]. Communities have also been key in ensuring the continuity of education, which is widely considered a key protective factor for young women. In Afghanistan, trusted female members of the community provided home schooling to girls during the Taliban's ban on female education [33]. Community action can also be vital in providing places of security and sources of support for children separated from their families. In the case of the night commuters in Northern Uganda, local volunteers with the Peace Foundation Charity helped secure safe sleeping arrangements for young women, also providing supervision and guidance [34].

Changing norms

Conflict, through the breakdown of traditional norms, has the potential to challenge or change harmful practices [8]. Rajasingham-Senanayake (2004), for example, observed that the challenge to gender norms posed by females' roles as armed combatants, income generators and household heads during conflict in Sri Lanka resulted in the increased agency of women, which continued into the post-conflict period [35].
By contrast, changes in sexual behaviours during times of conflict can set young women on a track of high risk behaviours that continue into peacetime [28]. Transactional sex, for example, which may be instigated for survival during war, might continue to be used for material provision in peacetime, either due to lack of options or to supplement other ways of securing income. Conflict can change norms of what is acceptable and what is a priority, although these changes may or may not be sustained in post-conflict times. Burman and McKay (2007), for example, note the marginalisation of young mothers after conflict (even when motherhood resulted from forced marriage) [36], as community members can become highly protective of gender and gender roles following conflict [8]. At the community level, both risk and protection can be seen to operate in different ways to either promote or undermine SRH outcomes for young women.

School level environment

Access to education

Accessing education, mainly through schools, has been identified as a key determinant and protective factor in relation to most measures of SRH in developing contexts [1,27]. However, it is well documented that conflict can significantly affect the school environment [29]. In 2011, 20 million out-of-school young people, with a roughly equal gender split, were living in countries affected by conflict [37]. In qualitative fieldwork conducted in Angola and Sierra Leone, girls discussed how their involvement with armed groups stopped them being able to attend school, with few having returned in the aftermath of war [21,38]. Indirect interruption to education, caused by damage to school infrastructure and possible loss of professional life as a result of conflict, has also been recorded in countries such as Mozambique, East Timor, Afghanistan and Sierra Leone [22,39,40]. Whilst several international conventions and resolutions stipulate the right of children to education, with no exceptions for periods of conflict and post-conflict, the tendency to focus on primary education can entail a relative lack of attention being paid to secondary schools in these settings [39]. In contexts of insecurity, the school environment can actually place additional risks on young women's SRH. In Mozambique, Northern Uganda and Burundi, schools have been sites of abduction by rebel and government forces [41,42]. In Sri Lanka, rebel groups carried out recruitment activities in nearby schools with the aim of persuading 'voluntary' enrolment [42]. In 2004, in Beslan, Russia, 1300 children and adults were taken hostage during the school day, resulting in the death of 329 people, including 189 students [7]. Young women can also be placed at increased risk of sexual violence or abuse on their journey to and from school, and sometimes from the very professionals who are meant to protect them. In West African refugee camps, teachers have been reported to bribe students with the promise of good grades in exchange for sexual favours [43], although this is not unique to such contexts. In rare circumstances, however, access to a safe school environment can improve in conflict. United Nations High Commissioner for Refugees (UNHCR)-supported education of Liberian children and young people in Guinean refugee camps was reported to be of better quality than the education received in Liberia in the period prior to the conflict (1980-1989) [44].
The protective role of education in conflict is also reflected in the inclusion of education in United Nations (UN) resolutions designed to ensure the security of children in contexts of armed conflict [42]. Despite recognition that education should continue to serve as a protective factor for promoting SRH outcomes during conflict, evidence shines a light on how it may also increase young women's sexual risk.

Family level environment

Parents

Parental figures play an important role in the transition from childhood to adulthood, including in relation to SRH. Across the global south, Blum and Mmari (2005) found that living with both parents and family stability/connection was a protective factor in relation to early sexual debut, conception and childbearing [1]. As role models, parents can, on the one hand, instil the importance of gender equality between men and women in relation to decision-making or, on the other hand, perpetuate the dominance of men in social relationships, resulting in unequal power relationships [27]. Family structures may help young women develop negotiation skills and encourage them to make their own decisions regarding life choices, including condom use.

Family structures

Conflict can increase the protective role of families, as young women tend to be at higher risk of rape, sexual exploitation and abuse when cut off from family structures [45]. In Angola and Sierra Leone, former girl soldiers frequently describe how their abductions were simultaneously accompanied by orphanhood when their parents were killed during village raids [21,38,41]. In the context of refugee camps, young women without families are the most vulnerable to sexual exploitation in exchange for monetary and material goods, including aid [43]. Practical logistics, such as where sanitation or cooking facilities are located, all have implications for the sexual safety of young women [46]. However, in Northern Uganda, families actually used separation as a strategy to protect children from negative sexual experiences. Young 'night commuters' are sent from IDP camps to spend nights in nearby towns to reduce the risk of sexual violence and abduction by rebel groups [47]. Nonetheless, the insecurity of young women in transit, at commuters' sites and in public places at night, combined with non-gender-segregated sleeping and a lack of adult supervision, means that girls still experience sexual harassment and abuse, including from male night commuters [48].

Role of families

In contexts where accessing formal education or health services is impossible or dangerous, conflict heightens the protective capacity of families as sources of information and providers of care. Families support access to care when young women are giving birth through traditional birth attendants (who can play an important supportive role), especially when all other forms of formal health care are inaccessible or have been destroyed [46]. Nonetheless, reliance on the family for sexual and reproductive health knowledge can increase the risk of misleading, inaccurate or incomplete information [27].

Household vulnerabilities

Conflict can take its toll on the protective nature of family structures through changing roles within the household. The absence of males from the household, through conflict mortality, imprisonment and military membership, can leave households vulnerable to poverty and result in the engagement of females in economic activities which increase the risk of poor SRH [49].
Young women may feel compelled to marry early, take on economic activities which put them at increased risk of SGBV, or engage in transactional sex to provide for their family when there are limited options for securing livelihoods through other means [25,27]. In the context of conflict, the avoidance of death, starvation and destitution is likely to be prioritised above the long term consequences of early motherhood. Conflict can also influence the interaction and relationships between family members; Catani et al. (2008) propose the idea that 'cycles of violence' do not just apply to the intergenerational context, but also to the transfer of behaviour from war to family violence. In their sample of Tamil youth, linear regression analysis revealed that previous exposure to war, measured by the number of events, was a significant predictor of the experience of family violence [50].

Early marriage

Whilst early and forced marriages certainly occur outside conflict affected regions, the literature we consulted reveals that such marriages within conflict affected regions have additional dimensions and complexities. Families often believe that marriage can provide security against the risk of SGBV during conflict, for example. Kottegoda et al. (2008), using semi-structured interviews, found that early marriage was described as a protective strategy used by families to reduce the risk of daughters being 'recruited' or abducted into military factions [32]. Similarly, Swaine and Feeny (2004) found early marriage was used as a strategy by families to protect girls from violence in Kosovo [51]. In Angola, married young women were actually reported to be less likely to be abducted during raids on villages [21]. Furthermore, amongst SGBV victims presenting at Panzi hospital in Eastern DRC, women and girls who were single and had never been married were six times more likely to be held captive for the purpose of sexual violence for more than 24 h in comparison to those married, abandoned or widowed [52]. Marriage can also be used by families as a form of justice to protect the honour of girls in the event of SGBV [31,32]. Anecdotal reports suggest that early marriages are increasing in Syrian families, and occurring at a younger age, as a result of conflict factors such as increased family poverty, female withdrawal from education due to barriers imposed by armed conflict and displacement, and the increased risk of sexual violence for unmarried adolescents [24,53]. Nonetheless, it has been noted that the changing nature of early marriage, driven by the conflict in Syria, can increase the risk of sexual abuse for women as economic and social ties are broken between families, and marriages are arranged outside long established social networks and without official marriage contracts [53]. Thus, evidence shows that early and arranged marriages may increase and/or decrease negative SRH outcomes in contexts of conflict. Parents, families and family structure are evidenced to play a significant role in managing, reducing or increasing young women's exposure to SRH risk, highlighting the complex and often contradictory nature of risk and protective factors and processes in conflict contexts.

Peer level environment

Peers

As individuals enter young adulthood, peers become an increasingly important influence, especially in relation to SRH [7]. While peers can create a negative culture and encourage risky sexual behaviours, they can also be a force for good.
However, there seems to be little reflection on the SRH risk and protective factors associated with peers in the context of conflict. What is known is that peer relationships are present in conflict, although the nature and sources of interactions are likely to differ. In the context of armed groups, the development of meaningful relationships may be difficult due to an atmosphere of insecurity, uncertainty and violence. Captives are taught and encouraged to punish other captives, with cases of forced beatings and killings being reported [54]. In these groups, young women can also find themselves one of several 'co-wives' to commanders, which, due to the 'protection' offered by these individuals, can result in competition for affection, resources and power [55]; the manipulation and navigation of these relationships are of great importance. Despite this, examples of the development of positive and long lasting supportive relationships in the context of rebel groups have been reported. Cheney (2007) describes the example of co-wives becoming close friends and confidants as they carry out their duties [54]. In Burman and McKay's (2007) study of reintegration in the aftermath of the Sierra Leone conflict, three returnee girl mothers were found to be living together [36]. In the context of refugee camps, limited resources such as food may similarly result in competition between peers, whilst interaction with individuals undergoing similar experiences may provide opportunities for support and solidarity.

Social interaction

Armed conflict has also been found to impact on social interaction and engagement in the post-conflict period. In Sierra Leone, Bellows and Miguel (2009) found individuals directly affected by violence were more likely than others to be involved in civic participation, such as being members of community and social groups [56]. In Sierra Leone, Denov (2010) found evidence of the creation of informal peer-support structures, where returning girl soldiers sought comfort and encouragement with other conflict affected young people, thereby reducing feelings of isolation [38]. The research procedure itself was found to facilitate this process, with research participants forming friendships. Despite limited evidence, insights into the impact of peer relationships on SRH outcomes suggest they may provide a source of support and protection for young women, or may further increase the struggle for securing protective resources.

Individual level environment

In the context of conflict, the gender, social status and age of young women increase the risk of sexual violence and other poor SRH outcomes [12]. Whilst rape has been used as a weapon for centuries, a new pathology of 'rape with extreme violence' is emerging [49]. Such acts, performed frequently by soldiers against women and girls (some very young), aim to cause maximum sexual trauma through injury, mutilation or the transmission of infection [57]. Young people are at greatest risk of abduction by military factions, and the vulnerability of girls due to their gender continues after conflict. Whilst the focus on child soldiers has been on boys, between 1990 and 2003 girls formed part of forces engaged in armed conflict in 38 countries [58]. It is estimated that 30% of RUF forces in Sierra Leone were made up of girls [59] and, in Northern Uganda, approximately one-third of child soldiers were female [58].
The not uncommon exclusion of girls from disarmament, demobilisation and reintegration processes also means former girl soldiers often have fewer opportunities to develop livelihood strategy skills. For the minority of girls who are included, the male dominated composition of these programmes, combined with severe overcrowding and lack of security, often means they are at risk of rape [58].

Victims or agents

Haeri and Puechguirbal (2010) warn that, in contexts of conflict, women are generally seen as victims lacking agency rather than as active individuals who have important characteristics that can make a difference to their circumstances [46]. However, some literature has challenged the very notion of victims' 'passivity', highlighting that even in contexts of captivity it is possible, where recognised, to evidence every victim's agency and resistance to some extent. This literature reframes young women from passive victims to active agents, able to draw upon personal strengths and resilience to develop strategies which maximise survival chances, including a potential strategy of 'passivity'. In the setting of military groups in Northern Uganda and Sierra Leone, McKay and Mazurana (2004) document how girls use their sexuality, sometimes encouraged by their families, to enhance their chances of survival [58]. They give examples in Northern Uganda where young women seek to 'marry' or to become pregnant by high commanders due to the associated privileges, such as exemption from hard labour. In the Revolutionary United Front, 'family units' form the basis of organisation, with resources being allocated to 'household heads'. Those that do not belong to a family must scavenge for survival. Girls can use sex and 'marriage' to bargain themselves into units and to gain access to food, water and other material goods (also found by Muhwezi et al., 2011; Burman & McKay, 2007) [28,36]. However, the distinction between expressions of agency and coercion is not always easy to make, as in the case when sexual favours are sought from girls by humanitarian workers in refugee camps in Liberia, Guinea and Sierra Leone in exchange for vital aid supplies, highlighting the inherently exploitative nature of the relationships that young women are exposed to in contexts of conflict [43]. The constrained agency and resilience in the actions of young women can have serious consequences for their SRH, especially in relation to motherhood. Yet, while such strategies put young women at risk of pregnancy, forced transactional sex and future exclusion in post-conflict communities [23,36], in the short term they can mean survival. Recognition that early marriage and early sex may have been strategic decisions made during war due to limited choices is key for post-conflict strategies which seek to respond to the long-term consequences of those decisions.

Discussion

The literature highlights the complex nature of how armed conflict impacts the different environments which increase or reduce the risk of poor SRH outcomes for young women; see Fig. 2 for an illustrated version which populates the ecological framework with the risk and protective factors identified in the literature that are associated with poor sexual and reproductive health outcomes in contexts of armed conflict. While there are some similarities with the risk and protective factors identified by Blum and Mmari (2005), it is clear that general models of SRH need reconceptualising in contexts of armed conflict [1].
Diversity in patterns of armed conflict, even within a single country, brings into question the presumed protection offered by factors such as proximity to family, access to health professionals or school attendance [60]. While risk factors such as 'alcohol use', 'knowing where to buy condoms' or 'perceived risk of contracting HIV', for example, were not discussed in the literature reviewed, it can be assumed that the nature of these factors will be affected by the context of armed conflict. Whilst the literature shows that conflict changes the ecological positions of young people, a lack of consensus exists around the protective nature of some of the factors discussed above. This raises questions of how we understand processes of risk and protection (i.e., how it came to be that certain choices were made or certain outcomes occurred). Knowledge of risk and protection has evolved separately, yet viewing them as distinct entities is unhelpful because of the often complex presence of both risk and protective factors which impact on one another. The factors and environments raised above may act to expose young women to risk, to increase it, or to protect against it, depending on the individual and context [61,62]. This is well illustrated by the context of family relationships discussed earlier, which have been shown to be protective against sexual violence at times, while at other times putting young women at increased risk of sexual violence through forced or early marriage, or transactional sex. In addition, factors which prevent some poor outcomes, like sexual violence, may at the same time increase the risk of other poor outcomes, like early marriage and early childbearing. Simple binaries need to be challenged, accepting the concepts' inherent complexities both in relation to each other and in resilient outcomes. Acceptance that there is no formulaic risk/protection pattern (i.e., a static set of 'risk' and 'protective' factors which are distinct and categorised) that can be applied for young women in every conflict-affected community is a starting point. This reflects not just the fact that post-conflict settings are 'different', but rather that the concepts of risk and protection are, by their very nature, dynamic, fluid and contextual. Rather than focusing on a static list of risk and protective factors, what appear to be important in these contexts are the 'processes' of protection: the role of 'trade-offs' and the perceived 'losses and gains' of actions and choices, the prioritisation of risks in understanding protection strategies, and the 'price of protection'. The literature clearly shows that there is often a 'cost' to securing protection, for example entering marriage at an early age to ensure security, which is often shortly followed by a risky early pregnancy, resulting in the prioritisation of some risks over others. In contexts of conflict, for example, the physical safety of oneself or one's family may be prioritised over immediate or longer-term sexual risks or social exclusion following early motherhood. Starvation or death comes today, whereas the consequences of sexual risks may seem distant [36]. This highlights the short- and long-term nature of protective strategies, as well as the 'price' of protection. Risks are often multiple and cumulative, exacerbating the impact of each stressor; this can lead to a spiral of overwhelming risk and adversity exposure [63].
When young women have few choices and resources, due to the impact of conflict on protective resources previously available from macro or community environments, sexuality remains one potential resource which they can draw upon [27]. At the highest levels of risk, protection is either non-existent or fails to counteract the 'poisonous effects of extreme adversity' (p.140) [64]. Conflict renders concepts such as 'rights' and 'dignities of citizenship' obsolete or secondary to saving lives and maintaining essential services [20,65]. Questions are also raised as to the nature of agency and choice, and the extent to which constrained choice is still choice. Is it possible for young women to be regarded as agentic beings while using their sexuality to access food and temporary security? The role of agency is often viewed as essential in securing assets for protection, with issues of power underpinning the ability to succeed or not [9,66]. Rutter (2001) suggests that key turning points, the opportunities and choices which might be offered, are the most significant factors in determining resilient outcomes [67]. Power and control are seen as defining the parameters of how, and to what extent, one can adapt to adversity. Others challenge this focus on personal agency, advocating the prioritisation of addressing structural oppression and social inequalities [68][69][70]. Seccombe asks, for example, 'Can families be expected to become resilient without significant structural change in society?' (p.389) [68]. The ecological examination completed above shows that attention needs to be paid to multiple environments and, more importantly, to the relationships between them. While young women may be placed at increased risk by institutional level factors in relation to a particular SRH outcome (such as health clinics being destroyed), for example, the additional risks created could be mitigated (or further increased) at other levels. Blum and Mmari (2005) conclude that studies identifying risk and protective factors for young people's SRH focus more on individual level factors than on contextual factors [1]. Although little is known about the structural and contextual factors which protect young women against poor SRH outcomes even in contexts of relative peace [1], it is known that these are significantly affected by conflict. Drawing protection from resources at a structural level may therefore not be an option. Young women may have to rely on personal or family assets which serve to protect and increase personal agency while at the same time also increasing vulnerability to SRH risks. Indeed, Petchesky (2008) argues for the need to reconnect 'bodies' to new communities in times of insecurity [65]. Across the literature there was a scattering of examples of women creating new communities of protection while in contexts of insecurity. For example, Petchesky (2008) reports that: 'In Darfur, where the traditional gender division of labour famously assigns women and girls the task of roaming to collect firewood, resulting in a very high incidence of rapes and assaults, committees of women leaders have organised "firewood patrols" which have, in turn, become a forum for discussing and resolving common concerns.' (p. 8) [65]. Connecting young women to each other and providing opportunities for action appear to be important actions to facilitate the development of grassroots strategies which support safe negotiation of SRH.
Providing young women and their families with access to the resources they need to protect themselves recognises the important role of both agency and structure in protection. However, it is not the role of 'health' or 'women's' professions alone to support these processes. It is important that professionals working in response to a wide range of concerns in conflict recognise the interconnected nature of SRH with livelihoods, education, gender equality and human rights, and the role that other types of intervention can play in facilitating good SRH [32,57,65]. Sexuality and SRH interface with all aspects of life, and therefore need a more integrated response. Efforts to protect young girls and support safe SRH practices should be mainstreamed within all responses to conflict, and vulnerable groups identified and supported. Humanitarian responses focus on meeting survival needs but frequently do not address the cause of, or reasons for, vulnerability [26]. If early marriage or transactional sex is used to secure livelihoods or physical protection, for example, then a focus on improving livelihoods and security might have the biggest impact on improving SRH outcomes. It is clear that conflict breaks down many protective factors across different environments that might previously have been in place. However, from the literature available it is difficult to confidently account for how some young women manage to safely negotiate positive SRH outcomes. None of the studies documented accounts of young women successfully negotiating SRH that did not involve putting themselves at risk of some poor sexual outcomes through engaging in risky behaviours. Not enough is known about the difficult choices young women make when there are no 'positive' (and safe) choices available (choices without the risk of significant costs in the future) in relation to their SRH. There also appears to be little consideration of the potential paths of resilience during conflict for young women and the protective factors which alter the trajectory from exposure to risk to poor outcomes [9]. It is clear that significant risks will be present in these contexts that may not be avoidable, and yet it is not clear what might prevent or 'buffer' the impact of such risks upon an individual and SRH outcomes. The role of post-conflict care in mediating or 'buffering' the long-term impact of exposure to such risks is therefore critical, although it is not clear whether there are informed strategies for facilitating this. The protective resources that a community itself may hold are not always recognised or appreciated and are sometimes unspectacular, but they can be found in the daily activities and struggles of people's lives [71]. Differences between risk and protection are sometimes only subtle, difficult to predict and only identifiable when family life (girls'/young people's lives) is examined in detail [72]. Ungar, taking a social constructionist approach to the resilience concept, emphasises the need to listen to marginalised and silenced voices, rather than just those of the privileged and powerful, so as better to understand localised definitions of resilience, risk and protection [4]. Interpreting and responding to what is heard poses a challenge, as it may not fit with western/professional values, or with ethical or personal beliefs, particularly around ingrained and sanctified notions of rights/oppression.
Humanitarian interventions run the risk of unintentionally propagating Western concepts as definitive knowledge and impairing the recovery and rebuilding process post-conflict. Framing young women solely as victims potentially hides or undermines their resilience and resourcefulness, for example [73]. Yet, research is needed which allows a contextual understanding of the protective factors which alter the trajectory from risk exposure to poor SRH outcomes for young women affected by conflict [9]. Identifying these processes has the potential to support millions of young women around the world to safely negotiate their SRH needs at a time when they may be prioritised by no one else.

Conclusion

This paper adds to the emerging literature on the SRH of young women affected by armed conflict by considering the impact that conflict can have on risk and protective environments. A literature overview of the risk and protective factors for SRH in armed conflict has formed the basis for this paper, with findings mapped onto an adapted ecological model to present the ways risk and protective factors, and processes, are evidenced to promote or undermine young women's SRH in conflict. Having considered the findings, we have argued the limitations of traditionally recognised, static, universal models and understandings of risk and protection, proposing that notions of risk and protection must be nuanced and understood as contextually dependent. We have argued the need for developing frameworks that are able to take account of the dynamic fluidity of risk and protection, so that processes and 'turning points' for achieving greater SRH for young women can be identified, understood and promoted. While acknowledging the important role of agency and choice in securing or undermining a young woman's SRH, we have pointed to the need to explore and reconceptualise the complex nature of individual agency set within wider structural influences that may shape or determine her ability to secure good SRH outcomes. We have discussed the dynamic relationship between individuals, their wider environment, and the complex and often contradictory ways in which protective or risk processes may play out within those environmental levels. This highlights the limitations of an individualistic approach to understanding and promoting SRH, and supports the need for ecologically based approaches to promoting SRH-protective environments for young women. This paper offers no easy answers to the challenges of improving SRH outcomes for young women affected by armed conflict; rather, it seeks to 'shake up' any taken-for-granted assumptions on risk and protection by providing insights into their complexity, pointing towards a need for further work. Such further work will need to take into consideration the processes of protection, the prioritisation of risks, risk trade-offs and the price of protection.
Theoretical relation between halo current-plasma energy displacement/deformation in EAST

In this paper, a theoretical model for calculating halo current has been developed. This work is novel in that no theoretical calculation of halo current has been reported so far; this is the first use of a theoretical approach. The research started by calculating points for plasma energy in terms of poloidal and toroidal magnetic field orientations. While calculating these points, the work was extended to calculate the halo current and to develop a theoretical model. Two cases were considered for analysing the plasma energy when it flows downward/upward to the diverter. Poloidal as well as toroidal movement of plasma energy was investigated and mathematical formulations were designed as well. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. The halo current was first established on the outer plate in the clockwise direction. The maximum generation of halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to establish a theoretical relation with experimental results so as to evaluate, in advance, the plasma behaviour in any Tokamak.

List of symbols

b: z-axis coordinate of current loop
B_tor (T): toroidal magnetic field
E(k): elliptic integral of the second kind
I_p (MA): total plasma toroidal current
k: elongation
K(k): elliptic integral of the first kind
R_0 (m): major radius
r_p (m): plasma radial length
dr_p: radial kink mode amplitude
Z: z-axis coordinate of calculated point

Introduction

During a disruption scenario in plasma, eddy and halo currents are considered to be the most important sources that arise as a result of this phenomenon [1]. The components primarily affected are the diverter, first wall (FW) and other main components, because the plasma is unstable against vertical displacement. In this scenario, the plasma moves upward and downward, resulting in plasma disruption, and as a result a halo current is generated helically. This halo current is produced in the scrape-off layer (SOL) and flows into the vacuum vessel through in-vessel components, which may give rise to large forces acting on the vessel and in-vessel components [2]. A number of research articles have been published investigating the halo current in other Tokamak devices, such as JET [3], JT-60U [4], and NSTX [5]. In the Experimental Advanced Superconducting Tokamak (EAST) [6][7][8], there have been failures of the feedback control in vertical displacement (VD) events caused by disruption. The EAST reactor was designed by the Institute of Plasma Physics, Chinese Academy of Sciences, P.R. China, and is considered to be one of the most advanced superconducting Tokamaks in the world. Recently, EAST has been upgraded and has achieved longer pulse generation in high-current mode. The main parameters of the EAST reactor are given in Table 1 [9]. EAST has demonstrated long-pulse plasma operation with toroidal field B_t <= 3.5 T and plasma current I_p <= 1 MA [10][11][12]. The EAST upper diverter has been upgraded with W/Cu plasma facing components (PFCs) with ITER-like W monoblocks [13]. The lower diverter has not been upgraded and uses graphite tiles for the first wall. The lower diverter has a central dome, but the upper diverter is not equipped with one.
In addition, an EAST disruption database has been built [14] and is useful for quickly selecting disruptive discharges and their relevant parameters. Nearly 27% of discharges terminated in a disruption [15]. In this reactor, sensors have been installed for measuring the halo currents at different locations, as given in Table 2. In these experiments, it was observed that the halo current first spreads out on the outer baffle plate, then moves to the dome and finally returns to the plasma. First, we developed model (2) to calculate the magnetic field produced by a circular current loop. Furthermore, we considered two cases to analyse the plasma when it flows in the upward/downward direction, as observed from the model (2) calculation data. Second, we theoretically calculated the horizontal/vertical forces connected with model (2). It was observed that the generation of halo current was large, with a strong field at the middle and weak or cancelling effects at the sideways positions. Third, during disruption, the magnetic field appears at the conducting points, which carry a large amount of magnetic flux. Therefore, the halo current model (18) has been developed using Eqs. (9) and (15) with specific parameters (r_p, z, z_p, u_shift, φ). A Matlab program has been developed to calculate the halo current and the magnetic field calculation points. The achievable maximum halo current was about 0.4 times the plasma current and its maximum TPF value was 0.65, as estimated by a set of sensors. The EAST halo current is 10 kA for one cassette, and the total estimate by model (18) is 400 kA. Some of the work that has already been published on halo currents is discussed above.

Model of magnetic field calculation points for Tokamak

In this research work, a new model for calculating different aspects of a Tokamak reactor has been designed. In the recent past, a number of works have been published on the shaping and geometrical description of plasma [20,21]. One important problem is to identify and simulate the plasma shape and control, including numerical calculations on elongated and shaped equilibria [22]. In this paper, a new model was developed on the basis of an applied mathematical approach [23], giving shape model (1), where k is the elongation, d is the triangularity, a is the minor radius, and R_0 is the major radius. Using algebraic techniques, since the magnetic field is B = B_X + B_Y, we obtained B_x and B_z using shape model (1) in the energy integral [24]. The calculated points are given as under. The magnetic field produced by a circular current loop [25] is

$$B_r = \frac{\mu_0 I}{2\pi}\,\frac{z-b}{r\sqrt{(a+r)^2+(z-b)^2}}\left[-K(k)+\frac{a^2+r^2+(z-b)^2}{(a-r)^2+(z-b)^2}\,E(k)\right],$$

$$B_z = \frac{\mu_0 I}{2\pi}\,\frac{1}{\sqrt{(a+r)^2+(z-b)^2}}\left[K(k)+\frac{a^2-r^2-(z-b)^2}{(a-r)^2+(z-b)^2}\,E(k)\right], \quad (2)$$

with modulus $k^2 = 4ar/\left[(a+r)^2+(z-b)^2\right]$, where R_0 = 1.7-1.8 m, B_0 = 3.5 T, and plasma current I_p <= 1 MA.

Plasma orientation and development of halo current

In the case of the plasma, it was observed that both balancing and unbalancing forces act upon it, as when the plasma torus is very close to the outboard wall. At this stage, the poloidal field takes different values at different radial locations and produces variable sideways forces as well. In this case, one side of the torus has higher values than the other side, and the poloidal field has different values at different locations.

Case 1

In this case, we considered the plasma at a position (R, Z) from the origin and assumed that, when the disruption occurs (Fig. 1), the plasma moves horizontally [26,27]. The governing equations are given as under, where Δx = cos α dx_p, and the plasma is in the range of 0 to 2π. Substitution of Eqs.
(2) into (9) gives the changes in magnetic field when the horizontal forces are applied to the plasma, giving the plasma position at different points, as presented in Table 3. For the static position, each cross section depends on a, b, r, and z. For one cross section, the peak values of r and z are (±5.2, ±4.37); therefore, (B_r(max,up), B_r(max,down)) = (0.3918, -0.3847). The current-loop magnetic field range was then (B_x, B_z)_max = (0.3862, 0.1698) for the selected degree of cos(α), and the plasma one-cross-sectional-area energy was calculated to be 8.0262e+004. For the tilting position, the peak values of r and z were (±5.4, ±4.17); therefore, (B_r(max,up), B_r(max,down)) = (0.4225, -0.4247). Hence, the current-loop magnetic field range was (B_x, B_z)_max = (0.4221, 0.1974) for the selected degree of cos(α), and the calculated plasma one-cross-sectional-area energy was 8.0258e+004 (see Table 3).

Case 2

In this case, we considered the plasma at a position (R, Z) from the origin and assumed that, when the disruption occurs (Fig. 2), the plasma moves vertically downward [26,27]. The governing equations are given as under, where Δz = -sin β dz_p, and the plasma is in the range of 0 to 2π. The changes in magnetic field can be described by substituting Eq. (2) into Eq. (15) to obtain the vertical forces applied to the plasma, and hence the plasma position at different points. As in Case 1, each cross section depends upon a, b, r, and z. For one cross section, the peak values of r and z were (±3.5, ±2.3); therefore, (B_r(max,up), B_r(max,down)) = (0.1865, -0.4652). The current-loop magnetic field range was then (B_x, B_z)_max = (0.4637, 0.1995) for the selected degree of cos(α), and the plasma one-cross-sectional-area energy was 8.0457e+004. For the tilting position, the peak values of r and z were (±3.8, ±2.5); therefore, (B_r(max,up), B_r(max,down)) = (0.2762, -0.3851). The current-loop magnetic field range was (B_x, B_z)_max = (0.5747, 0.2525) for the selected degree of cos(α), and the calculated plasma one-cross-sectional-area energy was 8.0484e+004 (see Table 4). During a VDE, the plasma area changes under the vertical and horizontal forces, and the magnetic field changes in the radial direction at the centre of the plasma, characterised by the major radius (R), the x-z displacement and the plasma radial/vertical lengths (z_p, r_p). During this plasma change, the magnetic field at each point can be calculated by model (2) (vertical and horizontal). The magnetic field appears at the conducting points, which carry a large amount of magnetic flux and are given below. At the conducting points, the generation of halo current was large, with a strong field at the middle and weak or cancelling effects at the sideways positions. Due to the horizontal and vertical forces, balancing and unbalancing forces appeared, and asymmetric plasma positions were expected along the toroidal coordinate. The poloidal halo current which balances the plasma vertical displacement was toroidally asymmetric. According to the plasma positions, these halo currents follow different poloidal flow paths along the toroidal coordinate.
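As an illustration of how such field points can be computed, the following is a minimal Python sketch, not the authors' Matlab program (which is not reproduced in the paper), evaluating the reconstructed loop-field formulas of model (2) using SciPy's complete elliptic integrals; the loop radius a, loop height b, current I and the evaluation point (r, z) below are example values only.

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def loop_field(a, b, I, r, z):
    """Radial and vertical field (B_r, B_z) of a circular current loop of
    radius a at height b carrying current I, at cylindrical point (r, z).
    Assumes r > 0 (the on-axis limit needs a separate expression)."""
    dz = z - b
    denom = (a - r) ** 2 + dz ** 2
    m = 4.0 * a * r / ((a + r) ** 2 + dz ** 2)   # SciPy takes m = k^2
    K, E = ellipk(m), ellipe(m)
    pre = MU0 * I / (2.0 * np.pi * np.sqrt((a + r) ** 2 + dz ** 2))
    Br = pre * (dz / r) * (-K + (a**2 + r**2 + dz**2) / denom * E)
    Bz = pre * (K + (a**2 - r**2 - dz**2) / denom * E)
    return Br, Bz

# Example with EAST-like numbers quoted in the text: R0 ~ 1.8 m, Ip ~ 1 MA.
print(loop_field(a=1.8, b=0.0, I=1.0e6, r=2.3, z=0.5))
```

Scanning such points over the plasma cross section and taking the extrema would reproduce the kind of (B_x, B_z)_max tabulations reported in Tables 3 and 4.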
Due to the asymmetry of the poloidal halo currents, a second sideways force also occurred, and the sum of these forces gave the sideways balancing forces as well. The total force includes horizontal and vertical components, and the summation of these forces acting along the plasma VDE direction gives the halo current of model (18). During the course of a disruption, halo and eddy currents are considered to be the main sources of the electro-mechanical loads that appear. Consequently, the halo current fraction and the toroidal peaking factor (TPF) in vessel components depend upon the halo current density. In MHD simulation problems, the plasma model comprises three regions, namely the core, the halo, and the resistive wall region integrating the plasma to the external vacuum magnetic field. In the EAST reactor, the plasma is inherently unstable against vertical displacement, and during upward and downward movement it creates a disruption along with large halo current generation. In this case, when the halo current flows into the vacuum vessel through in-vessel components, it produces large J × B forces acting on the vessel through the in-vessel components. The production and movement of the halo current are such that it first appeared on the outer plate in the clockwise direction, and the maximum generation of halo current was estimated to be about 0.4 times the plasma current. Figure 3 shows the evolution of the halo current and filament. The EAST halo current is 10 kA for one cassette, and a total of 400 kA was recorded by model (18). In EAST, Rogowski coils have been designed for both the upper and lower diverters to measure the disruption halo currents. The EAST upper diverter was upgraded with a new tungsten diverter consisting of 80 cassettes in the toroidal direction. Four upper diverter cassettes have been instrumented with a set of 10 small-cross-section Rogowski coils to determine where the halo currents enter and exit the diverter, and how much current flows through the water cooling tubes. In this paper, we have successfully performed a theoretical investigation of the relation between halo current and plasma deformation/displacement, as well as a theoretical calculation of the total halo currents.

Conclusion

The developed theoretical model calculates plasma cross sections through the B_x and B_z magnetic field points and the displacement, subject to the start of the VDE and the magnetic flux variations. The two derived conducting points also give an indication of the halo current percentages. This model can calculate the halo current theoretically during the disruption phases in a very short time. Furthermore, mathematical techniques have been developed which show the relation between halo currents and plasma displacement/deformation in the EAST Tokamak. A computational program has been developed to calculate the total halo current and the magnetic field calculation points. A theoretical estimate for each cassette has been calculated by the model. This model can be tested against experimental data from other Tokamak devices as well.
A Geodesign Decision Support Environment for Integrating Management of Resource Flows in Spatial Planning

Improving waste and resource management entails working on the interrelations between different material flows, territories and groups of actors. This calls for new decision support tools for translating the complex information on flows into accessible knowledge usable by stakeholders in the spatial planning process. This article describes an open source tool based on the geodesign approach, which links the co-creation of design proposals together with stakeholders, impact simulations informed by geographic contexts, systems thinking, and digital technology: the Geodesign Decision Support Environment. Though already used for strategic spatial planning, the potential of geodesign for waste management and recycling is yet to be explored. This article draws on empirical evidence from the pioneering application of the tool to promote spatially explicit circular economy strategies in the Amsterdam Metropolitan Area.

Introduction

With circular economy (CE) becoming a new sustainability paradigm (Geissdoerfer, Savaget, Bocken, & Hultink, 2017), strategies to reduce waste generation through better resource management have been climbing up the policy and planning agendas in numerous cities and regions. Improving waste and resource management entails understanding the interrelations between different material flows (e.g., organic waste, construction and demolition waste, plastics), territories (cities, regions, functional territorial units) and groups of actors (industrial actors along the cycle of a given material flow, waste management companies, regional and local authorities, civil society groups, builders and developers). This entails an increased complexity of interdependencies, relations and impacts of new kinds of circular processes and interventions that need to be considered in the decision-making process. Such complexity calls for new Spatial Decision Support Systems (SDSS) for translating the intricate information on material flows and related actors into accessible knowledge that can be used by stakeholders in the spatial planning process. SDSS typically combine tools from participatory Geographic Information Systems (GIS) with decision support tools, which have the capacity to animate and clarify discussions between stakeholders rather than just representing optimal results (de Wit, Brink, Bregt, & Velde, 2009). The geodesign approach is a widely used methodology for exploring and addressing complex territorial challenges at different geographical scales while cooperating with stakeholders in an iterative and bottom-up manner (Li & Milburn, 2016). Therefore, geodesign emerges as a suitable methodology for supporting planning for the CE. However, to date, it has hardly been applied in the development of territorial strategies for reducing the generation of waste and closing the loops of material flows. Given the above-mentioned complexity and the importance of material flows in this field, the application requires modifying the methodology in order to integrate methods and technologies suitable for exploring the volumes and geographies of material flows, the life cycle of materials, and governance analyses. Technological innovation, rapidly increasing computational power, new means of sharing data and information, and digital literacy have great potential to be effectively deployed in the pursuit of sustainability (Retief, Bond, Pope, Morrison-Saunders, & King, 2016).
The tool proposed in this article, along with its underlying methodology, addresses this challenge by integrating geodesign with the Urban Living Labs (ULLs) approach (e.g., Steen & van Bueren, 2017). ULLs are becoming increasingly popular for engaging citizens and key stakeholders in the process of knowledge co-creation and co-design of experimental solutions to urban challenges in a real-life context. While geodesign is already used for strategic spatial planning, its potential for waste management and the CE is yet to be explored. This article explores whether and how geodesign can be used to improve waste and resource management. It also describes a web-based open source tool that adapts geodesign for the purpose of spatial diagnosis and the elaboration of territorial and systemic eco-innovative strategies toward a CE: the Geodesign Decision Support Environment (GDSE). Section 2 outlines the theoretical background for the GDSE and builds on recent geodesign and living lab approaches and technology implementations in the field of spatial planning. Section 3 describes the geodesign-based GDSE methodology to support collaborative resource flow management. The methodology is applied within an ongoing living lab aimed at improving waste and recycling management in the Amsterdam Metropolitan Area (AMA; Section 4). Finally, conclusions on the usefulness and limitations of the GDSE are provided in Section 5.

Theoretical Background

CE is primarily driven by agreements between multiple actors to share resources, materials and infrastructure for as long as their physical properties allow. This increases the pool of stakeholders that could act together, who may create collective strategies to achieve greater benefits for everyone's interests. Mathematical models could theoretically be used to optimize the total sum of individual, environmental, social and economic benefits. However, in practice, modelling such a system accurately is too complicated. This type of modelling requires the integration of technology and analytical methods with new collaborative approaches for spatial decision-making. We propose an approach that builds on three elements: current technological advances and related analytical methods, the geodesign framework, and the ULL approach as a methodological environment for stakeholder involvement.

Technology and Analysis Methods

GIS are not only used for cartographic analysis but are increasingly being used for building narratives, qualitative storytelling and synthesis approaches with the goal of equity and justice (Sui, 2015). Although the usefulness of GIS in all stages of impact assessment has already been recognized (e.g., Eedy, 1995), it is still seldom applied in sustainability assessments (e.g., Sholarin & Awange, 2015). SDSS are used to help address similarly ill-defined problems and are defined as interactive, computer-based systems designed to support a group of users in achieving higher effectiveness in decision-making on spatial issues (Malczewski, 1999). They are meant to support rather than to replace human judgements, and to improve the effectiveness rather than the efficiency of a process (Uran & Janssen, 2003). Thus, they are intended to be advisory units that are better able to digest large amounts of data and can perform quick computations.
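To make this concrete, the following is a minimal sketch, with invented data and weights, of the kind of quick computation an SDSS typically performs: a weighted overlay of normalised criterion rasters to rank candidate locations (for instance, for a recycling facility). The criteria, weights and grid are hypothetical and only illustrate the mechanics.

```python
import numpy as np

# Hypothetical criterion rasters on a 4x4 grid, already normalised to 0..1,
# e.g. proximity to waste sources and buffer distance from housing.
rng = np.random.default_rng(seed=0)
proximity_to_sources = rng.random((4, 4))
buffer_from_housing = rng.random((4, 4))

# Stakeholder-agreed weights (assumed here for illustration; they sum to 1).
w_proximity, w_buffer = 0.6, 0.4

# Weighted overlay: each cell receives a composite suitability score.
suitability = w_proximity * proximity_to_sources + w_buffer * buffer_from_housing

best_cell = np.unravel_index(np.argmax(suitability), suitability.shape)
print("most suitable cell:", best_cell,
      "score:", round(float(suitability[best_cell]), 3))
```

In a real SDSS the rasters would come from GIS layers and the weights from negotiation between stakeholders, but the computational core remains this fast and transparent, which is what allows it to animate rather than replace the discussion.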
Decision-making tends to entail social and political conflicts, while also relating to values that reflect the cultural, historical and social norms deemed acceptable by a community (Jones & Morrison-Saunders, 2016). This is crucial for spatial planning and waste management, which are (1) connected to specific geographical contexts with intrinsic cultural, historical and social values, and (2) directly affect the environment and the society in a given territory. Currently, the most common combination of methods for assessing the impacts of potential resource flow changes includes Material Flow Analysis (MFA) and Life Cycle Assessment (LCA; e.g., Guinée, 2002). MFA is a systematic assessment of the flows and stocks of materials within a system defined in space and time (Brunner & Rechberger, 2016), and it provides a systems understanding of a particular state of resource flows. MFA is typically applied in the built environment (e.g., Crawford, 2011). Although MFA studies have always had explicit spatial and temporal boundaries (e.g., Stephan & Athanassiadis, 2017), what happens within those limits is rather considered a black box, where materials flow from inputs to outputs through various stocks and processes. These flows and processes are not typically described in great spatial detail, apart from a few attempted studies. For example, Roy, Curry and Ellis (2014) spatially allocated construction material flows within administrative units of Kildare County, Ireland. Wallsten (2015) used the context of the hibernating stock of subsurface urban infrastructure to demonstrate how social science approaches can provide hands-on advice for private and local actors involved in material recycling. Vivanco, Ventosa and Durany (2012) developed a model for the material and spatial characterization of waste flows, which included indicators potentially useful for assessing key policy strategies for waste management and for minimizing transport by locating adequate facilities. Even though there have been attempts to introduce a spatial dimension into the MFA methodology, the spatial granularity is very coarse and its usefulness in decision-making has not yet been validated. LCA is used to assess the environmental, social and economic impacts of products or services through all the stages of their lifetime in comparison to a baseline scenario (Taelman, Tonini, Wandl, & Dewulf, 2018). LCA intends to support decision-making, and the involvement of decision-makers throughout the entire study is therefore crucial in order to avoid the study addressing issues that differ from those the decision-makers deem important. Depending on the situation, it may be relevant to include other stakeholders who may be affected by, or can influence, the consequences of the decision (Weidema, 2000). Failure to involve stakeholders may result in controversies or may hamper the implementation of the suggested environmental improvements. Hence, decision-making in spatial planning and resource management should not be top-down and should include local stakeholders, especially if they are the ones most affected by the decisions made. Although LCA is mostly used for environmental impacts, it may also include several impact categories, such as social or economic impacts (Jeswani, Azapagic, Schepelmann, & Ritthoff, 2010). LCA also aims to include as many substances and compounds as possible, which is required to provide a full impact assessment.
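To illustrate the bookkeeping behind MFA, the following is a minimal sketch with invented flows (tonnes per year) between made-up processes; the core MFA constraint it checks is mass balance at every process, i.e. inputs equal outputs plus stock change.

```python
# Each flow: (origin process, destination process, mass in t/yr).
# All values are hypothetical and serve only to show the mass-balance check.
flows = [
    ("households", "collection", 120.0),
    ("collection", "recycling", 80.0),
    ("collection", "incineration", 40.0),
    ("recycling", "secondary_material", 70.0),
    ("recycling", "incineration", 10.0),
]

def imbalance(process, flows, stock_change=0.0):
    """Mass-balance residual for one process: inflow - outflow - stock change.
    A non-zero residual flags missing flows or inconsistent data."""
    inflow = sum(mass for _, dst, mass in flows if dst == process)
    outflow = sum(mass for src, _, mass in flows if src == process)
    return inflow - outflow - stock_change

for process in ("collection", "recycling"):
    print(process, "imbalance (t/yr):", imbalance(process, flows))
```

Here both processes balance exactly; with real data, non-zero residuals point at measurement gaps or unaccounted stocks, which is precisely the diagnostic value MFA offers before any LCA is attempted.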
The method is widely accepted and standardized in ISO 14040 (Technical Committee ISO, 2019). However, conducting an LCA requires an extensive amount of time and data that are often unavailable. Moreover, communicating the results usually requires an expert audience (Elia, Gnoni, & Tornese, 2017). This is not in line with typical geodesign workshops, which last only a few days. Thus, the integration of geodesign with living labs prolongs the study period and allows the use of more advanced impact assessment methods.

Geodesign

Geodesign has emerged as a relevant concept for furthering the development of enhanced SDSS. The use of SDSS for policymaking has changed over the last decades, reflected in an increased role for public participation in combination with collaborative approaches (Keenan & Jankowski, 2019). The increasingly apparent multi-stakeholder nature of policymaking has led to the recent development of SDSS that aim to address group decision-making (Jankowski, 2009). In parallel, many participatory approaches for spatial decision-making have emerged, which require more collaborative tools and methodologies (Li & Milburn, 2016). Geodesign is a leading methodology to support spatial planning as it tightly couples the creation of design proposals with impact simulations informed by the geographical context (Steinitz, 2012), and it ensures close collaboration between stakeholders and decision-makers throughout the entire process, from problem identification to proposed interventions. Specifically, geodesign offers a framework that facilitates collaboration in iterative spatial decision processes involving future spatial interventions in a geographic study area. Figure 1 illustrates the structure of this framework. The process involves three iterative feedback loops, which aim to (1) understand, scope, and model a geographic study area, (2) specify methods to operationalize the process, and (3) carry out the geodesign process tasks. Each iteration addresses a set of six questions, each of which is answered by specific models. The framework represents the collaboration as the interaction required between four types of stakeholders: the people of the place, geography-oriented natural and social sciences experts, design and planning professionals, and their IT technologists.

Urban Living Labs

There are multiple ways to involve the affected people in the planning process. The International Association for Public Participation (IAP2) has devised a spectrum that explains the different levels of public participation (Figure 2). As this spectrum shows, merely involving the public in the planning process does not mean that their tacit knowledge and community preferences are used to improve that process. SDSS are used across the full range of the spectrum, from acting as information systems to empowering the stakeholders to become the decision-makers. Living labs constitute an effective method for incorporating innovation and technology into participatory and multidisciplinary planning processes. According to the European Network of Living Labs (ENoLL), living labs can be regarded as "user-centered, open innovation ecosystems based on a systematic user co-creation approach in public-private-people partnerships, integrating research and innovation processes in real-life communities and settings" (ENoLL, 2019).
Figure 1. The geodesign framework (Steinitz, 2012). Graphic by author Libera Amenta.

ULLs are comprised of physical and virtual environments, in which public-private-people partnerships experiment with an iterative method to jointly develop innovations (i.e., co-creation) that include the involvement of end-users and aim at identifying and addressing urban sustainability challenges. The main characteristics of a ULL are geographical embeddedness, experimentation and learning, participation and user involvement, leadership and ownership, and evaluation and refinement (Voytenko, Mccormick, Evans, & Schliwa, 2016). The ENoLL approach is based on the quadruple helix model of partnership, which categorizes actors as the government, industry, the public and academia, who work together to generate innovative solutions in a process involving five phases, namely co-exploring, co-design, co-production, co-decision, and co-governance (ENoLL, 2019).

Integrating Geodesign, Living Labs and Technology

This article argues that collaboration between actors within an iterative geodesign process with feedback loops plays a central role alongside innovation and the implementation of new technology, which can be facilitated through a living lab approach. The integration of geodesign, the living lab approach, GIS, MFA and LCA into a single support environment (Figure 3) allows for the following innovations: (1) MFA in a geographical context: via a new method of Activity-Based Spatial Material Flow Analysis (AS-MFA; Resource Management in Peri-Urban Areas [REPAiR], 2017), which geo-locates the activities and actors involved in resource flows; (2) Visualization of resource flows: via AS-MFA data analysis and visualization tools, in order to gain insights into the status quo at early stages of the solution creation process rather than only at the evaluation stage; (3) Simulation of proposed changes: applying the solutions as simulations of changes in the overall mapped resource flow network; (4) LCA for impact assessment: using the AS-MFA data to describe the LCA baseline scenario and the simulated resource flow network of proposed strategies. The GDSE provides an environment to support the collaborative efforts towards improving resource management and thus enhancing the transition towards a CE. It incorporates all the relevant methodologies identified in the theoretical framework and provides both the researchers and the stakeholders with an overall structure and tools. The environment consists of software, hardware and processware.

Software

The GDSE is a core product of an ongoing EU-funded research project called REPAiR. It features an open source prototype web application, available on the project's website, that supports both the decision-making process and the research required for each of the five steps that guide the living lab process for a study area (Figure 4). REPAiR aims to implement the GDSE in living labs in six European metropolitan areas to develop place-based eco-innovative spatial development strategies that aim at a quantitative reduction of waste flows in peri-urban areas (REPAiR, 2019b). Within REPAiR, a GDSE-related eco-innovative strategy is understood as: An alternative course of action aimed at addressing the objectives identified within a Peri-Urban Living Lab (PULL) for developing a more circular economy in peri-urban areas, which can be composed of a systemic integration of two or more elementary actions, namely eco-innovative solutions (EIS).
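Innovation (1) above, the AS-MFA, attaches locations and activity codes to the actors behind each flow. The sketch below shows one minimal way such records could be structured; the dataclass fields, the actor names and the NACE-style activity codes are illustrative assumptions, not the actual REPAiR data model.

```python
# Sketch of the Activity-Based Spatial MFA (AS-MFA) idea: flows are recorded
# between geo-located actors, each tagged with an economic activity code.
# Field names, codes and values are illustrative, not the REPAiR schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Actor:
    name: str
    activity: str          # e.g., a NACE-style economic activity code
    lon: float
    lat: float

@dataclass(frozen=True)
class Flow:
    origin: Actor
    destination: Actor
    material: str
    tonnes: float

bakery = Actor("Bakery BV", "C10.71", 4.89, 52.37)
brewer = Actor("Brewery NV", "C11.05", 4.92, 52.35)
flows = [Flow(bakery, brewer, "stale bread", 45.0)]

# Aggregating by activity pair recovers the classic (non-spatial) MFA view,
# while the coordinates let the same records be drawn on a map.
for f in flows:
    print(f"{f.origin.activity} -> {f.destination.activity}: "
          f"{f.tonnes} t of {f.material}")
```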
(REPAiR, 2018a) For ease of reading, from this point forward, "eco-innovative solutions" will also be referred to as "solutions" or "EIS", while eco-innovative strategies will also be referred to as "strategies". While designed and tested for the specific purposes of the REPAiR case studies, the GDSE is meant to be easily reusable, which is one of the guiding principles of the software development process. Thus, the GDSE is built with free and open source components and has an open license. All versions of the source code are available in a public GitHub repository (https://github.com/MaxBo/REPAiR-Web). Figure 5 shows the current back-end integration of the various components into a single platform that supports a range of functions: data management and storage, data visualization, stakeholder input, simulation and assessment of alternatives, and a connection to an external LCA assessment. Data storage and management is done via the Open Science Framework (https://osf.io). GeoServer (http://geoserver.org) is used to publish and host spatial data layers, as web feature services, which are incorporated and visualized in the GDSE (see the request sketch below); these layers are prepared externally using QGIS (https://qgis.org). All the AS-MFA data used for the analysis and assessment are stored in a PostgreSQL object-relational database (https://www.postgresql.org). LCA is conducted externally. All outputs are displayed in the GDSE. Vagrant (https://www.vagrantup.com) is used to provide a reproducible software environment setup that is independent of the operating system. The two main roles supported by the GDSE are the researcher and the stakeholder (Table 1). A researcher (or a group of researchers) is responsible for organizing the geodesign process; finding and involving the relevant stakeholders; collecting, preparing, uploading and selecting relevant data; performing impact assessment; preparing and holding the interactive workshop sessions; and collecting stakeholder input from those sessions for use in subsequent ones. A stakeholder (or a group of stakeholders) uses the system at workshop sessions, which are facilitated and moderated by researchers. The GDSE provides different functions within two separate environments for these roles: the setup mode and the workshop mode.

Hardware

The GDSE hardware component features interactive touch-enabled screens to facilitate workshop communication in two ways: (1) between users and the GDSE software (tools and support information), and (2) dialogue between the users. The touch tables (Figure 6) can easily be switched between horizontal and vertical mode, depending on the purpose (group discussions or presentations).

Processware

The processware involves a series of interconnected workshops and the guidelines on how to organize these workshops. These are part of REPAiR's PULLs (REPAiR, 2019a). A PULL workshop is a meeting in which stakeholders from the field of waste and resource management gather to discuss waste management issues related to the future use of an area or region. Stakeholders work together in small groups of 2 to 6 participants, with each group using the GDSE on a touch table in a co-design process of solutions that together make up CE strategies.
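Returning to the software stack described above: GeoServer publishes spatial layers as web feature services (WFS), which a client can request as GeoJSON. The sketch below shows a generic WFS GetFeature request; the endpoint URL and layer name are placeholders, not the actual REPAiR deployment.

```python
# Sketch of requesting a published layer from a GeoServer WFS endpoint as
# GeoJSON. Endpoint and layer name are hypothetical placeholders.
import requests

WFS_URL = "https://example.org/geoserver/wfs"    # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "repair:wastescapes",            # hypothetical layer name
    "outputFormat": "application/json",
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
collection = response.json()                      # a GeoJSON FeatureCollection
print(f"Fetched {len(collection['features'])} features")
```

Serving the layers over a standard protocol like WFS is what lets the web front end, QGIS and any other OGC-aware client read the same data without duplication.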
PULL workshops typically follow the Charrette System's five-part format (Lennertz & Lutzenhiser, 2006): (1) Pre-workshop survey + introduction and goals; (2) Support information + GDSE demonstration; (3) Division into small groups and (cross-group) touch table assignment using the GDSE; (4) Presentation of results; (5) Plenary session and discussion/post-workshop survey. A REPAiR PULL features four types of workshops, which are categorized according to the first four phases of the REPAiR co-creation process in living labs: co-exploring, co-design, co-production, and co-decision (REPAiR, 2018a). The fifth phase, 'co-governance', does not involve PULL workshops.

Co-Exploration Workshop

This workshop takes place at the end of the co-exploration PULL phase and aims at: (1) Developing a common understanding of the territory, including the mapping of wasted landscapes, or wastescapes (Amenta & van Timmeren, 2018), and of stakeholders; (2) Categorizing and defining the main CE challenges and objectives. Table 2 shows the process leading up to the workshop. The first two geodesign questions are addressed with the help of GIS and MFA. This involves mapping the region, defining the stakeholders and experts, and selecting and mapping key material flows. The GDSE is used to show and interactively discuss the study area and its status quo (maps, charts, stakeholders and key flows), and thereby helps to build common knowledge among local research teams and other participants of the PULL. Moreover, the GDSE supports groups of stakeholders in jointly defining challenges and objectives, as well as in thinking about paths for developing eco-innovative strategies. Concretely, spatial and social analyses, as well as material flows and stocks, are displayed and discussed using interactive maps and Sankey diagrams linked to these maps (a minimal Sankey sketch follows below). The process model relates to the dynamics of the system and is meant to represent the material flows within the chosen temporal and spatial scope. Therefore, the first task is identifying a key flow (e.g., organic waste, construction and demolition waste, electronic waste) for further investigation. The key flow is chosen in a collaborative process according to criteria defined by the stakeholders. As explained in Section 2.1, MFA is typically used for detailed analyses of resource flows. The GDSE does not only incorporate a standard MFA method but also connects it with a geographical context. The new AS-MFA method (REPAiR, 2017), by geographically locating the activities and actors involved in the resource flows, enables the further (iterative) identification of stakeholders and experts for potential strategies.

Co-Design Workshop

This workshop takes place at the end of the PULL phase co-design. Its main aims are: (1) Identifying, mapping and visualizing key activities and actors in the value chains that should be included in the discussion and development of eco-innovative solutions; (2) Identifying specific CE challenges in the study area; (3) Identifying and mapping actor networks for each individual eco-innovative solution's development.

Table 2. Addressing geodesign questions at the PULL phase co-exploration.
Representation Model ("How should the study area be described?"): GIS. Tasks: definition and mapping of Region, Focus, and Sample Areas; definition and mapping of wastescapes.
Process Model ("How does the study area operate?"): MFA & GIS. Tasks: selection of key resource flows; definition and mapping of material flows and the waste management system.
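The Sankey-diagram view mentioned above, used to discuss the status quo of material flows with stakeholders, can be sketched in a few lines with plotly. The flow categories and values below are illustrative only, not the AMA data.

```python
# Minimal Sankey diagram of waste flows, of the kind used to discuss the
# status quo during the co-exploration workshop. Values are illustrative.
import plotly.graph_objects as go

labels = ["households", "collection", "incineration", "composting", "landfill"]
source = [0, 1, 1, 1]        # indices into `labels`
target = [1, 2, 3, 4]
value  = [120, 80, 30, 10]   # kilotonnes per year

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20),
    link=dict(source=source, target=target, value=value),
))
fig.update_layout(title_text="Household waste flows (kt/yr, illustrative)")
fig.show()
```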
Table 3. Addressing geodesign questions at the PULL phase co-design.
Evaluation Model ("Is the current study area working well?"): GIS & LCA. Tasks: sustainability assessment of the status quo; assessment of the status quo's resource flow circularity.
Change Model ("How might the study area be modified?"): MFA. Tasks: definition and common understanding of what constitutes an EIS; characteristics and effect of EIS on the process model.

The GDSE maps material flows and actors (e.g., companies) in the area based on their commercial activity. The GDSE stores the developed solutions, their descriptions and the selection of the potential actors involved. The third geodesign question ("is the current study area working well?") refers to an assessment of the status quo or baseline scenario that allows for future comparisons with the proposed strategies (alternative future scenarios). The GDSE evaluates the status quo in terms of flow indicators based on the MFA data and a sustainability assessment. Flow indicators are first identified from the existing literature (Zhang, Yang, & Yu, 2009) and then selected by the stakeholders in a collaborative process during a co-design workshop. REPAiR defines an initial list of flow indicators, which includes flow amounts (for each material or combination of materials, e.g., vegetal waste vs. separate vegetables and fruits), flow structure (e.g., the percentage of renewable material in each flow), flow intensity (e.g., the amount of flow consumed/conducted per person), flow efficiency (the relationship between economic factors and each material flow), and flow density (material consumption/conduction to sustain urban development) (REPAiR, 2019a); an illustrative computation follows below. To undertake the sustainability assessment of the status quo for the study area, the REPAiR team has developed a framework for conducting a sustainability assessment across four impact categories (Taelman et al., 2018). This framework will be used to assess the impacts of the developed eco-innovative strategies at later stages of the PULL.

Co-Production Workshop

This workshop takes place at the end of the PULL phase co-production and aims to attain: (1) The ranking of objectives per decision-maker group; (2) A set of flow targets the group wants to achieve; (3) One strategy per small group and key flow. Table 4 illustrates how the GDSE addresses geodesign questions 4 and 5 with the help of GIS and MFA. The third phase aims to develop one eco-innovative strategy per small group and key flow to address the objectives defined in the earlier workshops. Each small group selects several solutions, which together make up their eco-innovative strategy. Co-production workshops focus mainly on the development of eco-innovative strategies, expert knowledge on the specific eco-innovative solutions that make up the strategies, and the relative importance of the sustainability indicators, which are based on the LCA methodology and which measure the various impacts of the strategies developed. The main outcomes of this workshop are ranked CE objectives, weights of the sustainability indicators, selected eco-innovative solutions and developed eco-innovative strategies. Multi-criteria analysis (MCA) methods support the comparison of the strategies' impacts on sustainability.
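The flow-indicator families listed above can be illustrated with simplified formulas. These are stand-ins for, not the exact, REPAiR definitions; the flow records and the GDP figure are invented, while the population and focus-area size are taken from the case study description later in the article.

```python
# Illustrative computation of the flow-indicator families listed above.
# Simplified stand-in formulas, not the exact REPAiR definitions.
flows = [
    {"material": "vegetal waste", "tonnes": 30_000.0, "renewable": True},
    {"material": "mixed residual", "tonnes": 90_000.0, "renewable": False},
]
population = 2_400_000        # AMA inhabitants (over 2.4 million)
area_km2 = 539.0              # AMA focus area, from the case study
gdp_meur = 150_000.0          # hypothetical regional GDP, million EUR

total = sum(f["tonnes"] for f in flows)
indicators = {
    "flow amount (t/yr)": total,
    "flow structure (% renewable)":
        100.0 * sum(f["tonnes"] for f in flows if f["renewable"]) / total,
    "flow intensity (kg/person/yr)": 1_000.0 * total / population,
    "flow efficiency (t per MEUR GDP)": total / gdp_meur,
    "flow density (t per km2/yr)": total / area_km2,
}
for name, value in indicators.items():
    print(f"{name:34s} {value:12.1f}")
```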
Co-Decision Workshop

This workshop takes place at the end of the PULL phase co-decision and aims to reach a common understanding of: (1) The differences and similarities between the ranked objectives of the stakeholder small groups; (2) The flow indicators that were used for setting targets for specific objectives; (3) The differences and similarities between the strategies in terms of the related solutions, across stakeholder groups, and the locations of the EIS implementations; (4) How the specific processes in the value chain of the key flows contribute to the different impacts, in particular the extent to which the developed strategies modify the key flows and meet the various targets set; (5) Potential sustainability assessments of the strategies developed by the individual small groups; (6) Agreements and disagreements (i.e., the consensus level) on objectives, targets, related strategies and where the selected EIS have been implemented, for all key flows.

Table 4. Addressing geodesign questions at the PULL phase co-production.
Change Model ("How might the study area be modified?"): relating EIS to objectives.
Decision Model ("How should the study area be changed?"): MCA. Tasks: ranking of objectives; pairwise comparison of the relative importance of sustainability indicators; defining the targets.

Table 5 shows how the GDSE supports the co-decision phase. The last two geodesign questions are addressed with the help of LCA and flow assessment calculations. The main outcomes are a concrete plan with detailed implementation actions for each eco-innovative strategy, a list of actors and stakeholders who will collaborate in the implementation of each specific strategy, and a timeline for the actual implementation of each strategy and the corresponding EIS.

Table 5. Addressing geodesign questions at the PULL phase co-decision.
Impact Model ("What differences might the change cause?"): LCA, flow assessment calculation. Tasks: sustainability and flow assessment of eco-innovative strategies.
Decision Model ("How should the study area be changed?"): designing the rules of the system. Tasks: establishing and documenting the agreements and conflicts between different interests and groups of decision-makers; triggering future local development and supporting decision-making processes.

The assessment of the proposed strategies is done using two methodologies: LCA and the assessment of flow changes. While the flow changes are assessed in real time during the workshop, the LCA is performed after the workshop by LCA practitioners, due to the complexity of the LCA as well as the current lack of software interoperability. Assessing flow changes is done by comparing the status quo flow indicators set during the co-design phase with the anticipated changes introduced by the strategies in the co-production phase. Once a combination of solutions and their implementation areas is chosen by the workshop participants, a flow calculation algorithm redistributes the flows between the economic activities, keeping the overall mass balance of the affected flows consistent. The algorithm hypothetically distributes the total surplus or shortfall within an economic activity among all the actors present in the chosen geographical area of implementation (see the sketch below). That way, the flow changes are reflected in the chosen indicators, and their values can be compared with the targets that were set in the co-production phase. At the time of writing this article, some modules of the GDSE are not yet fully operational. However, the GDSE has already been used in the workshops described in this article, which have been held in parallel to the GDSE development process. The GDSE is designed with the help of its intended end-users, in line with the living lab approach, in which end-users test and provide constant feedback on the support tools. This is also in line with the recommendation of Uran and Janssen (2003) that SDSS should be developed to serve their intended purpose rather than the purposes of the study team.
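The flow-redistribution step described above can be sketched as a proportional scaling over the actors in the implementation area, so that the total diverted mass is removed while the network's mass balance stays consistent. The function name, actors and tonnages below are illustrative assumptions, not the GDSE's actual algorithm.

```python
# Sketch of the flow-redistribution step: a solution diverts part of an
# activity's flow, and the shortfall is spread proportionally over the
# actors inside the chosen implementation area. Names/values illustrative.

def redistribute(actor_flows, diverted_total):
    """Scale each actor's flow so the group total drops by `diverted_total`,
    preserving the relative shares (and hence the overall mass balance)."""
    current_total = sum(actor_flows.values())
    factor = (current_total - diverted_total) / current_total
    return {actor: tonnes * factor for actor, tonnes in actor_flows.items()}

# Bread waste sent to incineration, per bakery in the implementation area:
before = {"Bakery A": 40.0, "Bakery B": 25.0, "Bakery C": 10.0}
after = redistribute(before, diverted_total=30.0)  # 30 t now goes elsewhere

for actor in before:
    print(f"{actor}: {before[actor]:5.1f} t -> {after[actor]:5.1f} t")
print(f"diverted: {sum(before.values()) - sum(after.values()):.1f} t")
```

The diverted 30 t would then appear as a new flow to the receiving activity (e.g., a brewery), so the indicator values recomputed on the modified network can be compared against the targets set in the co-production phase.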
The next section presents the application of the GDSE methodology to the Amsterdam case study.

The Amsterdam Peri-Urban Living Lab

The GDSE methodology is tested and applied as part of the ongoing living lab of the AMA, which encompasses the city of Amsterdam and the provinces of North Holland and Flevoland. It comprises 32 municipalities and a total population of over 2.4 million inhabitants. With an area of 539 km², the AMA focus area (Figure 7) is located in the peri-urban areas in the west and south-west of the AMA and constitutes a pilot case study of REPAiR. Yearly household waste data were gathered for the AMA from CBS, Statistics Netherlands. Waste data for companies were retrieved via the Dutch register for electronic waste notifications and communication of the National Contact Point for Waste (Dutch acronym: LMA), which describes the supply, composition and processing of company/industrial waste in the Netherlands. Both datasets describe waste flows for the year 2016. These data are entered by the collectors and managed by the government, and they contain information on the type of waste (Eural code), the waste generator (e.g., name and location of the company), the waste collector (name and location of the waste treatment), and the type of waste treatment.

Using a GDSE for Co-Developing Eco-Innovative CE Strategies in Amsterdam

The first four phases of the PULL process in the AMA involved four types of workshops, namely co-exploration, co-design, co-production, and co-decision (REPAiR, 2018a). At the time of writing this article, the GDSE had been used in the first three phases of the ongoing PULL process in the AMA. Three PULL workshops have thus been organized with local governments and policy makers, local business representatives, international partners of the REPAiR project consortium, and the PULL hosting team. This section presents results from these workshops.

Results: Co-Exploration

The workshop aimed to define key waste and resource management challenges in the study area by means of: (1) Verifying challenges already identified in previous interviews with stakeholders and in the literature review; (2) Adding new challenges where required; (3) Developing challenges to a detailed level along with suggested solution paths. The first step was to share with the participants relevant information on the AMA, which had been collected, categorized and uploaded to the GDSE by the PULL team using the GDSE setup mode. The stakeholders were then asked to discuss and modify (i.e., validate, correct, remove, complement) all the information where deemed necessary. This information included (1) maps of the focus area (topographic and related to resource and waste management), (2) relevant charts with the first list of circularity challenges of the area, and (3) the first list of the main stakeholders of the PULL process. "Challenge trees" were used as the main materials to present CE challenges in the AMA to stakeholders, both in an A3 paper format and digitally in the GDSE.
Each branch on a challenge tree (Figure 8, right panel) represents one main challenge for the AMA, and each sub-branch represents a specific challenge within that main challenge. Above each challenge branch are two fringes, each containing a question for the participants: "what if we do this? (where and who should be involved?)" and "what should be assessed?". Participants were asked to provide feedback on each challenge tree by suggesting modifications and inserting sticky notes for each fringe. The results were fed directly into the GDSE (Figure 8). The main workshop outcomes included a categorized list of CE challenges for the AMA along with possible solution paths.

Results: Co-Design

The main objective of this workshop was to develop initial sketches of eco-innovative solutions towards a CE in the AMA, based on the CE objectives identified in the previous workshop. The specific workshop aims were to: (1) Verify and rank the identified objectives with the selected stakeholders; (2) Develop initial sketches for how to meet the objectives, producing preliminary sets of EIS that follow a common GDSE-friendly template. The output from the previous co-exploration workshop (the CE challenges) was used as input for this workshop. "Solution sheets" were used as the main materials to communicate eco-innovative solutions to participants and to describe solutions using a common template. A solution sheet (Figure 9) was an A3-formatted sheet containing specific information about a solution, arranged in three panels: a solution card (with the main characteristics, category and description), a CE diagram of the solution, and a system diagram with the activities and flows in the solution. Participants were asked to review and complete the sheet and to suggest how to modify the solution (Figure 9). The main workshop outcome was a catalogue of solutions that addressed the ranked CE objectives in the AMA. The solutions in this catalogue were digitized and fed directly into the GDSE to make them available for ensuing PULL workshops.

Results: Co-Design/Co-Production

The third PULL workshop was the most recent and was categorized as part of both the co-design and co-production phases. It aimed at further developing the solutions discussed in the previous workshop. The workshop included three parallel sessions, each focusing on one key flow category: food waste, wastescapes, and construction and demolition waste. A GDSE-enabled touch table was available for each session (Figure 10). The GDSE was used to provide support information on flows, solutions, activities, and actors. The participants were asked to work on one session table at a time and to select solutions for further development. The specific main goals of the workshop were to: (1) Co-develop EIS, following a GDSE-friendly template, based on an initial set of EIS; (2) Match EIS with CE objectives. New solution sheets were used as materials. The GDSE was used as the main software tool on three touch tables to help users retrieve information concerning the solutions they were discussing and working on. Stakeholders used the GDSE to analyze possible actors and existing waste streams related to the eco-innovative solutions they worked on. Figures 11 and 12 illustrate how the stakeholders used the GDSE to map actors relevant to a food waste EIS and to visualize the waste streams connected to this EIS. The main outcome of this workshop was the updated EIS catalogue for the AMA.
Through a research-by-design approach, together with local stakeholders, young designers and students of industrial ecology, architecture and urbanism, and with the help of the GDSE, 27 eco-innovative solutions were developed on the basis of aspects such as relevance for practice, possible areas for further EIS implementation, actors to be involved, the business model to implement, and potential policy changes. Figure 13 shows an example of one eco-innovative solution, mycelium blocks for wastescapes, modelled in the GDSE. The CE diagrams are displayed for this solution: the current linear state (on the left) and a newly proposed, more circular, value chain (on the right). The actors involved in this solution can also be retrieved on an interactive map. The EIS catalogue has been uploaded to the GDSE and will be used by participants of subsequent co-production and co-design workshops to support the process of combining EIS into strategies.

Effectiveness of PULL Workshops

Surveys were conducted before and after the workshops. Pre-workshop surveys contained questions about the participants' workshop expectations, general expertise, and interest in eco-innovative solutions. The surveys were completed by an average of 19 workshop participants, whose backgrounds included human geography, urban design, architecture, and MSc studies in Architecture, Industrial Ecology and Urbanism. They rated their own expertise/interest in EIS as 6.4 on a 1-10 scale. Post-workshop surveys contained questions on their experience and on specific aspects of workshop effectiveness (Table 6). In general, participants gave good ratings to all workshops; in particular, the third workshop had the highest rating for average effectiveness and for specific workshop features. The next steps for the PULL in the AMA will involve the further operational development of the EIS that resulted from this workshop towards more detailed solutions that can be represented, assessed and compared iteratively in the GDSE. Dedicated PULL meetings will be held separately for each material flow investigated, and will host smaller groups of stakeholders who are experts in the different material flows in order to further detail the EIS in the GDSE. Stakeholders will be asked to jointly define, and interactively modify, strategies for specific key flows by combining one or more implementations of solutions (Figure 14). The GDSE will provide real-time feedback on the impacts of strategies on flow changes and sustainability indicators (Figures 15 and 16).

Figure 14. GDSE screenshot showing a strategy for the key flow "food waste", which is composed of three solutions, each with its own area of application and list of actors implementing them. Any solution in the strategy can be edited in a separate pop-up window (e.g., the EIS "from bread to beer" in the strategy shown here).

Conclusions

To address the question of whether and how geodesign can be used to improve waste and resource management, this article proposes a geodesign-based tool for supporting a collaborative process of developing eco-innovative strategies to advance a CE in peri-urban areas. Geodesign can provide a helpful framework for improving waste and resource management, as is evident from the observations and outcomes of the PULL workshops and the positive reactions of the participants in the surveys. In fact, geodesign allows for a structured and comprehensive organization of the process: the diagnosis of challenges, the design and selection of solutions, and
decision-making on strategies for a given territory with close stakeholder involvement. In addition, the GDSE integrates spatial data on material flows and the related actors, which are presented in a visual and accessible way, ensuring a sound and accessible evidence base for the participatory process. In order to address several limitations of geodesign, the GDSE brings human creativity together with complex spatial and metabolic analysis methods in a digital interface, within the participatory context of living labs. This allows for the informed coordination of waste management activities in space and the evidence-based co-design of innovative spatial solutions with stakeholders. This integration anchors the geodesign process in ongoing experimentation in the study areas and enables the continuous engagement of stakeholders in the analysis, building on relatively simple visualizations of complex data on material flows in space, and in the co-design of innovative circular solutions. Geodesign thinking enables the addition of a spatial dimension to typically non-spatial analysis methods (e.g., MFA). Moreover, as the stakeholders argued, the GDSE's key advantage is its ability to make the exploration, design and decision-making process transparent to the participants. Naturally, there are limitations to the GDSE approach. Firstly, even though the potential of the GDSE to support the participatory development of spatial waste and resource management strategies has been demonstrated and validated by the stakeholders involved, the tool is still a work in progress. The strategies developed so far with the GDSE have not yet been taken up and implemented by the Amsterdam region stakeholders. Secondly, the GDSE's capacity to assist in the analysis phase and in the spatial visualization of material flows depends on the availability of data. Likewise, data quality is a critical concern for the GDSE's ability to model the impacts of the strategies co-created with stakeholders. While a robust dataset on material flows was available in the Amsterdam pilot case study, considerable effort was needed to collect the data and feed them into the GDSE, and the availability of such data cannot be taken for granted in all regional contexts. Thirdly, given the complexity and uncertainty involved in enacting CE strategies, a successful GDSE application in living labs critically depends on the ability to attract and retain the engagement not only of key territorial stakeholders along the entire value chain, but also of experts with specific technical knowledge of the processes and technologies envisaged in the co-designed strategies. Considering the busy agendas of some stakeholders, this proves challenging in practice, as they need to commit and allocate precious time to repeated interactions in the living lab over several months, which cannot be taken for granted. Thus, future GDSE applications require robust procedures for identifying the most relevant and knowledgeable stakeholders and keeping them involved in the process. Successful implementation in living lab workshops also requires the involvement of an experienced moderator. Fourthly, while the GDSE allows for the estimation of the impacts of the strategies co-designed in the living lab, there is considerable uncertainty about their actual real-life effects. This highlights the need to monitor the outcomes of the decision-making process facilitated by the GDSE and the implementation of the strategies developed.
Integrating monitoring measures within the proposed approach would allow for validation and create scope for an iterative learning process among the stakeholders. Overcoming these limitations will require further development and testing of the tool, as well as scrutiny of the implementation of the strategies developed using the GDSE over a longer time frame. To conclude, the combination of the GDSE and urban living labs provides a relational space that includes stakeholders in a structured process in a specific location, spanning a longer time period and allowing for a more sustained process of co-exploration of the status quo, co-creation of knowledge, and co-production of solutions and strategies. This long-term iterative engagement between stakeholders not only empowers them but also enables a more in-depth analysis and a better integration of the various strands of knowledge, while building on inputs from research at each iteration. An open source GDSE facilitates the implementation of innovation in a living lab. The GDSE is developed in cooperation with end-users, which facilitates not only the continuous tailoring of the tool based on end-user feedback, but also a smoother adaptation of this open source tool to other case studies or different living lab settings. Future work will focus on a comparative analysis of GDSE applications in different regional settings.
SRI-30827, a novel allosteric modulator of the dopamine transporter, alleviates HIV-1 Tat-induced potentiation of cocaine conditioned place preference in mice

Objectives: HIV-1 Tat (transactivator of transcription) protein disrupts dopaminergic transmission and potentiates the rewarding effects of cocaine. Allosteric modulators of the dopamine transporter (DAT) have been shown to reverse Tat-induced DAT dysfunction. We hypothesized that a novel DAT allosteric modulator, SRI-30827, would counteract Tat-induced potentiation of cocaine reward. Methods: Doxycycline (Dox)-inducible Tat transgenic (iTat-tg) mice and their G-tg (Tat-null) counterparts were tested in a cocaine conditioned place preference (CPP) paradigm. Mice were treated for 14 days with saline or Dox (100 mg/kg/day, i.p.) to induce Tat protein. Upon induction, mice were place conditioned for two days with cocaine (10 mg/kg/day) after a 1-h daily intracerebroventricular (i.c.v.) pretreatment with SRI-30827 (1 nmol) or a vehicle control, and final place preference was assessed as a measure of cocaine reward. Results: Dox treatment significantly potentiated cocaine-CPP in iTat-tg mice over the response of saline-treated control littermates. SRI-30827 treatment eliminated Tat-induced potentiation without altering normal cocaine-CPP in saline-treated mice. Likewise, SRI-30827 did not alter cocaine-CPP in either saline- or Dox-treated G-tg mice incapable of expressing Tat protein. Conclusions: These findings add to a growing body of evidence that allosteric modulation of DAT could provide a promising therapeutic intervention for patients with comorbid HIV-1 and cocaine use disorder (CUD).

Introduction

While severe HIV-associated neurocognitive disease (HAND) has declined significantly, milder impairments in attention, concentration, memory, and motivation persist, affecting approximately 50 % of HIV-positive patients [1]. Moreover, substance use disorders are commonly comorbid with HIV infection and are known to exacerbate the progression of HAND [2]. As HIV is not thought to directly infect neurons, the dysregulation of motivational processes has been attributed to the action of HIV-1 proteins [3], some of which have been directly linked to cognitive impairment and brain injury. Among these proteins, the transactivator of transcription (Tat) is known to act as a negative allosteric modulator of the dopamine transporter (DAT), inhibiting dopamine (DA) uptake [4,5]. Our recent study demonstrated that the disruption of DAT-mediated dopaminergic transmission caused by Tat contributes to Tat-induced potentiation of cocaine reward and to the deficits in learning and memory seen in HAND [5], making Tat an attractive pharmacologic target [6]. Our previous study reported that SRI-30827, a closely related analog of SRI-32743, attenuated Tat-induced inhibition of [3H]WIN35428 binding through its influence on the tyrosine470 and tyrosine88 residues in the EL6 region of hDAT [7]. These two hDAT residues are critical for Tat protein's allosteric modulation of DAT [4]. Furthermore, SRI-32743 dose-dependently reversed Tat-induced potentiation of cocaine-CPP and impairment of novel object recognition (NOR) in mice [6]. The doses of SRI-32743 tested were without effect on cocaine-CPP or NOR in mice lacking Tat protein expression. Extending the SRI-32743 results, this study further examined whether SRI-30827, which has a quinazoline structure, may attenuate Tat-induced potentiation of cocaine-CPP.
Transgenic mouse models

Adult male inducible Tat transgenic (iTat-tg) mice and G-tg (Tat-null) mice [8] were obtained from colonies at the University of Florida, as reported previously [6]. Both the iTat-tg and Tat-null mice genetically express a "tetracycline-on (TET-ON)" system, but only iTat-tg mice possess the Tat 1-86 coding gene [8]. Integration into the gene regulator for the astrocyte-specific glial fibrillary acidic protein (GFAP) promoter confines Tat expression to the CNS [8,9]. Based on the previously demonstrated expression of Tat protein [8,9], the current study utilized doxycycline at a dose of 100 mg/kg/day, i.p., for 14 days to maximize the induction of Tat protein.

Drugs

All drugs injected i.p. or s.c. were administered in a volume of 10 mL/kg of body weight. Drugs injected i.c.v. were administered in a fixed volume of 5 μL. Cocaine hydrochloride and doxycycline hyclate (Sigma-Aldrich, St. Louis, MO, USA) were dissolved in saline (0.9 % sodium chloride). SRI-30827, synthesized at the Southern Research Institute (Birmingham, AL, USA) [7], is poorly soluble in saline and was therefore dissolved in 100 % DMSO and administered at 1 nmol/day, i.c.v.

Conditioned place preference (CPP)

Cocaine-CPP was performed with a three-chamber apparatus (San Diego Instruments, San Diego, CA, USA) using a counterbalanced design [6]. Place conditioning was performed on days 15 and 16 as reported [6] and as described in Figure 1. On test days, mice were allowed to move freely between chambers in a 30-min preference test.

iTat-tg mice

Exposure to Tat protein causes potentiation of cocaine-CPP. Prior to place conditioning, there were no significant differences in the initial place preference responses between any of the six groups of iTat-tg mice (one-way ANOVA: F(5,156) = 0.22, p = 0.95). All groups of iTat-tg mice conditioned with cocaine demonstrated CPP (Table 1), while, as expected, mice conditioned with saline did not (Table 1). When pretreated with vehicle (i.c.v.) and place conditioned with cocaine, control iTat-tg mice that received a 14-day induction with doxycycline demonstrated a significant place preference (factor: treatment × conditioning, F(5,155) = 3.90, p = 0.002; two-way RM ANOVA with Tukey's HSD post hoc test, Figure 2) that was significantly greater, by 2.6-fold, than that of their saline-induced littermates (Figure 2, left panel; *p = 0.04, Tukey's HSD). SRI-30827 ameliorates Tat-induced potentiation of cocaine-CPP. SRI-30827 pretreatment significantly reduced cocaine-CPP in doxycycline-induced iTat-tg mice compared to those pretreated with vehicle (Figure 2, left and center panels; † p = 0.04, Tukey's HSD). There was no difference in cocaine-CPP between doxycycline-induced and saline-induced iTat-tg mice pretreated with SRI-30827 (Figure 2, center panel; p = 0.35, Tukey's HSD). SRI-30827 does not affect cocaine-CPP in iTat-tg mice induced with saline. Among the control mice given 14 days of saline and place conditioned with cocaine, there was no difference in cocaine-CPP between groups pretreated with SRI-30827 or vehicle (Figure 2, left and center panels; p > 0.99, Tukey's HSD), demonstrating that SRI-30827 itself does not alter cocaine-CPP in the absence of Tat protein. SRI-30827 itself does not demonstrate rewarding or adverse effects. Mice pretreated with SRI-30827 and place conditioned with only saline did not show a place preference for either the saline-alone or the SRI-30827 + saline chamber (Figure 2, right panel; p = 0.996, Tukey's HSD).
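The group comparisons above rest on ANOVAs over chamber-preference scores. The sketch below is a simplified stand-in, not the exact analysis: it reduces each mouse to a post-minus-pre change score and runs an ordinary two-way ANOVA (induction × pretreatment) on synthetic data, whereas the published analysis is a two-way repeated-measures ANOVA with Tukey's HSD post hoc tests.

```python
# Simplified stand-in for the CPP statistics: per-mouse preference scores
# (time in drug-paired minus saline-paired chamber) are reduced to a
# post-minus-pre change score, then tested with an ordinary two-way ANOVA.
# All data are synthetic; the assumed effect mimics the reported pattern.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for induction in ("saline", "dox"):
    for pretreat in ("vehicle", "sri30827"):
        # Assumed effect: Dox + vehicle potentiates cocaine CPP.
        boost = 150.0 if (induction, pretreat) == ("dox", "vehicle") else 0.0
        for _ in range(12):
            pre = rng.normal(0.0, 40.0)
            post = pre + 120.0 + boost + rng.normal(0.0, 60.0)
            rows.append({"induction": induction, "pretreat": pretreat,
                         "cpp_change": post - pre})

df = pd.DataFrame(rows)
model = smf.ols("cpp_change ~ C(induction) * C(pretreat)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects + interaction table
```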
SRI-30827 does not alter cocaine-CPP in Tat-null mice incapable of expressing Tat protein. Prior to place conditioning, there were no significant differences in the initial place preference responses between any of the four Tat-null groups (one-way ANOVA: F(3,94) = 0.43, p = 0.73). Tat-null mice demonstrated significant cocaine-CPP both globally (two-way RM ANOVA: F(1,94) = 32.5, p < 0.0001) and in their respective groups (Table 1), but there was no significant difference in the cocaine-CPP response across groups regardless of induction with doxycycline and/or pretreatment with SRI-30827 (two-way RM ANOVA, F(3,94) = 0.22, p = 0.88). This suggests that, in the absence of Tat protein, this dose of SRI-30827 had no direct effect on cocaine-CPP (Figure 3).

Discussion and conclusions

Although both Tat and cocaine interact with DAT, cocaine competitively blocks the DAT uptake site, whereas Tat interacts with DAT in an allosteric, modulatory manner [5]. Together they produce a synergistic dysfunction of dopaminergic transmission, thought to contribute to the progression of HAND and to altered drug reward [6]. Consistent with these findings, 14-day Dox-treated iTat-tg mice displayed a 2.6-fold potentiation of cocaine-CPP compared to the response of saline-treated control littermates. Potentiation of cocaine-CPP was not observed in saline-treated iTat-tg or Dox-treated Tat-null mice, implicating Tat in the potentiated drug reward, consistent with earlier reports [6]. Treatment with the novel allosteric modulator of the DAT, SRI-30827, reversed Tat-induced potentiation of cocaine-CPP in Dox-treated iTat-tg mice, but had no effect on the magnitude of cocaine-CPP in control animals. These findings extend previous in vitro DAT binding and functional assay data with SRI-30827 [7], and are consistent with results observed with the related allosteric modulator, SRI-32743 [6]. Collectively, this work adds to a growing body of evidence that allosteric modulation of DAT may provide a promising therapeutic intervention for patients with comorbid HIV-1 and CUD. The iTat-tg mouse model has been instrumental in clarifying the CNS and behavioral effects of Tat alone, suggesting that exposure to HIV-1 Tat protein is sufficient to induce CNS dysfunction. However, other viral components not examined here may contribute to HAND and to the potentiation of drug reward in patients with HIV-1. Notably, mice infected with EcoHIV, an engineered virus carrying 9 of the 11 HIV regulatory proteins, exhibit elevated stress-induced reinstatement of cocaine-seeking behavior [10] and blunted extinction of cocaine preference [11]. Future testing of allosteric modulators in these models may elucidate the contribution of Tat to increased drug reward in the presence of other concomitant viral proteins, and may further characterize the therapeutic potential of allosteric modulators of the DAT.
A limitation of the current study lies in the poor solubility and brain penetration of SRI-30827. Circumventing these limitations, the current study utilized DMSO and i.c.v. administration under conditions known not to negatively impact place preference behavior [12]. However, structural modification of the SRI allosteric modulators to improve their solubility and penetration of the BBB would be expected to increase their clinical value. Confirming this are earlier reports in which the more druggable SRI-32743 produced effects similar to those of SRI-30827 here, but after systemic administration [6]. These compounds only ameliorate Tat-mediated DAT dysfunction. Alternatively, inhibiting the Tat protein itself might slow viral replication and the production of viral proteins, potentially ameliorating other off-target effects. Molecular modeling, computational screening, and some in vitro analyses have identified several candidate compounds termed "Tat antagonists" [13]. The natural product didehydro-cortistatin A (dCA) was found to prevent Tat-induced potentiation of cocaine-CPP in iTat-tg mice [12], and although the Tat antagonist Ro 24-7429 failed to show measurable effects even at high doses in Phase I clinical trials [14], work continues to develop Tat antagonists as therapeutics to eliminate the activities of Tat protein. Thus, these results demonstrate that DAT allosteric modulators like SRI-30827 and SRI-32743, which attenuate cocaine and Tat binding to DAT, may provide an early, effective intervention for symptomatic relief from neurocognitive deficits, along with substance abuse, in early-stage HIV-infected individuals.

Figure 1. CPP experimental design schematic. An initial, pre-conditioning preference was determined by measuring the amount of time individual mice spent in each chamber during a 30-min testing period. Mice were then treated with saline (0.9 %) or Dox (100 mg/kg) via i.p. injection for 14 days before the start of place conditioning. On days 15 and 16, mice were given saline (0.9 %) s.c. and consistently confined to a randomly assigned outer compartment, with half of each group in the right chambers and half in the left. Three hours later, mice were given a pretreatment of either SRI-30827 (1 nmol) or vehicle (DMSO) via i.c.v. injection. An hour after this pretreatment, mice underwent drug conditioning (with either saline or cocaine 10 mg/kg s.c.) and were confined to the opposite, "drug-paired" compartment for 30 min. Twenty-four hours after the completion of their two-day conditioning cycle, mice were tested for post-conditioning place preference by allowing them access to all compartments and measuring the time they spent in each chamber over a 30-min testing period.

Figure 2. iTat-tg cocaine and saline conditioned place preference. An initial, pre-conditioning preference was determined by measuring the amount of time individual mice spent in each chamber during a 30-min testing period. Mice were then treated with saline (0.9 %) or Dox (100 mg/kg) via i.p. injection for 14 days before the start of place conditioning. On days 15 and 16 (see Figure 1), mice were given saline (0.9 %) s.c. and confined to a randomly assigned outer compartment. Three hours later, mice were given a pretreatment of either SRI-30827 (1 nmol) or vehicle (DMSO) via i.c.v. injection. An hour after this pretreatment, mice underwent drug conditioning (with either saline or cocaine 10 mg/kg s.c.)
and were confined to the opposite, "drug-paired" compartment for 30 min. Twenty-four hours after the completion of their two-day conditioning cycle, mice were tested for post-conditioning preference by allowing them access to all compartments and measuring the time they spent in each chamber over a 30-min testing period, with the difference in time spent reported in seconds ± standard error of the mean (SEM). The number of mice in each treatment group is as listed in the figure. *p < 0.05 versus the matching saline-treated post-conditioning response; † p < 0.05 versus the post-conditioning response of Dox-treated mice administered i.c.v. vehicle prior to cocaine place conditioning.

Figure 3. Tat-null cocaine conditioned place preference. An initial, pre-conditioning preference was determined by measuring the amount of time individual mice spent in each chamber during a 30-min testing period. Mice were then treated with saline (0.9 %) or Dox (100 mg/kg) via i.p. injection for 14 days before the start of place conditioning. On days 15 and 16 (see Figure 1), mice were given saline (0.9 %) s.c. and confined to a randomly assigned outer compartment. Three hours later, mice were given a pretreatment of either SRI-30827 (1 nmol) or vehicle (DMSO) via i.c.v. injection. An hour after this pretreatment, mice underwent drug conditioning (with cocaine 10 mg/kg s.c.) and were confined to the opposite, "drug-paired" compartment for 30 min. Twenty-four hours after the completion of their two-day conditioning cycle, mice were tested for post-conditioning preference by allowing them access to all compartments and measuring the time they spent in each chamber over a 30-min testing period, with the difference in time spent reported in seconds ± standard error of the mean (SEM). The number of mice in each treatment group is as listed in the figure.

Table 1. iTat-tg and Tat-null conditioned place preference results. A series of two-tailed Student's t-tests (adjusted for multiple comparisons) was performed for each group to determine whether the mice displayed a difference in their place preference as a result of conditioning. Both initial (pre-CPP) and final (post-CPP) values for each group are reported as the mean difference in time spent (in seconds) in the drug-paired compartment, calculated by subtracting the time spent in the saline-paired compartment from the time spent in the drug-paired compartment over the 30-min testing period. Variability is reported as the standard error of the mean (SEM), in seconds (s). As expected, groups place conditioned with cocaine produced a significant place preference for the drug-paired compartment, while groups conditioned only with saline did not. Groups are labelled as: induction (saline 14d or Dox 14d), pretreatment (vehicle or SRI-30827), and drug place conditioning (cocaine PC or saline PC). Table columns: n; mean of pre-CPP (s); mean of post-CPP (s); difference (s); SEM of the difference (s); q-value.
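Table 1's q-values come from t-tests adjusted for multiple comparisons; the correction method is not stated in the text, so the sketch below assumes Benjamini-Hochberg FDR on synthetic paired data purely for illustration of the workflow.

```python
# Sketch of the per-group test behind Table 1: paired t-tests on pre- vs
# post-conditioning preference scores, corrected across groups to yield
# q-values. Data are synthetic; the Benjamini-Hochberg method is an
# assumption, since the original correction is not specified.
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
groups = ["sal/veh/coc", "sal/sri/coc", "dox/veh/coc", "dox/sri/coc"]
shifts = [120.0, 110.0, 280.0, 130.0]   # hypothetical CPP shifts (s)

pvals = []
for shift in shifts:
    pre = rng.normal(0.0, 40.0, size=12)
    post = pre + shift + rng.normal(0.0, 60.0, size=12)
    pvals.append(ttest_rel(post, pre).pvalue)

reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for g, p, q, r in zip(groups, pvals, qvals, reject):
    print(f"{g:12s} p={p:.4f} q={q:.4f} significant={r}")
```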
Comparative Analysis of Different Univariate Forecasting Methods in Modelling and Predicting the Romanian Unemployment Rate for the Period 2021–2022

Unemployment has risen as the economy has shrunk. The coronavirus crisis has affected many sectors in Romania, with some companies diminishing or even ceasing their activity. Forecasting the unemployment rate is of fundamental importance for future social policy strategies. The aim of this paper is to comparatively analyze the forecast performance of different univariate time series methods with the purpose of providing future predictions of the unemployment rate. To this end, several forecasting models (seasonal autoregressive integrated moving average (SARIMA), self-exciting threshold autoregressive (SETAR), Holt–Winters, ETS (error, trend, seasonal), and NNAR (neural network autoregression)) have been applied, and their forecast performance has been evaluated on both the in-sample data covering the period January 2000–December 2017, used for model identification and estimation, and the out-of-sample data covering the last three years, 2018–2020. The forecast of the unemployment rate covers the next two years, 2021–2022. Based on the in-sample forecast assessment of the different methods, the forecast measures root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percent error (MAPE) suggested that the multiplicative Holt–Winters model outperforms the other models. For the out-of-sample forecasting performance of the models, the RMSE and MAE values revealed that the NNAR model has better forecasting performance, while according to MAPE, the SARIMA model registers higher forecast accuracy. The empirical results of the Diebold–Mariano test at a one-step forecast horizon for the out-of-sample methods revealed differences in forecasting performance between SARIMA and NNAR, of which the NNAR model was considered the best for modeling and forecasting the unemployment rate.

Introduction

Unemployment is a socio-economic problem facing all the countries of the world, affecting both people's standard of living and the socio-economic status of nations. Unemployment is the result of poor demand in the economy; low demand implies a lower need for labor, which leads either to reduced working hours or to redundancies. Although unemployment is a consequence of fundamental change in an economy, its frictional, structural, and cyclical behavior contributes to its existence. The pandemic led to a large number of unemployed in Romania; in March, the unemployment rate rose to 4.6% compared to 3.9% in February 2020.

Literature Review

The phenomenon of unemployment is the result of dysfunctions of the economy in the field of employment, being present both in the period of transition to a market economy and in periods of economic growth [1]. Unemployment is a very important labor market issue, reflecting a mismatch between labor demand and supply. This indicator has major social and economic implications, being one of the factors examined in macroeconomic growth analyses and very important in comparing countries' economic performance from a labor perspective [2], affecting people's living standards and nations' socio-economic status. In this context, unemployment represents one of the biggest social problems in the world, being present in every country, with the intensity of the phenomenon differing according to the economic development of a society.
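Before continuing the review, it helps to pin down the three accuracy measures named in the abstract (RMSE, MAE, MAPE), since they recur throughout the model comparisons. The sketch below computes them from scratch; the series values are invented, not the Romanian data.

```python
# Forecast accuracy measures used throughout the paper, written out so their
# definitions are explicit. The actual/forecast values are illustrative.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

actual   = [4.1, 4.3, 4.6, 5.2]   # unemployment rate, % (illustrative)
forecast = [4.0, 4.4, 4.5, 5.0]
print(f"RMSE={rmse(actual, forecast):.3f}  "
      f"MAE={mae(actual, forecast):.3f}  "
      f"MAPE={mape(actual, forecast):.2f}%")
```

Note that MAPE scales errors by the actual values, which is why it can rank models differently from RMSE and MAE, exactly the disagreement between NNAR and SARIMA reported in the abstract.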
Population growth implies an increase in the workforce, with jobs being insufficient in the short term [3]. The adjustment of the economic structure, the education system, and the choice of specializations do not keep pace with the needs of economic restructuring; the professional skills of the rural labor force cannot satisfy the demand for jobs, aggravating the severity of unemployment. One of the solutions to this problem is the establishment of an early unemployment warning system, for which forecasting is absolutely necessary [4]. Forecasting the unemployment rate is very important for many economic decisions, especially the setting of related policies by the government. The unemployment rate is correlated with the economic development of a society; therefore, different forecasting techniques are used for its prediction, from the simple OLS (ordinary least squares) method to GARCH (generalized autoregressive conditional heteroskedasticity) models and neural networks. The econometric models often involve stationary time series, seasonality and trend analysis, and exponential smoothing, ranging from the simple OLS technique to ARIMA (autoregressive integrated moving average) models [5]. The ARMA and GARCH models were used by Chiros [6] to predict the unemployment rate in the UK; Parker and Rothman [7] modeled quarterly unemployment rates using the AR(2) model; and Power and Gasser [8] highlighted that the ARIMA (1,1,0) model has better forecasting performance for unemployment rates in Canada. Etuk et al. [9] indicated that the ARIMA (1,2,1) model is suitable for forecasting the unemployment rate in Nigeria. Rothman [10] used six nonlinear models for out-of-sample forecasting, Koop and Potter [11] used the threshold autoregressive (TAR) model for modeling and forecasting the monthly unemployment rate, and Proietti [12] used seven forecasting models (linear and nonlinear). Johnes [13] used autoregressive models, GARCH, SETAR (self-exciting threshold autoregressive) models and neural networks to predict the monthly unemployment rate in the United Kingdom, with the SETAR model registering the best results. Peel and Speight [14] also concluded that the SETAR model is better, in terms of root mean squared error (RMSE), than AR models. As an alternative to ARMA models, Gil-Alana [15] used an exponential Bloomfield spectral model to model unemployment in the UK, with the results indicating that this model is suitable for forecasting the phenomenon. Forecasting the unemployment rate in Italy, Naccarato et al. [16] used both official data and the Google Trends query rate, estimating two different models, ARIMA and VAR (vector autoregressive), with the VAR model registering a lower forecast error. The autoregressive integrated moving average (ARIMA) models were introduced by Box and Jenkins [17], who also developed the practical process for selecting the most suitable ARIMA model. ARIMA models are more reliable for short-term forecasts than for long-term forecasts [18]. For data with both seasonal and non-seasonal behavior, the SARIMA (seasonal autoregressive integrated moving average) model is used. The SARIMA model is an extension of the simple ARIMA models and has been used for inflation forecasting [19][20][21], exchange rate forecasting [22,23], tourist arrivals and income forecasting [24,25], as well as unemployment forecasting.
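As a hedged illustration of the Box-Jenkins workflow referenced above, the sketch below fits a SARIMA model to a simulated monthly series with statsmodels. The orders (1,1,1)(1,0,1)12 are placeholders rather than the specification estimated in the paper, and the series is synthetic, not the Romanian data.

```python
# Sketch of fitting a seasonal ARIMA to a monthly unemployment-like series.
# Orders are placeholders to be chosen via Box-Jenkins identification steps.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
idx = pd.date_range("2000-01-01", periods=216, freq="MS")   # 2000-2017
seasonal = 0.4 * np.sin(2 * np.pi * idx.month / 12)          # yearly cycle
trend = rng.normal(0.0, 0.2, len(idx)).cumsum() * 0.05       # slow drift
y = pd.Series(6.5 + seasonal + trend, index=idx, name="unemployment_rate")

model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
result = model.fit(disp=False)
print(result.summary().tables[1])        # estimated coefficients
print(result.forecast(steps=24))         # two-year-ahead point forecasts
```

In practice the in-sample period (here 2000-2017) would be used for identification and estimation, with the held-out years reserved for the out-of-sample accuracy comparison described in the abstract.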
The literature includes many studies on forecasting using ARIMA models, namely the Box–Jenkins methodology, which is widely used by researchers to project future unemployment rates [26]. Among them, Wong et al. [27] developed autoregressive integrated moving average (ARIMA) models in order to analyze and forecast important indicators in the Hong Kong construction labor market: employment level, productivity, unemployment rate, underemployment rate, and real wage. Ashenfelter and Card [28] analyzed unemployment, nominal wages, consumer prices, and the nominal interest rate using the autoregressive moving average model. Kurita [29] forecasted the unemployment rate using a fractionally integrated autoregressive moving average model, which performed much better than naive predictions. Predictions of the unemployment rate around the world using ARIMA-type models were made by Chih-Chou and Chao-Ton [30]; by Etuk et al. [22] and Nkwatoh [31] in Nigeria, using ARIMA and ARCH models; by Kanlapat et al. [32] in Thailand; by Nlandu et al. [33] in Barbados, using the seasonal autoregressive integrated moving average (SARIMA) model; by Dritsakis and Klazoglou [34] in the USA, using SARIMA and GARCH models; and by Didiharyono and Syukri [35] in South Sulawesi, using the ARIMA model. In the European Union, the unemployment rate is forecasted using Box–Jenkins and TRAMO/SEATS methods [36,37]. In European countries, the unemployment rate was predicted using the Box–Jenkins methodology in Germany using the ARIMA and VAR models [38], in the Czech Republic using the SARIMA model [39,40], in the German regions using a spatial GVAR model [41], in Greece, both as a dynamic process and as a static process, using SARIMA models [42,43], and in Slovakia using ARIMA and GARCH models [44]. Unemployment predictions using VAR were also realized by Kishor and Koenig [45], taking into account that data are subject to revisions. The accuracy of forecasts based on VAR models can be measured using the trace of the mean squared forecast error matrix or the generalized forecast error second moment [46], as well as transfer functions [47]; combined forecasts based on VAR models are a good strategy for improving prediction accuracy [48]. Wang et al. [49] used back propagation neural networks (BPNN) and the Elman neural network to predict the unemployment rate. Neural networks were also used by Peláez [50] to forecast the unemployment rate, together with econometric models. As the asymmetric behavior of the unemployment rate can be modeled using a nonlinear time series model, Skalin and Terasvirta [51] proposed the STAR (smooth transition autoregressive) model. Peel and Speight [14] forecasted the unemployment rate in the UK using self-exciting threshold autoregressive (SETAR) models and an autoregressive model, with the SETAR models registering better forecasting performance in terms of RMSE. Koop and Potter [11] used threshold autoregressive (TAR) models in order to forecast the US unemployment rate, and Johnes [13] forecasted the unemployment rate using AR(4), AR(4)-GARCH(1,1), SETAR(3,4,4), and a neural network, highlighting that SETAR is the best model. According to the international definition [52], the unemployed are people aged between 15 and 74 who simultaneously satisfy three conditions: they do not have a job, they are available to start work in the next two weeks, and they have been actively looking for a job at some point in the last four weeks.
The unemployment rate represents the share of the unemployed in the active population, where the active population of a country includes all persons who provide labor available for the production of goods and services during the reference period, comprising both the employed and the unemployed. Unemployment was first officially registered in Romania in 1991, and the first study to assess unemployment according to ILO standards was conducted in 1994 [1]. As is specific to a country in transition, unemployment in Romania was the result of enterprise restructuring and the contraction of production [53]. In the first period after 1990, although many new occupations appeared in Romania, the number of unemployed increased, with 1994 recording the highest registered unemployment rate [54]. In the period 1995–1996, the number of unemployed decreased by 46.28% and then increased significantly until 1999 due to socio-economic imbalances that arose from the closure of productive structures. After 1999, economic activities were restructured and privatized, especially in the case of large companies, leading to large layoffs but also to the emergence of new jobs, the result being a reduction in unemployment. Since 2000, employment in Romania has registered a continuous increase, with small fluctuations, leading to a reduction in unemployment [55]. In order to substantiate the macroeconomic policies in Romania, it is important and topical to forecast the labor supply, employment, and unemployment. In Romania, as in other European countries, unemployment is monitored and assessed very seriously. The most common method used to predict unemployment in Romania involves ARIMA models. Son et al. [56] analyzed the unemployment rate in the EU-27 countries, focusing on Romania, and concluded that the unemployment rate can be modeled using a linear autoregressive model. Other studies using ARIMA models to predict the unemployment rate in Romania were realized by Madaras [57], Bratu [58], and Simionescu [59], while Dobre and Alexandru used the VARMA and VAR models [60]; at the level of two Romanian counties (Brasov and Harghita), studies used the Box–Jenkins methodology and the NAR model based on artificial neural networks. Comparing the forecasted values with the officially recorded unemployment rate for the same period, it was noticed that toward the end of the period the differences between real and predicted values became larger for the NAR model than for the ARMA model; for medium-term forecasts, the predictions based on the ARMA model were more accurate. Other forecasts of the unemployment rate in Romania were realized by Bratu and Marin [61] using several techniques: econometric modeling, the exponential smoothing technique, and the moving average method; of these, predictions based on the exponential smoothing technique recorded the highest degree of accuracy. Voinegu et al. [62] predicted the unemployment rate using Holt's improved model, the monthly series being constructed and disseminated in three forms: adjusted, seasonally adjusted, and trend adjusted. Other predictions used the Kalman approach, the Kalman filter being appropriate for calculating the natural unemployment rate [63]. In the short term, Zamfir [64] modeled the unemployment rate using stochastic models. Simionescu [65] predicted the unemployment rate in Romanian counties using Internet data and official data, as well as a methodology consisting of different types of panel data models.
In the case of the quarterly unemployment rate, updated vector autoregressive (VAR) models and a Bayesian VAR model were used, with the VAR model exceeding the Bayesian approach in terms of forecast accuracy [66]. In order to analyze the dynamics of the unemployment rate in Eastern Europe, including Romania, Lukianenko et al. [67] constructed econometric regression models with nonlinearities due to discrete regime changes. Using the Markov switching model, regularities were captured by modeling the asymmetry of the unemployment rate during contractionary states, revealing the specifics of the labor market for each country and the differences in the flexibility of reactions to changes in the economic environment.

Data and Methodology

In order to determine the best model for forecasting the Romanian unemployment rate, we have investigated the monthly unemployment rate covering the period 2000M01 to 2020M12. The data were provided by Eurostat (European Union Labour Force Survey, EU-LFS). When choosing models, it is common practice to split the available data into two portions, training and test data, where the training data are used to estimate the parameters of a forecasting method and the test data are used to evaluate its accuracy. Therefore, the training set or "in-sample data" was set to the period 2000M01–2017M12, and the test set or "out-of-sample data" was set to the period 2018M01–2020M12. The forecast of the unemployment rate covers the next two years, 2021–2022. The main objective of the paper is to compare the forecasting potential of five models: exponential smoothing models (additive and multiplicative Holt–Winters (HW) models, and the ETS model), the SARIMA model, the neural network autoregression (NNAR) model, and the SETAR model, and to predict future values of the unemployment rate beyond the period under consideration. Therefore, the study derives the forecasting performance of the five models with a view to identifying the best-suited forecasting procedure for the monthly unemployment rate, taking into account the following steps:

1. Fit the additive and multiplicative Holt–Winters models on the training dataset
2. Fit the ETS model on the training dataset
3. Fit the SARIMA model on the training dataset
4. Fit the NNAR model on the training dataset
5. Fit the SETAR model on the training dataset
6. Compare the in-sample forecast accuracy measures for all the models
7. Compare the out-of-sample forecast accuracy measures for the models over the period January 2018 to December 2020
8. Compare the forecast projections of the unemployment rate for all models over the period January 2021 to December 2022.

Holt-Winters Method and ETS Models

We will start our technical demarche by introducing the class of exponential smoothing methods as widely used forecasting procedures, referring particularly to the Holt–Winters (HW) method, a commonly used forecasting method in time series analysis incorporating both trend and seasonal components, whether additive or multiplicative in nature. The additive method is preferred when the seasonal variations are roughly constant through the series, while the multiplicative method is preferred when the seasonal variations change proportionally to the level of the series. The Holt–Winters additive method can be written as follows:

$$L_t = \alpha\,(y_t - S_{t-s}) + (1-\alpha)(L_{t-1} + b_{t-1})$$
$$b_t = \gamma\,(L_t - L_{t-1}) + (1-\gamma)\,b_{t-1}$$
$$S_t = \delta\,(y_t - L_t) + (1-\delta)\,S_{t-s}$$
$$\hat{y}_{t+k} = L_t + k\,b_t + S_{t+k-s}$$

The Holt–Winters multiplicative method can be written as follows:

$$L_t = \alpha\,(y_t / S_{t-s}) + (1-\alpha)(L_{t-1} + b_{t-1})$$
$$b_t = \gamma\,(L_t - L_{t-1}) + (1-\gamma)\,b_{t-1}$$
$$S_t = \delta\,(y_t / L_t) + (1-\delta)\,S_{t-s}$$
$$\hat{y}_{t+k} = (L_t + k\,b_t)\,S_{t+k-s}$$

where t = 1, ..., n, s represents the length of seasonality (months), L_t represents the level of the series, b_t denotes the trend, and S_t the seasonal component [22]. The constants used for this model are α (level smoothing constant), γ (trend smoothing constant), and δ (seasonal smoothing constant).
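The paper does not state the software used; as a hedged illustration, the sketch below fits both Holt–Winters variants with Python's statsmodels (the file name unemployment_ro.csv and the column name ur are hypothetical stand-ins for the Eurostat series):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical file/column names standing in for the Eurostat monthly series.
ur = pd.read_csv("unemployment_ro.csv", index_col=0, parse_dates=True)["ur"]
train = ur["2000-01":"2017-12"]              # in-sample data
test = ur["2018-01":"2020-12"]               # out-of-sample data

fits = {}
for seasonal in ("add", "mul"):
    model = ExponentialSmoothing(train, trend="add", seasonal=seasonal,
                                 seasonal_periods=12)
    # fit() picks the smoothing constants by minimizing in-sample squared error
    fits[seasonal] = model.fit()

for name, fit in fits.items():
    fc = fit.forecast(len(test))
    rmse = np.sqrt(np.mean((test.to_numpy() - fc.to_numpy()) ** 2))
    print(f"HW-{name}: AICc={fit.aicc:.2f}, out-of-sample RMSE={rmse:.3f}")
```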
In order to choose the most adequate smoothing constants, we tested different values of the smoothing constants. The model is selected according to a certain forecast accuracy measure such as MAPE (the mean absolute percentage error), the best model being the one that registers the minimum value of MAPE. The ETS (error, trend, seasonal) models are time series models that underpin the exponential smoothing methods, consisting of a trend component (T), a seasonal component (S), and an error term (E). They are based on the error-trend-season framework of Hyndman, which defines an extended class of exponential smoothing methods with state-space likelihood calculations, supporting model selection and the computation of standard forecast errors [68]. The long-term movement is characterized by the trend term, the pattern with known periodicity is reflected by the seasonal term, and the error term represents the irregular, unpredictable component of the series. ETS models generate both point forecasts and prediction (or forecast) intervals. If the same values of the smoothing parameters are used, the point forecasts are identical, but the models will generate different prediction intervals. The individual components of an ETS specification may each be specified as absent (N), additive (A), or multiplicative (M). The automatic selection of the model is based on the ETS smoothing. For each ETS model, the likelihood and the forecast error can be calculated, comparing an information criterion based on the likelihood or an out-of-sample AMSE (the average mean square error estimator finds the parameter values and initial state values that minimize the average mean square error of the step forecasts of the specified ETS model) in order to determine the model that best fits the data or produces the most accurate forecasts. Automatic selection for unemployment rate forecasting within the ETS framework will be done using corrected Akaike Information Criterion (AICc) minimization.
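Continuing the earlier sketch, an AICc-driven selection loop approximates this automatic search; note that statsmodels' ETSModel covers a somewhat smaller model space than R's ets(), and some error/season combinations are inadmissible, hence the try/except:

```python
from itertools import product
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

best = None
for error, trend, seasonal in product(("add", "mul"),
                                      (None, "add"),
                                      (None, "add", "mul")):
    try:
        res = ETSModel(train, error=error, trend=trend, seasonal=seasonal,
                       seasonal_periods=12 if seasonal else None).fit(disp=False)
    except Exception:
        continue                  # inadmissible or non-convergent combination
    if best is None or res.aicc < best[0]:
        best = (res.aicc, error, trend, seasonal)

print("selected ETS specification:", best[1:])  # the paper reports ETS(M,N,M)
```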
The Neural Network Autoregression Model

Artificial neural networks are used to model complex nonlinear relationships between input variables and output variables. A neural network autoregression (NNAR) model takes lagged values of a time series as inputs and produces predicted values of the time series as output. The major difference of the NNAR method compared to the HW method is the absence of the restriction of stationary parameters. Considering the seasonality of the monthly unemployment rate, the specification of the neural network will be NNAR(p,P,k)_m, with the graphical representation given in Figure 1. By adding an intermediate layer with hidden neurons, the neural network becomes nonlinear; without the hidden layer, NNAR(p,P,0)_m reduces to SARIMA(p,0,0)(P,0,0)_m. The NNAR model represents a feedforward neural network, involving a linear combination function and an activation function. The linear combination function has the following form [70,71]:

$$z_j = b_j + \sum_{i=1}^{p} w_{i,j}\, y_{t-i}$$

The hidden layer has a nonlinear sigmoid function in order to issue the input for the next layer:

$$s(z) = \frac{1}{1 + e^{-z}}$$

In the case of NNAR(p,k), with p lagged inputs and k nodes in the hidden layer, the model uses lagged time series values as inputs to a feed-forward network with a single hidden layer. As a seasonal component is present in the data (m = 12), the last observed values from the same season are added as inputs, NNAR becoming NNAR(p,P,k)_12. The forecasting procedure is iterative: the one-step-ahead forecast uses historical inputs, while the two-steps-ahead forecast uses the one-step-ahead forecast together with the historical data.
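A rough Python analogue of this setup is sketched below with scikit-learn; unlike R's forecast::nnetar, which averages many networks, a single network is trained here, and the iterative feeding of forecasts back as inputs follows the procedure just described:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_design(y, lags=(1, 12)):
    """Build a lagged design matrix: one column per lag, aligned targets."""
    p = max(lags)
    X = np.column_stack([y[p - l:len(y) - l] for l in lags])
    return X, y[p:]

y = train.to_numpy()                     # `train` from the earlier sketch
X, target = make_design(y)
net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   max_iter=5000, random_state=0).fit(X, target)

# Iterative forecasting: each step feeds earlier forecasts back as inputs.
history = list(y)
for _ in range(24):                      # an illustrative 24-month horizon
    x = np.array([[history[-1], history[-12]]])
    history.append(float(net.predict(x)[0]))
forecasts = history[len(y):]
```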
Seasonal Autoregressive Integrated Moving Average (SARIMA) Model

Taking into account the seasonal pattern exhibited by the monthly unemployment rate, a seasonal process may be considered; therefore, the ARIMA model becomes a SARIMA model. The seasonal autoregressive integrated moving average (SARIMA) model is a generalized form of the ARIMA model that accounts for both seasonal and non-seasonal behavior. The SARIMA model is denoted ARIMA(p,d,q)(P,D,Q)_s and has the following specification based on the backshift operator [72,73]:

$$\phi_p(B)\,\Phi_P(B^s)\,(1-B)^d\,(1-B^s)^D\, Y_t = \theta_q(B)\,\Theta_Q(B^s)\,\varepsilon_t$$

where Y_t represents the time series value at period t, B denotes the backshift operator, ε_t is a sequence of i.i.d. variables (mean zero and variance σ²), s is the seasonal order, φ_i and Φ_j are the non-seasonal and seasonal AR parameters, θ_i and Θ_j are the non-seasonal and seasonal MA parameters, p, d, and q denote the non-seasonal AR, I, and MA orders, respectively, and P, D, and Q represent the seasonal AR, I, and MA orders, respectively. Like the Box–Jenkins methodology, the SARIMA model follows a five-step iterative procedure: identification, estimation, selection, diagnostics, and forecasting [34,60,69]. Before fitting a particular model to time series data, the stationarity of the series must be checked [74].

In order to identify whether the time series is stationary, the graphical representation of the series together with the correlogram of the series in level can be inspected, and the Bartlett test and the Ljung-Box test can be applied. In order to test whether the series has a unit root, the Augmented Dickey-Fuller and Phillips-Perron tests can be used. To obtain a stationary time series, the corresponding value of d is estimated: in the case of a series that is non-stationary in mean, the series is differenced, and in the case of a series that is non-stationary in variance, the series is log-transformed. In addition, the series needs to be tested for the presence of a structural break using the Zivot-Andrews test. The Zivot-Andrews endogenous structural break test is a sequential test that uses the full sample and a different dummy variable for each possible break date. The break date is selected where the t-statistic of the ADF (Augmented Dickey-Fuller) unit root test is at a minimum (most negative); consequently, a break date will be chosen where the null hypothesis of a unit root is rejected. The Zivot-Andrews test covers three scenarios: a structural break in the level of the series, a one-time change in the slope of the trend, and a structural break in both the level and the slope of the trend function of the series. Under the test, the null hypothesis assumes that the series y_t contains a unit root without any structural break, against the alternative that the series is a trend-stationary process with a one-time break occurring at an unknown point in time. Another important feature that needs to be investigated for a series exhibiting a seasonal pattern is the presence of a seasonal unit root, tested using the HEGY test [75]. The HEGY test is used in the case of seasonal and non-seasonal unit roots in a time series. A time series y_t is considered an integrated seasonal process if it has a seasonal unit root as well as a peak at any seasonal frequency in its spectrum other than the zero frequency. The test distinguishes between deterministic seasonality, which can be removed by seasonal adjustment, and stochastic seasonality, which refers to unit roots at the seasonal frequencies [76]. Once stationarity has been achieved, the identification stage involves determining the proper values of p, P and q, Q based on the correlogram of the stationary series (ACF and PACF plots). Checking the ACF and PACF plots, we should look at both the seasonal and non-seasonal lags. Usually, the ACF and the PACF have spikes at lag k and cut off after lag k at the non-seasonal level; they also have spikes at lag ks and cut off after lag ks at the seasonal level. The number of significant spikes suggests the order of the model [74]. An SAR signature usually occurs when the autocorrelation at the seasonal period is positive, whereas an SMA signature usually occurs when the seasonal autocorrelation is negative.
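The unit root checks and correlogram inspection described above might be scripted as follows, continuing the earlier sketch (statsmodels has no Phillips-Perron test, so only the ADF test is shown; the arch package offers a PP implementation):

```python
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

for name, series in (("level", train),
                     ("first difference", train.diff().dropna())):
    stat, pvalue = adfuller(series, autolag="AIC")[:2]
    print(f"ADF on {name}: stat={stat:.3f}, p={pvalue:.3f}")

diff = train.diff().dropna()
plot_acf(diff, lags=36)    # seasonal spikes at 12, 24, 36 hint at P and Q
plot_pacf(diff, lags=36)   # non-seasonal spikes hint at p
```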
In the model selection stage, we need to decide on the optimal model from several alternative estimated models when two or more models compete for the best description of the data. In order to make a decision, we can rely on the penalized information criteria: the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), and the Bayesian Information Criterion (BIC), choosing as the optimal model the one with the smallest values of AIC, AICc, and BIC. (The AICc includes a penalty that discourages overfitting, since increasing the number of parameters almost always improves the in-sample goodness of fit [72].) In the model estimation stage, the parameters of the chosen model are estimated using maximum likelihood estimation (MLE). The diagnostic checking stage investigates whether the estimated model or models are first validated according to the classical tests: the t-test for the statistical significance of the parameters and the F-test for the statistical validity of the model. Secondly, the main hypotheses on the model residuals need to be tested, showing that they are white noise, homoscedastic, and free of autocorrelation. The normality of the residuals is checked using the Jarque-Bera test, while the Ljung-Box test is applied for non-autocorrelation. When the variance of the residuals is not constant, conditional heteroskedasticity is one of the key problems likely to be encountered when fitting models. For checking autoregressive conditional heteroskedasticity (ARCH) in the residuals, the correlogram of the squared residuals and the ARCH-LM test can be used. If there is no ARCH in the residuals, the autocorrelations and partial autocorrelations should be zero at all lags, and the Q-statistics should be insignificant. If at this stage one of the hypotheses is invalidated, we need to return to the first stage and rebuild a better model. Otherwise, if the model passes this stage, the forecasting process can be implemented to predict future values based on the most reliable model validated in the previous stages. The final stage is forecasting, in order to project future time series values using the most suitable model according to the previous stages [43].
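A sketch of these residual diagnostics, applied here to the residuals of the earlier HW fit purely for illustration:

```python
import pandas as pd
from scipy.stats import jarque_bera
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

# `resid` stands for the residuals of whichever fitted model is being checked;
# here we reuse the multiplicative HW fit from the earlier sketch.
resid = pd.Series(fits["mul"].resid).dropna()

print(acorr_ljungbox(resid, lags=[12, 24]))      # p > 0.05: no autocorrelation
print("Jarque-Bera p =", jarque_bera(resid)[1])  # p > 0.05: normality holds
print("ARCH-LM p =", het_arch(resid)[1])         # p > 0.05: no ARCH effects
```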
SETAR Model

The SETAR model belongs to the more general class of threshold autoregressive (TAR) models and represents an extension of autoregressive models, its main advantage in modeling a time series being the higher flexibility of the parameters due to regime-switching behavior. This particular type of model allows the prediction of future values of the unemployment rate under the assumption that the behavior of the time series changes when the series switches regime, and this switching depends on the past values of the series. The model relies on an autoregressive model of lag order p in each regime and is denoted SETAR(k,p), where k is the number of thresholds (k + 1 regimes are assumed in the model) and p is the order of the AR(p) processes. Even though the process is assumed to be linear within each regime, the switching from one regime to another makes the overall process nonlinear [66,67,73]. The two-regime version of the SETAR model of order p, SETAR(2,p,d), is given by:

$$y_t = \begin{cases} \phi_0^{(1)} + \sum_{i=1}^{p(1)} \phi_i^{(1)} y_{t-i} + \varepsilon_t^{(1)}, & y_{t-d} \le \tau \\ \phi_0^{(2)} + \sum_{i=1}^{p(2)} \phi_i^{(2)} y_{t-i} + \varepsilon_t^{(2)}, & y_{t-d} > \tau \end{cases}$$

where $\phi_i^{(1)}$ and $\phi_i^{(2)}$ are the coefficients in the lower and higher regime, respectively, which need to be estimated; τ is the threshold value; p(1) and p(2) are the orders of the linear AR models in the low and high regime, respectively; y_{t-d} is the threshold variable governing the transition between the two regimes, d being the delay parameter, a positive integer (d < p); and $\varepsilon_t^{(1)}$ and $\varepsilon_t^{(2)}$ are sequences of independently and identically distributed random variables with zero mean and constant variance [77].

The main phases in setting up a SETAR model are the order selection of the model, based on AR(p) order identification together with the test for threshold nonlinearity; model identification, requiring the selection of the delay parameter d together with the location of the threshold value; model estimation and evaluation; and, as the last stage, forecasting the future values of the unemployment rate. Thus, the first stage in applying the SETAR model is to analyze the existence of nonlinear behavior, for which it is important to first determine the appropriate lag length of the autoregressive model AR(p) for the analyzed time series, a choice that can rely on the minimum value of AIC. Secondly, we test the existence of nonlinearity using the Tsay F test, the null hypothesis of linearity being rejected if the p-value of the test is smaller than the assumed significance level. Once nonlinearity in the time series is established, we can pass to the second stage, model identification, and consider a two-regime SETAR model with the order p of the autoregressive parts equal in both regimes, SETAR(2,p,d). In the third stage, the selection of the delay parameter together with the location of the threshold value is realized, taking into account that the possible values of d are less than the order p. Therefore, several SETAR models with different delay parameters and threshold values can be identified, and based on a grid search method, the best model is selected as the one with the smallest value of the residual sum of squares. The model is estimated using MLE, and then the adequacy of the selected model is evaluated based on diagnostic tests on the residuals. The ARCH-LM test is used for testing the hypothesis of constant variance, and the Breusch-Godfrey test for higher-order serial correlation in the residuals.

Forecasting Performance Comparison

In order to provide predictions of the future values of the unemployment rate based on past and present data and the analysis of trends, it is important to use both in-sample and out-of-sample measures of forecasting performance, even if the out-of-sample ones are known to offer more reliable results. Therefore, a model with good out-of-sample forecasting performance is picked as the best model. The forecasting performance of the models was evaluated on two sub-samples: the in-sample data, 2000M01–2017M12, used to estimate and identify the models and to provide in-sample forecasting performance, and the out-of-sample data, 2018M01–2020M12, used for analyzing the out-of-sample forecasting performance. Forecast accuracy offers valuable information about the goodness of fit of the forecasting model and shows the capacity of the model to predict future values of the unemployment rate. Three criteria have been used to evaluate the performance of the models on both the in-sample and the out-of-sample data: the root mean squared error (RMSE), the mean absolute error (MAE), and the mean absolute percent error (MAPE). The model with the smaller error statistics has the better forecast performance.
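The three accuracy measures, written out as plain functions (these are also what the comparison tables later in the paper rely on):

```python
import numpy as np

def rmse(actual, forecast):
    a, f = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((a - f) ** 2))

def mae(actual, forecast):
    a, f = np.asarray(actual), np.asarray(forecast)
    return np.mean(np.abs(a - f))

def mape(actual, forecast):
    a, f = np.asarray(actual), np.asarray(forecast)
    return 100 * np.mean(np.abs((a - f) / a))
```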
Another test used to check the existence of differences between the forecast accuracy of two models is the Diebold-Mariano test [78], whose null hypothesis assumes the absence of such a difference, against the alternative of a statistically significant difference between the forecast accuracy of the models.

Data and Empirical Results

We have used in the empirical analysis the ILO unemployment rate for Romania covering the period 2000M01–2020M12, summing up a total of 252 monthly observations. The data source is the Employment and Unemployment database of Eurostat. We used the period 2000M01–2017M12 as training data for model estimation and identification and the period 2018M01–2020M12 as test data, while the forecast projections have been made for the next two years, 2021–2022. The evolution of the unemployment rate revealed an oscillating trend, from peaks (8.1% in January 2001 and January–March 2002) to minimum levels (5% in September 2008). The winter months of the years 2000, 2001, and 2002 registered increases in unemployment due to the lack of jobs, with the year 2002 recording the highest values of the monthly unemployment rate (8.1%). A potential explanation could be the dismissals that took place as a result of the implementation of restructuring and privatization programs in different sectors of activity. The impasse in the general economic and social development of Romania, the low living standard, and the lack of future perspectives in the period 1998–2000 reactivated the migration phenomenon, causing many Romanians to look for a job in more developed countries. After 1998, however, illegal migration predominated, directed mainly to Italy and Spain. Compared to previous years, in 2004 the unemployment rate decreased; the number of persons entering unemployment was lower than in the previous year by 92,442 persons. The 278,080 unemployed recorded in 2004 came from the redundancies that took place as a result of restructuring and privatization programs in different sectors of activity; of these, only 67,042 people came from collective redundancies, the remaining 211,038 coming from current individual redundancies. Young people represent the best professionally trained age group in Romania, but also the one most exposed to unemployment, highlighting the brain-drain phenomenon. The decrease in the unemployment rate in the period 2002–2006 is due both to legal and illegal departures of persons to work abroad. Thus, in 2006, according to figures offered by Eurostat, it was estimated that over two million Romanians worked in the countries of Western Europe or other developed countries. The economic crisis of 2008 created another peak in the evolution of the unemployment rate, which registered values of 7.7%, 7.7%, and 7.9% in the first three months of 2010 and oscillated around this level until the first three months of 2015 (7.5%, 7.4%, and 7.2%). The unemployment rate in 2008 decreased compared to the previous year (6.4%), but during the economic crisis of 2008–2009 there was a substantial increase in the unemployment rate. Although the number of jobs in the economy was constantly decreasing, the unemployment rate was decreasing as well, the explanation of this paradox being given by the following:
1. Working abroad: according to official estimates, in the first nine months of 2010 the number of those who went to work abroad exceeded 380,000, of which 140,000 went on their own, 140,000 went through recruitment agencies, and 102,000 went through the NAE (National Agency for Employment).
2. Retirement of some of the employees: 70,000–80,000 people retire each quarter; therefore, 200,000–300,000 employees must be replaced annually. It is very likely that companies no longer replace some of the people who have retired, so that the number of employees can decrease without increasing the number of unemployed.
3. Undeclared work: in the second quarter of 2010, the number of undeclared workers increased by almost 100,000.

In recent years, the trend of the unemployment rate was continuously downward, with a minimum point in the first month of 2020 (3.8%); since February 2020, the unemployment rate has registered an ascending trend. The reversed trend was due to the high unemployment rate (18.5%) among young people (15–24 years) and to seasonality in the construction and tourism sectors. In 2019, the unemployment rate decreased to 3.9%, compared to 4.2% in 2018, affecting to a greater extent the graduates of lower and secondary education, for whom the rate was 6.3% and 4%, respectively, according to data from the National Institute of Statistics (NIS). On the other hand, the unemployment rate for people with higher education was much lower, at 1.6% in 2019. In 2020, in the context of the coronavirus crisis, the unemployment rate started to increase in February, with the adoption of safety measures, reaching 5.2% in May, the highest level since 2017. According to the NIS, the number of unemployed people exceeded 460,000, over 110,000 more than in the same period of the previous year. In August, the unemployment rate decreased by 0.1 points compared to the previous month, but it increased by 1.5 points compared to the same month of the previous year; thus, August was the first month since the beginning of the COVID-19 pandemic on Romanian territory in which the unemployment rate registered a decrease. In March, the unemployment rate was 4.6%. In autumn, in October 2020, the unemployment rate increased by 0.2 points compared to the previous month (5.1%), with unemployment among men higher than among women by 0.5 percentage points, according to the NIS. Unfortunately, youth unemployment (18–24 years) is approaching 20%. As for the number of unemployed, Romanians looking for a job numbered 477,000, over 100,000 more than in October of the previous year. In January–October 2020, the average unemployment rate stood at 4.9%, up 1.1 percentage points year-on-year, an evolution determined by the incidence of the health crisis (and the consequences of this unprecedented shock), partially offset by the implementation of an unprecedentedly relaxed mix of economic policies. Figure 2 depicts the evolution of the monthly unemployment rate over the period 2000–2020, revealing seasonal fluctuations with peaks in the last and the first months of the year; the clear seasonal component in the data is confirmed also by the autocorrelation plot (Figure 3).
Holt-Winters Results

The empirical results of the Holt-Winters additive and multiplicative models revealed that, because both models have exactly the same number of parameters to estimate, the training RMSE of the two models can be compared directly, showing that the method with multiplicative seasonality fits the data best. In addition, based on the information criteria (AIC, AICc, or BIC), the optimal model is also the multiplicative version of HW. Table 1 gives the results of both the in-sample and the out-of-sample forecast accuracy measures of the Holt-Winters methods for the unemployment rate. According to the RMSE measure, the multiplicative model performs better than the additive one, while based on the other forecast accuracy measures (MAPE, MASE, or MAE), the optimal model is the additive one, which registered the minimum values (Table 2).
Analyzing the evolution of the monthly unemployment rate forecast for the period 2021–2022, it can be highlighted that the forecast projections tend to underevaluate the actual series, not capturing the impact of the pandemic, and reveal a downward trend in both cases, more accentuated in the case of the multiplicative model (Figure 4).

ETS Models Results

In the process of obtaining a reliable forecast of the monthly unemployment rate, the ETS automatic selection framework, based on minimizing the AICc, revealed the optimal model to be an ETS(M,N,M), with multiplicative error, no trend, and multiplicative seasonality. The empirical results highlighted that on the training dataset the ETS model produces better results in comparison with the HW additive or multiplicative methods (Table 3: the empirical results of ETS (error, trend, seasonal) models for the forecast of the unemployment rate). The ETS(M,N,M) model provides different point forecasts from the multiplicative Holt-Winters method because the parameters have been estimated differently, the default estimation method being maximum likelihood rather than minimum sum of squares (Table 4).

NNAR Model

In order to fit the NNAR model, the series of the unemployment rate has been explored on the training dataset to identify the order of the AR term present in the data, using the correlogram of the series. Based on the ACF and PACF plots, a pure AR(1) process can be highlighted for the non-seasonal part. Analyzing the ACF plot, the decaying spikes at every 12-month interval indicate a seasonal component present in the data (Figure 6). As the autocorrelation at the seasonal period (ACF at lag 12) is positive, an autoregressive model for the seasonal part should be considered; therefore, the order P was set to 1.
Therefore, an NNAR(1,1,k)_12 model is fitted, and the in-sample and out-of-sample root mean square error (RMSE), mean absolute error (MAE), mean absolute scaled error (MASE), and mean absolute percentage error (MAPE) are provided in Table 5 for k = 1, ..., 14. The selection of the best model relied on the lowest values of all the forecast accuracy measures (RMSE, MAE, MAPE, and MASE), but especially on the values of MAPE and MASE, which are scale independent, used to compare forecast accuracy across series on different scales, and seen as appropriate measures when the out-of-sample data are not of the same length as the in-sample data. Based on the results in Table 5, MASE and MAPE are lower on the training dataset with 12 nodes in the hidden layer, whereas the out-of-sample MASE and MAPE are lower with 10 nodes in the hidden layer. Therefore, we can consider the model NNAR(1,1,10)_12 as the best choice. The forecast of the unemployment rate based on the NNAR(1,1,10)_12 model revealed a downward trend with a peak in September 2018 (4.43%) and a forecast value for 2021–2022 oscillating around 4.35% (Figure 7: forecasts from a neural network with one seasonal and one non-seasonal lagged input and one hidden layer containing ten neurons).

SARIMA Model

For fitting a SARIMA model, we used data covering the period January 2000 to December 2017. The descriptive statistics of the unemployment rate for the training dataset are displayed in Figure 8. The series exhibited a strong seasonal pattern over the horizon 2000–2017.
Testing for Non-Stationarity

In order to fit a suitable time series model, stationarity needs to be investigated based on the Augmented Dickey-Fuller and Phillips-Perron tests. The graphical inspection of the autocorrelation and partial autocorrelation plots of Romania's monthly unemployment rate (Figure 9) revealed that the values of the autocorrelation coefficients decrease slowly, pointing out a non-stationary series with a relatively stable seasonal pattern. The time-series plot of the first difference of the series highlighted that the first difference of the unemployment rate seems to be a stationary-in-mean time series; therefore, the original series is a non-stationary time series. This information is also confirmed by the empirical results of the Bartlett and Ljung-Box tests. Diagram (b) of Figure 9 indicates that possible stationarity exists in first differences. Alternatively, we investigated the presence of unit roots by applying the Augmented Dickey-Fuller and Phillips-Perron tests, initially to the series in level and then to the series in first differences. The empirical results for the unemployment rate are displayed in Table 6 (unit root analysis of the Romanian unemployment rate), indicating that the series is stationary in first differences, being integrated of order 1. The next step was to test for the presence of a structural break around 2009 (visible in Figure 10), taking into account that the presence of a structural break would invalidate the results of the unit root tests. Therefore, the Zivot-Andrews test has been used, the empirical results revealing that there is not enough evidence to reject the null hypothesis that unemployment has a unit root, whether with a structural break in trend or in both intercept and trend (Table 7). Thus, the empirical results proved that the unemployment rate is non-stationary and integrated of order 1, I(1).
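Recent versions of statsmodels expose a Zivot-Andrews test; a sketch of the three break scenarios on the training sample (assuming the `train` series from the earlier sketches):

```python
from statsmodels.tsa.stattools import zivot_andrews

# regression='c' allows a break in intercept, 't' in trend, 'ct' in both,
# matching the three scenarios described in the methodology.
for reg in ("c", "t", "ct"):
    stat, pvalue, crit, baselag, bpidx = zivot_andrews(train, regression=reg)
    print(f"ZA ({reg}): stat={stat:.3f}, p={pvalue:.3f}, break index={bpidx}")
```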
However, because the series of unemployment exhibits a seasonal pattern over the training period, the study uses a seasonal ARIMA model instead of a non-seasonal one; it is therefore necessary to check whether the seasonality needs to be differenced, testing whether stochastic seasonality is present within the data. The empirical results of the HEGY test revealed the rejection of a seasonal unit root and the acceptance of only a non-seasonal unit root (Table 8; the HEGY test was applied taking into account intercept, trend, and seasonal dummies, with the maximal number of lags set to eight following the Schwarz criterion and 1000 simulations). Therefore, seasonal differencing is not needed, and we can conclude that the unemployment rate is a non-stationary series without stochastic seasonality, integrated of order 1. Thus, the unemployment rate will be modeled in first differences within the SARIMA model and the self-exciting threshold autoregressive (SETAR) model.

Identification of the Model

For the first difference of the unemployment rate, model identification implies finding proper values of p, P, q, and Q using the ACF and PACF plots, the seasonal part of an AR or MA model being visible in the seasonal lags. The ACF plot has spikes at lags 4 and 6 and an exponential decay starting from seasonal lag 12, suggesting a potential non-seasonal MA component, MA(4) or MA(6). The PACF plot shows that lags 4, 6, and 12 are significant, capturing potential non-seasonal AR components together with a seasonal AR(1) (Figure 11). In our case, because the autocorrelation at the seasonal lags (12, 24) is positive, a combination of seasonal and non-seasonal autoregressive models can be identified. Thus, several models have been specified, and based on AIC and BIC together with the goodness-of-fit measures, the best model has been identified, taking into account the lowest values of AIC and SBC, as sketched below.
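One way to reproduce such a specification search is a small AIC-driven grid over candidate orders; the ranges below are illustrative choices, not the paper's exact search space:

```python
import itertools
import warnings
from statsmodels.tsa.statespace.sarimax import SARIMAX

best = None
for p, q, P, Q in itertools.product(range(3), range(7), range(2), range(2)):
    try:
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            res = SARIMAX(train, order=(p, 1, q),
                          seasonal_order=(P, 0, Q, 12)).fit(disp=False)
    except Exception:
        continue                              # skip non-convergent candidates
    if best is None or res.aic < best[0]:
        best = (res.aic, (p, 1, q), (P, 0, Q, 12))

print("best specification by AIC:", best[1], best[2])
```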
The best model was an ARIMA(0,1,6)(1,0,1)_12, selected based on the minimum values of AIC and SBC (Table 9).

Model Estimation

Based on the model identified in the previous stage, we can proceed to the model estimation phase using the maximum likelihood (ML) method, the empirical results being presented in Table 10 (estimates of the parameters of SARIMA(0,1,6)(1,0,1)_12). All coefficients are statistically significant at the 10% significance level. Apart from the classical tests, the t-test for the statistical significance of the parameters and the F-test for the validity of the model, the selection of the best model also depends on the behavior of the residuals. For that, the series of residuals has been investigated for white noise. The empirical results of the Ljung-Box test show that the p-values of the test statistic exceed the 5% level of significance for all lag orders, which implies that there is no significant autocorrelation in the residuals (Figure 12). For checking autoregressive conditional heteroskedasticity (ARCH) in the residuals, the ARCH-LM test has been used, and the empirical results confirmed that there is no ARCH effect in the residuals (Table 11). Therefore, we can conclude that the residuals are not autocorrelated and do not exhibit ARCH effects, the SARIMA(0,1,6)(1,0,1)_12 model being reliable for forecasting (Table 12). The forecast of the unemployment rate based on the ARIMA(0,1,6)(1,0,1)_12 model revealed a downward trend, with forecast values for 2021–2022 oscillating around 3–4% (Figure 13).
Self-Exciting Threshold Autoregressive (SETAR) Model

In fitting a SETAR model for the Romanian unemployment rate, the first stages require the identification of the autoregressive order and testing for the existence of nonlinear thresholds. The autoregressive order has been identified based on the PACF plot. Following Desaling [74], we explored the unemployment rate in level for identifying the lag autoregressive order, since the non-stationarity of the unemployment rate does not cause non-stationarity of the nonlinear thresholds in the SETAR model, even if the existence of a unit root in one regime can occur. Significant spikes can be observed at lags 1, 7, and 13 (Figure 14). At these lags, we tested for the presence of nonlinear thresholds by applying the Tsay test of threshold nonlinearity, the empirical results being presented in Table 13 and revealing that there is enough evidence to reject the null hypothesis of no nonlinear threshold at autoregressive orders 1, 7, 8, 9, 10, 11, 12, and 13, the p-values being mostly less than 1%. Therefore, at these lags, the SETAR model is better than the simple AR model. For the lags exhibiting a nonlinear threshold, we used the lowest values of AIC to select the optimal model on which to build the SETAR specification. Thus, an AR(13) with possible values of the delay parameter d = 1, ..., 12 < p has been used in setting up the SETAR model, the threshold being located through the grid search sketched below.
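Python has no standard SETAR estimator (R's tsDyn::setar is the usual tool), so the sketch below hand-rolls the two-regime grid search over the delay d and the threshold tau under the stated assumptions:

```python
import numpy as np

def setar_grid_search(y, p=13, delays=range(1, 13), trim=0.15):
    """Two-regime SETAR: pick (d, tau) minimizing the pooled residual SSR."""
    y = np.asarray(y, dtype=float)
    # Design matrix with intercept and p lags, shared by both regimes.
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i:len(y) - i] for i in range(1, p + 1)])
    target = y[p:]
    best = None
    for d in delays:
        z = y[p - d:len(y) - d]                  # threshold variable y_{t-d}
        for tau in np.quantile(z, np.linspace(trim, 1 - trim, 50)):
            lo, hi = z <= tau, z > tau
            if lo.sum() <= X.shape[1] or hi.sum() <= X.shape[1]:
                continue                         # regime too small to fit
            ssr = 0.0
            for mask in (lo, hi):                # separate OLS per regime
                beta, *_ = np.linalg.lstsq(X[mask], target[mask], rcond=None)
                ssr += float(np.sum((target[mask] - X[mask] @ beta) ** 2))
            if best is None or ssr < best[0]:
                best = (ssr, d, tau)
    return best                                  # (SSR, delay, threshold)

print(setar_grid_search(train.to_numpy()))
```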
After the estimation stage, the residuals of the model have been checked for goodness of fit, verifying them for serial autocorrelation, constant variance, and zero mean based on the ARCH-LM and Breusch-Godfrey tests. With p-values greater than the 1% significance level, we can conclude that the residuals are not autocorrelated and have constant variance (Table 15). The forecast of the unemployment rate based on the results of the SETAR(2,13,1) model (Table 16) revealed an upward trend, overestimating the phenomenon (Figure 16).

Comparison of Models' Forecasting Performance

Analyzing the forecasting performance of all models for the in-sample dataset based on RMSE, MAE, and MAPE, as well as on the results of the Diebold-Mariano test, it can be observed that all three criteria suggested that multiplicative HW registered the best forecast performance for the training dataset. The p-value of the Diebold-Mariano test highlighted the existence of differences in forecast accuracy between almost all models, with the exception of multiplicative HW and ETS, for which the probability, being higher than 10%, does not provide enough evidence to reject the null hypothesis (Table 17). The out-of-sample forecasting performance of the models has been assessed with a one-step-ahead recursive method. Based on the RMSE and MAE values, the NNAR model has better forecasting performance, while MAPE indicates that the SARIMA model registers the higher performance. For the out-of-sample data, the empirical results of the DM test pointed out differences in the predictive power of almost all models, with the exception of multiplicative HW and NNAR, for which the p-value is greater than 10%, so the null hypothesis cannot be rejected (Table 18).
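As a hedged illustration of how such comparisons can be computed, the sketch below derives RMSE, MAE, and MAPE for two competing forecasts and a basic Diebold-Mariano statistic at horizon one. The file names and the use of squared-error loss are assumptions; the paper does not state its loss function.

```python
# Illustrative accuracy metrics and a basic Diebold-Mariano (DM) test
# for equal predictive accuracy at forecast horizon h=1 (assumed inputs).
import numpy as np
from scipy import stats

def accuracy(actual, forecast):
    e = actual - forecast
    return {"RMSE": np.sqrt(np.mean(e ** 2)),
            "MAE": np.mean(np.abs(e)),
            "MAPE": 100 * np.mean(np.abs(e / actual))}

def diebold_mariano(actual, f1, f2):
    """DM statistic under squared-error loss. For h=1 the long-run
    variance reduces to the sample variance of the loss differential."""
    d = (actual - f1) ** 2 - (actual - f2) ** 2
    T = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / T)
    pval = 2 * (1 - stats.norm.cdf(abs(dm)))    # asymptotically N(0,1)
    return dm, pval

actual = np.loadtxt("test_actual.txt")          # hypothetical hold-out sample
f_nnar = np.loadtxt("fc_nnar.txt")
f_sarima = np.loadtxt("fc_sarima.txt")

print("NNAR:  ", accuracy(actual, f_nnar))
print("SARIMA:", accuracy(actual, f_sarima))
print("DM stat, p-value:", diebold_mariano(actual, f_nnar, f_sarima))
```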
Analyzing comparatively the forecast performance of all methods during the period 2018-2022 and taking into account the presence of the pandemic shock, it is worth mentioning that ETS and multiplicative HW are the methods that best capture the pandemic shock from 2020, offering forecast values relatively close to the real values of the unemployment rate during the pandemic (Figure 17). Based on the methods offering the best results for out-of-sample forecasting, NNAR and SARIMA, the forecast values of the unemployment rate for the period 2021-2022 have been examined, revealing the existence of a slight difference between them (Figure 18). According to NNAR, the predicted value of the unemployment rate for January 2021 is estimated at 4.35%, compared with 5% in December 2020, and over the whole period, the forecast values oscillate around 4.35%. On the other hand, the forecast values based on the SARIMA model revealed a predicted value of 4.22% for the unemployment rate in January 2021 and highlighted a descending trend over the 2021-2022 horizon, with a predicted value of 3.54% in December 2022. An alternative for improving the forecast accuracy is to average the forecasts resulting from these two methods, which are considered the most suitable for the modeling and forecasting of the unemployment rate.

Conclusions

Making predictions about the unemployment rate, one of the core indicators of the Romanian labor market with a fundamental impact on the government's future social policy strategies, is of great importance, especially in this period of a major economic shock caused by the pandemic. In this context, the aim of this research has been to evaluate the forecasting performance of several models and to produce future values of the unemployment rate for the period 2021-2022 using the most suitable results.
In order to do that, we have employed exponential smoothing models, both additive and multiplicative Holt-Winters (HW) models together with an ETS model, the SARIMA model, the neural network autoregression (NNAR) model, and the SETAR model, which allow a nonlinear behavior and a switching regime in the time series to be taken into account and future values of the unemployment rate to be predicted beyond the period under consideration. The empirical results revealed a non-stationary, nonlinear, and seasonal pattern in the unemployment rate data. The out-of-sample forecasting accuracy of the models based on the performance measures RMSE and MAE pointed to the NNAR model as performing better, while MAPE indicated SARIMA as having the best performance. The empirical results of the Diebold-Mariano test at a forecast horizon of one for the out-of-sample methods revealed differences in the forecasting performance between SARIMA and NNAR; of these, the NNAR model was considered the best for modeling and forecasting the unemployment rate.

Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found here: https://ec.europa.eu/eurostat/en/web/products-datasets/-/UNE_RT_M (accessed on 15 January 2021).

Conflicts of Interest: The authors declare no conflict of interest.
2021-03-29T05:17:20.309Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "8e4f79b80b2959c004d12f5c6961d8668a0843b8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/23/3/325/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8e4f79b80b2959c004d12f5c6961d8668a0843b8", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
253021046
pes2o/s2orc
v3-fos-license
Subinhibitory Concentrations of Antibiotics Alter the Response of Klebsiella pneumoniae to Components of Innate Host Defense ABSTRACT Carbapenem-resistant Klebsiella pneumoniae isolates classified as multilocus sequence type 258 (ST258) are a problem in health care settings in many countries globally. ST258 isolates are resistant to multiple classes of antibiotics and can cause life-threatening infections, such as pneumonia and sepsis, in susceptible individuals. Treatment strategies for such infections are limited. Understanding the response of K. pneumoniae to host factors in the presence of antibiotics could reveal mechanisms employed by the pathogen to evade killing in the susceptible host, as well as inform treatment of infections. Here, we investigated the ability of antibiotics at subinhibitory concentrations to alter K. pneumoniae capsular polysaccharide (CPS) production and survival in normal human serum (NHS). Unexpectedly, pretreatment with some of the antibiotics tested enhanced ST258 survival in NHS. For example, a subinhibitory concentration of mupirocin increased survival for 7 of 10 clinical isolates evaluated and there was increased cell-associated CPS for 3 of these isolates compared with untreated controls. Additionally, mupirocin pretreatment caused concomitant reduction in the deposition of the serum complement protein C5b-9 on the surface of these three isolates. Transcriptome analyses with a selected ST258 isolate (34446) indicated that genes implicated in the stringent response and/or serum resistance were upregulated following mupirocin treatment and/or culture in NHS. In conclusion, mupirocin and/or human serum causes changes in the K. pneumoniae transcriptome that likely contribute to the observed decrease in serum susceptibility via a multifactorial process. Whether these responses can be extended more broadly and thus impact clinical outcome in the human host merits further investigation. IMPORTANCE The extent to which commensal bacteria are altered by exposure to subinhibitory concentrations of antibiotics (outside resistance) remains incompletely determined. To gain a better understanding of this phenomenon, we tested the ability of selected antibiotics (at subinhibitory concentrations) to alter survival of ST258 clinical isolates in normal human serum. We found that exposure of ST258 to antibiotics at low concentrations differentially altered gene expression, capsule production, serum complement deposition, and bacterial survival. The findings were isolate and antibiotic dependent but provide insight into a potential confounding issue associated with ST258 infections. The majority of KPC-producing clinical isolates in the United States are classified by multilocus sequence typing (MLST) as sequence type 258 (ST258) (9). The ST258 lineage is widely distributed and abundant in Europe, North America, and South America (4). Genetic characterization of ST258 isolates in the United States revealed two predominant capsular polysaccharide (CPS) types (CPS1 and CPS2, now known as KL106 and KL107, respectively), encoded by cps1 and cps2 loci (10). The basis for the success of ST258 and related clones outside antibiotic resistance remains incompletely determined, although CPS likely plays an important role. In general, K. pneumoniae CPS contributes to the often-observed resistance to serum bactericidal activity in vitro (11). Isogenic noncapsulated strains and capsule-defective strains are more susceptible to serum bactericidal activity than wild-type K. 
pneumoniae strains (12,13), and increased serum killing of strains lacking CPS is in part attributed to increased complement deposition on the bacterial surface (12,14,15). Bactericidal activity of serum is also enhanced by CPS-specific antibodies (13,16), which may in part explain the varied survival of ST258 clinical isolates in human blood and serum in vitro (17). Antibiotics at subinhibitory concentrations can serve as signal molecules, directing changes in gene expression that result in altered virulence and physiology (18-22). Therefore, exposure of bacteria to subinhibitory concentrations of antibiotics has the potential to impact treatment negatively. To better understand the ability of ST258 to cause human infections, we tested the ability of antibiotics to alter bacterial susceptibility to normal human serum (NHS).

RESULTS

Subinhibitory concentrations of antibiotics alter ST258 survival in human serum. We first evaluated the ability of subinhibitory concentrations of selected antibiotics to alter ST258 survival in NHS. Antibiotics were selected based on mechanism of action and/or use as a primary therapeutic agent for K. pneumoniae infections (see Tables S1 and S2 in the supplemental material). The 10 clinical isolates tested were recovered from patients with bacteremia and/or a wound infection and were assigned to KL106 (CPS1) and KL107 (CPS2) capsule subclades (Table 1). Pretreatment with five of the antibiotics tested, doxycycline, colistin, mupirocin, rifampin, and tigecycline, caused increased survival in NHS for one or more of the isolates evaluated in these assays (Table 2). Unexpectedly, subinhibitory concentrations of mupirocin caused increased survival in NHS for 7 of the 10 isolates (Table 2). Collectively, these data provide evidence that subinhibitory concentrations of antibiotics can enhance survival of ST258 in NHS.

Decreased surface deposition of serum complement following exposure to mupirocin. We next investigated the mechanism underlying increased survival of ST258 in NHS following exposure to mupirocin, the antibiotic that altered survival for the greatest number of isolates. We first measured the deposition of serum complement C5b-9 on the bacterial surface by using flow cytometry (Fig. 1). ST258 isolates 34446 (CPS1) and 35106 (CPS2) were selected for the initial experiments, but we ultimately evaluated complement deposition with all 10 ST258 clinical isolates. Doxycycline and vancomycin were used as controls for these assays because they had no significant impact on the survival of 34446 or 35106 in NHS (Table 2). There was a significant decrease in deposition of C5b-9 on the surface of mupirocin- and/or doxycycline-treated 34422, 34446, and 35106 compared with untreated control bacteria (P < 0.05) (Fig. 1). In comparison, vancomycin failed to alter surface deposition of C5b-9 on these clinical isolates (Fig. 1). Overall, there was not necessarily a direct correlation between decreased C5b-9 surface deposition and survival in these assays, since only three of the seven ST258 clinical isolates whose survival was increased following pretreatment with mupirocin had corresponding decreased complement deposition.

Mupirocin treatment increases cell-associated CPS production. Inasmuch as CPS has been demonstrated previously to protect K. pneumoniae from killing by serum complement (13,23,24), we tested the ability of mupirocin to alter CPS production by ST258.
To determine if CPS contributes to survival of the mupirocin-treated bacteria, we isolated cell-associated CPS and measured uronic acid content following exposure to subinhibitory concentrations of antibiotic (Fig. 2). Compared with untreated bacteria, CPS production was increased significantly in the three clinical isolates that had corresponding decreases in surface complement deposition (i.e., isolates 34422, 34446, and 35106) after pretreatment with mupirocin (P < 0.01) (Fig. 2A). On the other hand, increased cell-associated CPS failed to correlate with decreased C5b-9 surface deposition for 34446 and 35106 following exposure to doxycycline (Fig. 1 and Fig. 2A). Consistent with increased CPS production, transmission electron microscopy (TEM) analysis with isolate 34446 indicated that mupirocin pretreatment increased the thickness of the CPS layer compared with that of untreated control bacteria or bacteria pretreated with doxycycline or vancomycin (Fig. 2B and C). Collectively, these data support the idea that mupirocin enhances survival of some ST258 clinical isolates (i.e., 34422, 34446, and 35106) in NHS in part by eliciting increased CPS production. These findings are consistent with those of Álvarez et al., who reported that the amount of CPS rather than K-type is important for K. pneumoniae resistance to complement-mediated killing (24).

Mupirocin and/or human serum alters K. pneumoniae gene expression. To gain insight into the molecular mechanisms used by ST258 to survive in NHS, we used transcriptome sequencing (RNA-Seq) to measure changes in the 34446 transcriptome during culture in NHS ± pretreatment with doxycycline or mupirocin (Fig. 3). We selected 34446 as a representative ST258 clinical isolate for RNA-Seq experiments because it had a strong survival phenotype following exposure to mupirocin, and it was also not feasible to conduct these experiments with multiple clinical isolates. Principal-component analysis (PCA) was used as a first step to evaluate RNA-Seq data based on sources of experimental variance (Fig. 3A). Data obtained from bacteria exposed to subinhibitory concentrations of doxycycline ± NHS clustered with control bacteria not exposed to antibiotic (LB and LB plus NHS) (Fig. 3A). On the other hand, bacteria cultured in NHS were separated clearly by PCA from those not cultured in NHS, regardless of the antibiotic pretreatment (Fig. 3A). In addition, data from bacteria treated with mupirocin ± NHS clustered in groups separate from control bacteria or those treated with doxycycline (Fig. 3A). Collectively, these results suggest that NHS and/or subinhibitory concentrations of mupirocin but not doxycycline elicited significant changes in 34446 gene expression. Therefore, we analyzed data obtained from mupirocin-treated bacteria ± NHS in more detail (Fig. 3B). First, culture in NHS alone caused significant changes in the ST258 transcriptome, including upregulation of genes involved in CPS biosynthesis and serum fitness (Fig. 3B). The finding that arnDEF, arnT, wcaJ, wecB, and wzc were upregulated by ST258 during culture in NHS is consistent with a recent study by Short et al., who used a transposon library to identify ST258 genes involved in serum resistance (25). Although mupirocin alone (no NHS) elicited changes in gene expression that were more limited in scope than those elicited by NHS (Fig. 3B, second column), the data are consistent with the increased CPS production (Fig. 2B).
That is, treatment with mupirocin alone caused upregulation of transcripts involved in CPS biosynthesis (wzy, wcaJ, and ugd) and those encoding glycosyl transferases (Fig. 3B). In addition, glpD, pyrB, and pyrC, genes involved in K. pneumoniae capsule regulation and serum fitness, were upregulated after pretreatment with mupirocin alone (Fig. 3B) (26-28). csrD was downregulated following exposure to mupirocin, and mutation of csrD increases K. pneumoniae survival in serum (Fig. 3B) (25,28). Collectively, these data provide support to the idea that the ability of subinhibitory concentrations of mupirocin to alter bacterial gene expression underlies the observed enhanced survival of 34446 in NHS.

DISCUSSION

Subinhibitory concentrations of antibiotics are known to elicit multiple responses in bacteria, including global changes in gene expression that can facilitate persistence in the host (18). Previous studies have reported synergy or antagonism between antibiotics and serum bactericidal activity, depending on the bacterium (29,30). In contrast, the bactericidal activity of polymyxin B toward Pseudomonas aeruginosa was inhibited completely by 20% human serum (29). The extent to which these phenomena impact treatment of human infections, including those caused by carbapenem-resistant K. pneumoniae, remains incompletely determined. As a first step toward gaining a better understanding of this phenomenon, we tested the ability of antibiotics at subinhibitory concentrations to alter ST258 survival in NHS. Although we tested a limited selection of antibiotics at subinhibitory concentrations, 5 of the 10 antibiotics evaluated increased survival in NHS for one or more ST258 clinical isolates. Our finding that mupirocin enhanced survival of the majority (7/10) of ST258 isolates in NHS is unrelated to clinical use of mupirocin, since it is used externally (e.g., on skin as a topical ointment or intranasally to eliminate Staphylococcus aureus from the nose). Mupirocin inhibits bacterial isoleucyl-tRNA synthetase and thereby inhibits protein synthesis (31-33), which in turn triggers the stringent response in some bacteria (34-37). The gene encoding SpoT, an enzyme that contributes to regulation of stress responses (including the stringent response) in K. pneumoniae, was upregulated during culture in NHS, but it was not changed significantly by exposure to mupirocin under our assay conditions (Fig. 3B) (38). It is possible that the stringent response was induced by mupirocin in these ST258 isolates, but our ability to detect changes in gene expression was technically limited. There was decreased C5b-9 surface deposition and concomitantly increased cell-associated CPS for only 3 of the 10 ST258 isolates tested (Fig. 1 and 2). Inasmuch as we measured cell-associated CPS only, any CPS shed from the surface would have gone undetected in our uronic acid assays, and shed CPS has the potential to alter serum bactericidal activity. We used an RNA-Seq approach to gain insight into the mechanism underlying increased ST258 survival in human serum following pretreatment with mupirocin. We selected a single isolate (34446) for the RNA-Seq studies because it was one of three isolates that had concomitantly decreased C5b-9 surface deposition and increased cell-associated CPS. Our finding that wzy, wcaJ, and ugd as well as genes encoding glycosyltransferases were upregulated following pretreatment with mupirocin (Fig. 3B) is consistent with the observed increase in CPS thickness in isolate 34446 (Fig. 2B).
In addition, pyrB and pyrC were upregulated in ST258 isolate 34446 following exposure to mupirocin (Fig. 3B). Weber et al. reported that pyrB and pyrC are important for K. pneumoniae growth in serum (26), although specific roles for the encoded proteins in survival in serum are not known. The ST258 genes reported here as induced by mupirocin and/or NHS may contribute to the observed phenotypes, but it remains incompletely determined how this antibiotic effected the changes that led to increased ST258 survival in serum. Unintended effects of subinhibitory concentrations of antibiotics, such as increasing the ability of bacteria to survive in human serum, may confound treatment of infections. This phenomenon and others related to antibiotics and altered bacterial responses merit further investigation.

MATERIALS AND METHODS

Antibiotics, bacterial strains, and culture. Linezolid and mupirocin were purchased from ChemPacific Corp. (Baltimore, MD) and AppliChem GmbH (Darmstadt, Germany), respectively. All other antibiotics were purchased from Sigma-Aldrich (St. Louis, MO). The K. pneumoniae isolates used in this study are clinical isolates classified as multilocus sequence type 258 (ST258) (Table 1). All strains were routinely cultured in Luria-Bertani (LB) broth. For treatment/pretreatment of bacteria with antibiotics, overnight cultures were diluted 1:1,000 in fresh LB and then cultured at 37°C with shaking to an optical density at 600 nm (OD600) of 0.2 to 0.3. Antibiotics were added to bacterial culture aliquots to attain subinhibitory concentrations based on the MIC of each as determined below.

Antibiotic susceptibility assay. The MICs of antibiotics were determined using the broth microdilution method as recommended by the Clinical and Laboratory Standards Institute. Briefly, all antibiotics were diluted into cation-adjusted Mueller-Hinton broth (CA-MHB) in wells of sterile 96-well microtiter plates. The concentrations tested ranged from 1 µg/mL to 512 µg/mL. A McFarland standard inoculum of 0.5 was prepared from bacterial colonies cultured on tryptic soy agar plates. The standardized cultures were diluted 1:100 in CA-MHB, and aliquots were added to wells containing antibiotic. Control wells lacking antibiotic or bacteria (medium only) were prepared accordingly. The microplates were incubated for 20 h at 37°C and inspected for turbidity. The MIC represented the lowest concentration of antibiotic that yielded no growth (see Table S1 in the supplemental material). To determine subinhibitory concentrations, bacteria were treated with dilutions of antibiotics starting from one-fourth the MIC in LB at 37°C with shaking at 220 rpm. To determine CFU per milliliter, 100-µL aliquots of culture were plated on LB agar and enumerated after overnight incubation at 37°C. We selected the lowest concentration of antibiotic that did not significantly decrease bacterial viability in these assays (Table S2).

Isolation of blood and preparation of normal human serum. Venous blood was obtained from healthy volunteers in accordance with a protocol (01IN055) approved by the Institutional Review Board for Human Subjects at the National Institutes of Health. All subjects gave written informed consent to participate in the study. NHS was prepared by using a standard method (coagulation at 37°C for 30 min followed by centrifugation to pellet cells and coagulated material) and frozen and thawed one time before use. Heat-inactivated serum was prepared by incubating normal human serum at 56°C for 30 min.
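Where a script is used to call MICs from plate readings, the logic reduces to taking the lowest concentration with no growth. The fragment below is a minimal sketch under assumed inputs (a per-isolate map of concentration to turbidity), not the study's actual analysis code; it also derives the one-fourth-MIC starting point for the subinhibitory titration described above.

```python
# Minimal, hypothetical sketch of calling an MIC from a broth
# microdilution readout; the input format is an assumption.
from typing import Dict

def mic_from_growth(growth: Dict[float, bool]) -> float:
    """growth maps concentration (ug/mL) -> True if the well was turbid."""
    no_growth = [c for c, turbid in sorted(growth.items()) if not turbid]
    if not no_growth:
        raise ValueError("growth at all tested concentrations; MIC > max tested")
    return no_growth[0]

# Example: two-fold dilution series from 1 to 512 ug/mL for one isolate,
# with turbidity (growth) below 8 ug/mL
plate = {c: c < 8 for c in [2 ** i for i in range(10)]}
mic = mic_from_growth(plate)
print("MIC:", mic, "ug/mL; start subinhibitory titration at", mic / 4, "ug/mL")
```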
Serum bactericidal activity assay. To test survival of ST258 clinical isolates in normal human serum (NHS) ± antibiotic pretreatment, bacteria were cultured in LB to an OD600 of ~0.2 and treated with antibiotics (Table S2) for 2 h. Antibiotic-free LB cultures were used as controls. After antibiotic pretreatment, bacteria were washed with Dulbecco's phosphate-buffered saline (DPBS) (Sigma-Aldrich, St. Louis, MO), and 40 µL of washed bacteria was mixed with 360 µL of NHS to yield ~10^7 bacteria in 90% NHS. Cultures were incubated at 37°C for 30 min with continuous shaking (1,200 rpm). Culture aliquots were serially diluted and plated on LB agar plates. Colonies were enumerated the following day and used to determine CFU per milliliter. The percentage of survival ± antibiotic at 30 min was determined relative to CFU per milliliter at 0 min (start of the assay) by using the following equation: % survival = (CFU/mL at 30 min / CFU/mL at 0 min) × 100.

Complement deposition assay. Bacteria were treated ± antibiotics as described above for 2 h prior to culture in 90% NHS for 30 min at 37°C. Deposition of serum complement component C5b-9 on K. pneumoniae was measured by flow cytometry as described previously (17). Bacteria were pelleted by centrifugation at 1,000 × g for 10 min at 4°C, washed with DPBS, and incubated in 1 mL blocking buffer (5% normal goat serum in DPBS) for 1 h. Bacteria were pelleted by centrifugation and resuspended in DPBS. A 100-µL aliquot of resuspended cells was dispensed into new 1.5-mL tubes and labeled with antibodies coupled to fluorescein isothiocyanate (FITC). To detect surface-bound C5b-9, bacteria were incubated with anti-C5b-9 plus C5b-8 antibody (Abcam) in DPBS followed by FITC-conjugated anti-mouse IgG antibody (Jackson ImmunoResearch Laboratories, Inc., West Grove, PA). An FITC-conjugated mouse IgG2a kappa was used as an isotype control antibody (Ab) (eBioscience, ThermoFisher Scientific, Waltham, MA). Antibody labeling was done on ice for 30 min followed by the addition of wash buffer (2% goat serum in DPBS) and centrifugation at 1,000 × g for 10 min at 4°C. Cell pellets were resuspended in 200 µL of wash buffer and analyzed quantitatively by flow cytometry by using a BD FACSCelesta cell analyzer (BD Bioscience, San Jose, CA).

Capsule isolation and quantification. Bacteria were treated ± antibiotics as described above for 2 h and then centrifuged at 3,300 × g for 10 min at room temperature. Bacterial pellets were washed with 1 mL DPBS in 1.5-mL microcentrifuge tubes, and the supernatant was discarded. The pellets were suspended in 500 µL DPBS plus 100 µL of 1% zwittergent 3-14 [3-(N,N-dimethyl-myristylammonio)propanesulfonate] (Sigma-Aldrich, St. Louis, MO) in 100 mM citric acid at pH 2.0. The tubes were incubated for 30 min at 50°C with occasional shaking (10 s of mixing at 700 rpm and 20 s at rest). The samples were centrifuged at 14,100 × g for 2 min at room temperature. CPS was precipitated from 300 µL of the supernatants in new 1.5-mL tubes by adding 1,200 µL of 100% ethanol (to yield a final concentration of 80% ethanol) and incubating tubes on ice. After 30 min, samples were centrifuged at 16,100 × g for 10 min at 4°C. The supernatants were discarded, and pellets were air dried for 30 min at room temperature. The CPS pellets were resuspended in 100 µL of distilled water (dH2O) overnight. The CPS concentration was determined by quantifying the amount of uronic acid residues in the samples.
Briefly, 120 µL of 12.5 mM sodium tetraborate (Sigma-Aldrich) in concentrated sulfuric acid was added to wells of 96-well plates containing 20 µL of CPS samples. Wells containing serial dilutions of galacturonic acid (0 to 100 µg/mL) were included to generate a linear standard curve. The plate was then incubated at 100°C for 5 min with shaking (500 rpm). After allowing the plates to sit at room temperature for 15 min, 2 µL of 0.15% 3-phenylphenol (Sigma-Aldrich) in 0.5% NaOH was added to each well and allowed to cool to room temperature for 15 min. Control wells received only 0.5% NaOH. The plate was incubated at room temperature for 5 min with shaking (500 rpm), and absorbance at 520 nm was measured on a Synergy MX plate reader (Bio-Rad Laboratories). The absorbance of the blank sample (0 µg/mL in the well) was subtracted from sample absorbance readings, and the CPS concentration (µg/mL) was calculated for each sample using a linear standard curve.

Transmission electron microscopy. Antibiotic-treated bacteria (as described above) and LB control samples were fixed for 30 min in 2% paraformaldehyde plus 2.5% glutaraldehyde in 0.1 M Sorenson's phosphate buffer (PB) supplemented with 0.05% alcian blue to enhance the capsule structure. Subsequent steps were performed using a microwave processor (see reference 39). Briefly, samples were rinsed in buffer and then fixed in 0.5% OsO4 plus 0.8% K4Fe(CN)6 in 0.1 M Sorenson's PB, rinsed in buffer, and stained with 1% aqueous tannic acid. Samples were rinsed in dH2O and en bloc stained with 1% aqueous samarium acetate. Samples had a final rinse with dH2O and were dehydrated in ethanol and embedded in Epon Araldite resin. Seventy-nanometer sections were imaged in a Hitachi HT7800 transmission electron microscope operating at 80 kV with an XR-81B detector (AMT). Images were taken at the same nominal magnification of ×25,000 with a pixel size of 0.62 nm. Emphasis was given to cross sections of cells where the membrane was crisp, indicating that it was a true cross section. Capsule thickness was measured from at least 3 sides of 20 images from each treatment using ImageJ software v.2.0.0 (https://imagej.nih.gov/ij/). The program was used to segment for the outer membrane and then for the outer boundary of the capsule. The shortest distance between those two segments was used to calculate capsule thickness. The output measurement unit was pixels. Conversion from pixels to nanometers was done using a 500-nm scale bar (with 50-nm calibrations) on the images. Two independent experiments were performed.

RNA isolation, RNA-Seq, and data processing. Total RNA was isolated from the samples before and after treatment with mupirocin, doxycycline, and/or NHS using the RNeasy minikit (Qiagen) according to the manufacturer's instructions. Residual genomic DNA was removed from RNA samples using the Baseline-ZERO DNase (Lucigen Corporation). RNA integrity and quality were evaluated on an Agilent 2100 Bioanalyzer using the RNA 6000 Nano assay kit (Agilent Technologies, Santa Clara, CA). Samples with an RNA integrity number (RIN) score of greater than 8.0 were used for further analysis. Two hundred nanograms of RNA was prepared for next-generation sequencing (NGS) using the Illumina Stranded Total RNA Prep Ligation with Ribo-Zero Plus (Illumina, Inc., San Diego, CA) workflow.
In lieu of bacterial rRNA depletion with Ribo-Zero Plus, the Qiagen FastSelect-5S/16S/23S kit (Qiagen Sciences, Germantown, MD) was used with the modification of reducing the fragmentation step to 7 min to account for slight degradation of the RNA. After this initial rRNA treatment step, the samples were prepared for cDNA synthesis and library preparation following the Illumina protocol. The libraries were amplified for 14 cycles based on a 200-ng input. Final libraries were assessed on BioAnalyzer DNA 1000 chips (Agilent Technologies, Santa Clara, CA) and quantified using a Kapa SYBR FAST Universal qPCR kit for Illumina sequencing (Roche, Basel, Switzerland) on the CFX384 real-time PCR detection system (Bio-Rad Laboratories, Inc., Hercules, CA). Libraries were normalized to 4 nM, pooled, denatured, and further diluted to a 1.5 pM stock for clustering and paired-end 2 × 74 cycle sequencing on a NextSeq with a Mid Output flow cell (Illumina, Inc., San Diego, CA).

Statistical analyses. Statistical analyses were performed using Prism 9.1 software (GraphPad Software, La Jolla, CA). With the exception of gene expression data, all other data were analyzed by using a repeated-measures one-way analysis of variance (ANOVA) and Dunnett's posttest.

Data availability. All next-generation sequence data are available on GEO (GSE201383).

SUPPLEMENTAL MATERIAL

Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.1 MB.

ACKNOWLEDGMENTS

This work was supported by the Intramural Research Program of the National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH), and NIH grant R01AI090155 (to B.N.K.). We declare no conflict of interest.
2022-10-21T06:18:06.038Z
2022-10-20T00:00:00.000
{ "year": 2022, "sha1": "ccc65ed380851c8efedd2a409f715cf82daebe5b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ASMUSA", "pdf_hash": "56c4e0e4842f7bfae0d9cd5762c9a84b63eee0be", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
270527384
pes2o/s2orc
v3-fos-license
The Trans-facial Approach for Simultaneous Resection and Reconstruction of Retromolar Trigone Tumors: A Pilot Study

Introduction Early retromolar trigone (RMT) lesions are difficult to access, and free tissue transfer is often an overkill for such small lesions. The aim was to devise a novel surgical approach that would aid the resection without raising a cheek flap and simultaneously provide a local reconstructive option for small lesions in the RMT. Methodology This study was designed to demonstrate the outcomes of the "trans-facial" approach used to simultaneously access and reconstruct small RMT tumors through an islanded nasolabial flap. Patients with histologically proven squamous cell carcinoma of the RMT requiring surgery were included from January 2021 to September 2022. Case selection was done based on the location of the disease and its size (cT1/T2). All needed bone and soft tissue resection via a per-oral, trans-facial approach, along with an ipsilateral neck dissection. The technique is described along with the patients' post-operative and pathologic outcomes. Results Out of the eight patients included in this study, six underwent a bi-alveolar marginal resection and were reconstructed using the trans-facial approach. No major complications were noted in the post-operative period. 50% were pT1 tumors and 75% were pN0 status. One patient had a close margin, while the others had adequate resection margins. All patients were followed up for a median of 18 months with a locoregionally controlled status. Conclusion The trans-facial approach can be a suitable option with a reasonable oncologic outcome to address small RMT lesions.

Introduction

More than a third of the global oral cancer burden is contributed by South Asia, largely due to the high incidence of tobacco use in the region [1]. Smokeless tobacco use is the highest in the world and leads to the involvement of specific subsites such as the gingivobuccal complex and the retromolar trigone [2]. Involvement of the latter has always been dealt with caution due to the various routes of spread. Moreover, due to its proximity to the muscles of mastication, any treatment, be it radiotherapy or surgery, can often result in severe trismus and debilitation.

Early lesions of the retromolar trigone (RMT) often present without involvement of the underlying mandible. However, they tend to involve the tonsillo-lingual sulcus as well as part of the adjoining soft palate. Adequate resection of such lesions often comprises marginal mandibulectomy and partial upper alveolectomy along with wide excision of the adjoining mucosa and soft tissues. To gain adequate access to these lesions, a lateral or midline lip-split approach is often utilized [3]. This reduces the options for reconstruction using local flaps such as the nasolabial flap. Regional flaps such as the pectoralis major myocutaneous flap (PMMC) are often bulky, can cause significant functional and cosmetic morbidity, and may not be suitable for all patients [4]. These defects are ideally reconstructed with a pliable skin-lined flap like the free radial artery forearm flap. However, in low-resource settings or where a free tissue transfer is not possible, the use of local or pedicled flaps might be useful.

Hence, we attempted a novel technique to approach these early RMT tumors, keeping in mind the associated reconstructive challenges. Here, we demonstrate a "trans-facial" approach for RMT tumors that includes reconstruction using an islanded nasolabial flap (NLF).
Patients and Methods

This study was aimed at demonstrating the outcomes of a novel "trans-facial" approach used to simultaneously access and reconstruct small RMT tumors through an islanded nasolabial flap (NLF). Patients with newly diagnosed squamous cell carcinoma of the RMT region requiring surgery were included from January 2021 to September 2022. Case selection for this specific approach was done based on the location of the lesion and clinically T1 or T2 status. A total of eight patients were included in this study. The study was approved by the Institutional Ethics Committee (IEC). Radiological assessment was performed to determine skin involvement along with any underlying bone erosion.

Treatment decisions were made as per the disease management group's joint clinic, consisting of surgical, medical, and radiation oncologists as well as a radiologist and a pathologist. Patients were excluded if other head and neck sites were involved, if they had previously received any treatment, or if the disease process warranted a segmental mandibulectomy or skin excision.

Stringent case selection was performed, including RMT or posterior buccal mucosa lesions planned for surgical excision of the primary, appropriate for a local or regional flap reconstruction, along with an ipsilateral neck dissection. A mouth opening of at least 20 mm at presentation was required. A minimum of 30 mm of tumor-free mucosal margin from the commissure was also needed, as shown in Fig. 1, so that the base of the nasolabial flap would not be near the tumor excision area. This was confirmed by examination under anesthesia as well as radiological mapping and estimation of the approximate defect size. All patients underwent surgical resection with a clinically discernable margin of at least 1 cm. This could entail a marginal mandibulectomy with or without partial upper alveolectomy using a per-oral and trans-facial approach.

Surgical Procedure

The surgical plan included the completion of the ipsilateral selective neck dissection before commencing with the primary resection. The extent of neck dissection was based on the status of neck metastasis and the level of involvement. For cN0 patients, clearance of ipsilateral levels I to III/IV was performed, while level V was included in cN+ cases with lower neck involvement or clinical extracapsular spread (ECS).

During the neck dissection, care was taken to preserve the facial artery and vein in continuity across the lower border of the mandible. In case the facial vessels were not preserved, the nasolabial flap was not islanded and was instead based on a random-pattern blood supply with a broad base. After the neck dissection was complete, the planned nasolabial flap was marked (Fig. 2). We started by confirming the surface marking of the facial artery at the lower border of the flap using Mason's point or a Doppler probe [5]. Once traced, a broad flap is marked based on the defect size. Figure 3 represents a schematic diagram of the nasolabial flap and its reach to the retromolar trigone region.
The mucosa is incised intraorally, confirming the adequacy of the base of the nasolabial flap. The medial edge of the flap is incised, and the facial vessels and their labial branches are identified and preserved by careful dissection. The superior/distal end of the vessel is then ligated, and the incision is extended as per the marked flap design. The flap is elevated with the skin and subcutaneous tissue along with the underlying muscle, carefully preserving the artery and veins in the flap. Once the flap is partially raised, we enter the oral cavity by connecting through the anterior mucosal margin.

The anterior mucosal resection margin in the ipsilateral buccal mucosa should correspond to the posterior margin of the modified nasolabial flap. Next, the anterior mucosal cut is deepened to open into the nasolabial defect and sufficiently extended to improve access. Further excision of the tumor can be completed through a combined approach using this defect and through the oral cavity (Fig. 4). Due to the wide access obtained trans-facially, a marginal mandibulectomy with coronoidectomy as well as an upper alveolectomy can be performed. The specimen is then rotated outward, and the posterior tonsillar and soft palate mucosal margins can be accessed intraorally, as this step might be difficult to perform through the defect due to the specimen obstructing vision.

Fig. 1 Case selection for trans-facial approach: retromolar trigone lesion

Once the specimen is delivered, the margins are assessed. The nasolabial flap is sutured into the defect (Fig. 5). Once the defect is covered adequately, the donor site is carefully closed along the nasolabial crease, avoiding and correcting any Burow's triangle formation and any deviation of the oral commissure (Fig. 5).

The histopathology reports were reviewed to record the distance from the tumor to the mucosal, soft tissue, and bone margins, separately, on grossing as well as microscopic examination. Adjuvant radiotherapy (RT) was given when the depth of invasion was more than 10 mm, or more than 5 mm with other adverse factors like perineural invasion (PNI) or poor grade of differentiation, or in the presence of positive neck nodes. Concomitant chemotherapy (CCRT) was added in cases of positive margins or presence of extracapsular spread (ECS). All other clinical and pathologic parameters were obtained from the electronic medical records of the hospital.

Results

All eight patients included in this study underwent resection of the primary tumor via the trans-facial approach with a modified nasolabial flap. The demographic details of the patients are compiled in Table 1. All participants were tobacco chewers, male, and between the ages of 35 and 56 years. Clinically and radiologically, they were classified as T1 or T2 tumors with cN0 status. Six patients underwent marginal mandibulectomy and partial upper alveolectomy as part of the resection, while two underwent only a marginal mandibulectomy. No major intraoperative events occurred. The mean operative time was 72 ± 18 min for the primary tumor resection through the trans-facial approach. An additional 41 ± 12 min was required if the flap was islanded completely on the facial vessels. All patients had an uneventful post-operative period with a mean hospital stay of 5 (± 2) days.
There were no flap-related complications or oro-cutaneous fistulas in any of the cases. One patient developed a parotid fistula, which was addressed with an anticholinergic agent during the hospital stay. In two patients, a controlled fistula was left at the base of the nasolabial defect, which was corrected after 4 and 6 weeks, respectively. Half of the patients had pT1 tumors and 75% were pN0 status. One patient had an inadequate margin of 2 mm (superior bone margin) on final histopathology, although it was free of tumor. The same patient also had a poorly differentiated tumor, lymphovascular emboli, and two positive neck nodes, for which adjuvant therapy was advised. Another patient had a final diagnosis of hyperplastic squamous mucosa, though the preoperative biopsy was moderately differentiated squamous cell carcinoma. All patients were followed up longitudinally for a median of 18 months with no wound-related complications reported. On follow-up, 88% were alive and locoregionally controlled (n = 7), and 12% of the cohort (n = 1) developed recurrence and is alive with disease.

Discussion

The aim of performing any oncologic procedure is to offer the patient a reasonable functional, esthetic, and oncologic outcome. With the surgical evolution of the last few decades, balancing these objectives has become a reality by minimizing the access needed to perform the procedure without compromising oncologic outcomes. In this regard, these advances have been led primarily by technology in the form of minimally invasive cameras, lenses, and even robotic systems. The newer surgical techniques reported are also reliant on the development of such enabling technology. Hence, very few fundamental techniques in surgery have seen any change. We present our novel technique to access certain oral cavity lesions that maximizes the principles mentioned before.

The most common technique used to access posteriorly based lesions that are not amenable to transoral resection is the elevation of a cheek flap [6]. This leads to an eventual communication between the oral cavity and the neck. This communication can lead to major complications such as delayed wound healing, infections, and oro-cutaneous fistula, eventually leading to the possibility of secondary hemorrhage. These can delay the start of adjuvant therapy, translating into reduced oncologic outcomes.

The trans-facial technique is primarily indicated for posteriorly based lateral oral cavity lesions, such as retromolar trigone, buccal mucosa, and palatal lesions, where a conventional transoral resection might be difficult. This may be due to restricted mouth opening caused by the disease or severe submucous fibrosis, in which case the retraction of soft tissue is very difficult. Additionally, the planned reconstruction of these lesions is also a challenge, as most free tissue transfers might be larger than the defect. Local and regional flaps have been mentioned in the literature with good outcomes.

The basic philosophy behind the trans-facial approach is to avoid contaminating the sterile neck field with the oral cavity and thereby reduce these complications. Since access to the primary site can be achieved through the incision for the nasolabial flap itself, the elevation of a cheek flap is avoided. Splitting the lip is also avoided, improving the function of the oral aperture and overall esthetics. Neurologic functions of the marginal mandibular nerve and the mental nerve are also preserved, as these are not encountered for access [7].
As with all surgical approaches, meticulous case selection is required to avoid inadequate margins or poor esthetic and functional outcomes. The possible limitations of this technique include limited applicability in field-cancerized mucosa, a large dentate segment of bone to be resected, which would be difficult to deliver through the trans-facial defect, and situations where the nasolabial flap would not be possible or indicated. While this novel technique has worked in our setup, its validation by other centers and surgeons is needed.

Fig. 3 Schematic diagram of the NLF based on the facial vessels and its reach to the RMT lesion

Fig. 4 Resection of the tumor through the trans-facial approach
2024-06-17T15:12:26.798Z
2024-06-15T00:00:00.000
{ "year": 2024, "sha1": "7374b0fafc42913d5f1e2b801c0df081c3f5f31d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12663-024-02226-0.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fbed9aef855c10f3664a7c83fcd483b3c1a44028", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
36547736
pes2o/s2orc
v3-fos-license
Evaluation of JULES-crop performance against site observations of irrigated maize from Mead, Nebraska

The JULES-crop model (Osborne et al., 2015) is a parametrisation of crops within the Joint UK Land Environment Simulator (JULES), which aims to simulate both the impact of weather and climate on crop productivity and the impact of croplands on weather and climate. In this evaluation paper, observations of maize at three FLUXNET sites in Nebraska (US-Ne1, US-Ne2 and US-Ne3) are used to test model assumptions and make appropriate input parameter choices. JULES runs are performed for the irrigated sites (US-Ne1 and US-Ne2) both with the crop model switched off (prescribing leaf area index (LAI) and canopy height) and with the crop model switched on. These are compared against GPP and carbon pool FLUXNET observations. We use the results to point to future priorities for model development and describe how our methodology can be adapted to set up model runs for other sites and crop varieties.

Introduction

The Joint UK Land Environment Simulator (JULES) (Clark et al., 2011) is a process-based model that simulates the fluxes of carbon, water, energy and momentum between the land surface and the atmosphere. It is used in carbon cycle, climate change and impacts studies, and can be run on its own ("stand-alone" mode) or as a component of a coupled Earth system model. As described in the model description paper (Osborne et al., 2015), JULES-crop is a parametrisation of crops that has been added to JULES in order to improve land-atmosphere interactions in areas where crops are predominant, in addition to enabling the simulation of the effect of weather and climate on food and water resources.

JULES treats each vegetation type as existing on a separate tile within a grid box. Energy and carbon flux calculations are performed separately for each tile, and prognostics, such as leaf area index (LAI) and canopy height, are calculated and stored for each tile separately. Each vegetation tile has a different set of input parameters, and leaf-level carbon assimilation is calculated differently depending on whether the tile is modelling a plant with a C3 or a C4 photosynthetic pathway. JULES-crop introduces a distinction between natural plant functional types (PFTs) and crops. Crop tiles have their growth and development parametrised by a crop development index (DVI) and have different calculations for the allocation to plant carbon pools, leaf area index and height compared to natural PFTs. However, in most other respects, such as the calculation of gross primary productivity (GPP) and respiration, natural PFTs and crops are modelled in the same way within the JULES code. In its current stage of implementation, JULES-crop is available only in offline JULES runs, although there are plans to extend it for use in coupled runs in the future.

Other land-surface models have also been extended to include specific representations of key crops. For example, Community Land Model (CLM)-crop has been evaluated at the site level for several crop types (maize, soybean and spring wheat (Drewniak et al., 2013); winter wheat (Lu et al., 2016)), and physiology parameters were calibrated to optimise productivity (Bilionis et al., 2015). ORCHIDEE-CROP has been evaluated for maize and winter wheat at a number of European sites (Wu et al., 2016) and was shown to reproduce the seasonality of leaf area index and carbon and energy fluxes.
Similarly, the incorporation of a phenology scheme into the SImple Biosphere (SIB) model improved the prediction of both leaf area index and carbon fluxes for maize, soybean and wheat crops at a number of sites in North America (Lokupitiya et al., 2009). Song et al. (2013) implemented crop-specific phenology and carbon allocation schemes into the Integrated Science Assessment Model (ISAM) landsurface model and calibrated against observational data from a corn-soybean rotation at Mead and Bondville (US) sites. This model was able to reproduce the diurnal and seasonal variability of carbon, water and energy fluxes. In Osborne et al. (2015), global runs using JULES-crop were carried out for four generic crop types -maize, soybean, wheat and rice -and the effect of including the new crop parametrisation was shown on sensible heat flux, moisture flux and net primary productivity (NPP) for some key countries. The model yield was also compared against global and country FAO crop yields. Site runs were performed at four FLUXNET sites with a maize-soybean rotation: Mead (US-Ne2 and US-Ne3), Bondville (US-Bo1) and Fermi (US-IB1). For input parameters that applied to both natural vegetation and crop tiles, C3 crops were given the parameter values of a standard C3 grass tile within JULES and C4 crops were given the values of a standard C4 grass tile. Osborne et al. (2015) speculated that an improved fit to observations could be obtained if these parameters were tuned to be more crop specific. The other published study using JULES-crop to date, Williams and Falloon (2015), used the global set-up and the generic parametrisation of the four main crops from Osborne et al. (2015) to investigate the sensitivity of the yield from JULES-crop to the driving data variables, assessing both the relative importance of different variables and whether there is an advantage to using subdaily driving data rather than using daily driving data and performing an internal disaggregation to subdaily timescales. It also investigated the effect on the yield of initialising the model from climatology. No attempt was made to find more appropriate crop parameter values. In this model evaluation paper, we use the observations available at the Mead FLUXNET sites US-Ne1, US-Ne2 and US-Ne3 to investigate how well each individual component of JULES performs for maize and how much of an improvement can be achieved by using more appropriate parameter values, taking into account advances in the JULES code since the Osborne et al. (2015) study. This investigation splits into three distinct parts. We initially look at which processes and parameters can be tuned directly to maize observations from the Mead sites, without running the model. Second, for parts of the code shared between natural PFTs and crops in the model (the calculation of gross primary productivity and respiration), we test the performance of the tuned parameters by running JULES with the crop model switched off and forcing with observed leaf area index (LAI) and canopy height, to remove the feedback between net primary productivity and LAI. Finally, we will use the tuned parameters in JULES runs for irrigated maize at Mead with the JULES-crop parametrisation switched on. This paper is organised as follows. Section 2 gives information about the observations and the model set-up used for the JULES runs presented in this paper, both those with and without the JULES-crop parametrisation switched on. 
Particular attention is paid to the choice of input parameter values, which are tuned to the available observations. Section 3 compares the results from the model runs against the observations. Section 4 contains an overall assessment of the suitability of the model for modelling maize at these sites and discusses ways that the model could be improved. It also comments on the more general applicability of the parameters and methods used in this paper for tuning JULES for other sites and crop varieties. A summary of the JULES-crop parametrisation and the other relevant parts of the JULES code is given in Sect. A.

2 Experimental set-up

Observations

There are three FLUXNET sites at the University of Nebraska Agricultural Research and Development Center near Mead, Nebraska, which are located within 1.6 km of each other: US-Ne1, US-Ne2 and US-Ne3. Both US-Ne1 and US-Ne2 are irrigated with a central pivot system, whereas US-Ne3 is entirely rainfed (Suyker et al., 2004, 2005). US-Ne1 grows maize, whereas US-Ne2 and US-Ne3 are maize-soybean rotations. The observations span from 2001 to 2015 (although not all variables were available for this entire period). The observations of the biomass of green leaves, yellow leaves, stem and reproductive parts of maize (kernel, cob, husk, ear shank, silk) were made after the plant material was dried at a constant temperature of 105 °C. In the observations, green leaves encompass all green leaf material from the collar to the leaf tip, yellow leaves are defined as greater than 50 % necrotic (or entirely yellow) leaf, and the stem includes stem, leaf sheaths, immature or undeveloped ears and unfurled leaves.

Hourly incident and absorbed Photosynthetically Active Radiation (PAR) (400 to 700 nm) observations are available from the Mead FLUXNET sites. Absorbed PAR was calculated using two point quantum sensors above the canopy, pointing up and down, and two line quantum sensors below the canopy, pointing up and down. The line quantum sensors below the canopy integrate over an area 1 cm by 1 m, in order to even out effects such as sunflecks.

The observations were used in three ways: to determine the input parameters to the JULES runs (air temperature, carbon pools, leaf nitrogen, absorbed PAR, canopy height, LAI), to drive the JULES runs themselves (meteorological variables, LAI, canopy height) and to compare the JULES run results against (GPP, carbon pools, LAI, canopy height). Observations from all three sites were considered in the input parameter tuning, whereas only observations from the irrigated sites were used to drive and validate the model runs.

Model set-up

The relevant features of the JULES land-surface model, including the JULES-crop parametrisation, are described in Sect. A. Two types of JULES runs were used in this study:

1. Maize is treated as a natural PFT tile (i.e. the crop model is switched off), with LAI and crop height prescribed from observations (linearly interpolated to create a daily time series).

2. Maize is considered as a crop tile (i.e. the crop model is switched on).

The runs were driven by hourly observations of downward shortwave radiation, downward longwave radiation, precipitation, air temperature, wind speed, pressure, specific humidity and diffuse radiation fraction. Each year and site was modelled as a separate run, each starting on 1 March. Annual global atmospheric CO2 concentrations were taken from Dlugokencky and Tans (2016).
The following sections describe in more detail how the choice of input parameters was made. Observations from both the irrigated sites at Mead and the rainfed site at Mead were considered when tuning the model input parameters that were designed to take the same value whether irrigation is switched on or off in the model. However, in these cases, observations from the rainfed site are clearly denoted on the plots, in order to check for cases where these model approximations break down. It was assumed that there was no limitation from nitrogen availability. A summary of the model input parameters used in both types of runs is given in Tables 1, 2a, 2b, 3a, 3b and 4. Crop development parameters. The cardinal temperatures T b, T o and T m in this analysis have been kept the same as in Osborne et al. (2015), which were chosen based on the literature review in Sánchez et al. (2014). As in Osborne et al. (2015), no dependence of thermal time on the photoperiod was assumed. The thermal times were calculated using the available Mead data for the sowing date, the date at which 50 % of the plants had emerged, the date at which 50 % of the plants were at the R1 or "estimated R1" growth stage (i.e. had begun the reproductive phase), the date at which 50 % of the plants had reached the R6 growth stage (maturity) and the harvest date, together with the observed hourly air temperature and Eq. (A1). These thermal times are given in Table 5. In the runs presented in Sect. 3, the thermal times for sowing to emergence, emergence to flowering and flowering to harvest for each year at a site are used in JULES-crop directly, to simulate the crop development as closely as possible for a finished crop season, where the harvest date is known. The sowing date is prescribed (i.e. l_prescsow=T). An option for the sowing date to be calculated dynamically using the rate of change of day length and the soil temperature and moisture does exist (l_prescsow=F), but this is not considered here as it is still under development and not recommended for use (Osborne et al., 2015). Since harvest dates are available, T mort was set low enough that it did not trigger harvest. Carbon partitioning. The carbon partitioning parameters α i, β i were tuned to observations of the biomass of the green leaves, yellow leaves, stem and reproductive parts of maize. The ratio of carbon to biomass in each part of the plant was assumed to be the same and constant in time. The C leaf pool in the model contains green leaves only (since C leaf is directly linked to LAI and photosynthesis) and the C harv pool consists of both the reproductive parts of the plants and the yellow leaves. Stem carbon in the model is split between the C stem and C resv pools. The biomass observations were linearly interpolated to get a daily time series and then differentiated with respect to time. (Figure 1 caption: Top: ratio of the rate of change of C leaf to the rate of change of above-ground carbon C ag; middle: ratio of the rate of change of C stem + C resv to the rate of change of above-ground carbon; bottom: ratio of the rate of change of C harv to the rate of change of above-ground carbon. Solid black line uses the original crop parameters from Osborne et al. (2015), dashed black line uses the tuned parameters. Blue, green and red lines are derived from US-Ne1, US-Ne2 and US-Ne3 observations respectively.) Ratios of these rates were then plotted as a function of DVI (Fig. 1). Using these plots alongside the function for root carbon from de Vries et al.
(1989) (since there were no direct measurements of root biomass available from the Mead sites), new, tuned values for α i, β i were found. These tuned parameters (dashed lines) show an improvement in the proportion of the increase in above-ground carbon that goes to the green leaves (Fig. 1, top) and the proportion of the increase in above-ground carbon that goes to the stem (Fig. 1, middle) for DVI < 0.8, as compared to the parameters used in Osborne et al. (2015) (solid line). However, note that, even after the tuning, the proportion of the increase in above-ground carbon that goes to the green leaves does not drop off sharply enough for DVI > 0.8 compared to the observations. The tuned partition fractions are shown more clearly in Fig. 2 (colours), together with the functions given in de Vries et al. (1989) (the α i, β i in Osborne et al., 2015 were fitted to these functions with minor adjustments as a result of global runs). It was not possible to fit p root accurately to the expression from de Vries et al. (1989) for DVI between approximately 1.0 and 1.4, given the constraints above. In addition, in reality, water stress can also increase the fraction of NPP going to the roots (see discussion in, e.g., de Vries et al., 1989, and Song et al., 2013), but this effect is not taken into account in JULES-crop. However, we do not see a notable difference between the irrigated sites US-Ne1 and US-Ne2 (blue and green lines respectively) and the rainfed site US-Ne3 (red lines) in Fig. 1. Remobilisation of stem carbon. The stem biomass observations were used to tune the value of the stem reserve remobilisation constant τ. The relation governing the stem reserve remobilisation can be rearranged to 1 − M stem / M max stem = τ (1 − 0.9^d max), where M stem is the stem biomass (including reserves), M max stem is the maximum value of M stem at that site in that year and d max is the number of days since M max stem occurred. Therefore, plotting 1 − M stem / M max stem against 1 − 0.9^d max should give a straight line with gradient τ. Using the assumption that the day with maximum stem biomass was approximately the same day as the day with the maximum stem biomass measurement, a straight line was fitted to the observations and an approximate value of τ = 0.12 was obtained. However, as can be seen in Fig. 3 (which displays both the new, tuned value τ = 0.12 (black dashed line) and the value τ = 0.35 used in Osborne et al. (2015) (black solid line), which was obtained from de Vries et al., 1989), this parametrisation does not capture the large spread in the observations (blue, green and red lines). The uncertainty this introduces into the model is not critical, since there are no strong feedbacks involved (unlike, for example, uncertainty in the specific leaf area (SLA) just after emergence), but it will affect the outputted yield. Senescence. The observations of green leaf biomass and above-ground biomass were used to tune the senescence parameters µ, ν and DVI sen. The above-ground biomass measurements were combined with the partition fractions from Sect. 2.3.2, the carbon to biomass ratios from Sect. 2.3.7 and the senescence parametrisation from Eq. (A4) to get a time series for green leaf biomass (Fig. 4, centre and right plots, black lines), normalised to the maximum value in each year. This could then be compared to the normalised observed time series for green leaf biomass (Fig. 4, left, coloured lines). It is clear that, if the parametrisation from Osborne et al. (2015) is used (Fig. 4, centre plot, solid black lines), senescence starts late and then progresses too abruptly compared to the observations.
However, with the new parametrisation (with the new free parameters µ, ν and DVI sen), it is possible to get a much better fit to the observations (Fig. 4, right plot, dashed black lines). Note that this tuning partially compensates for the bias in the proportion of carbon going to the leaves between DVI 0.8 and 1.0 in Fig. 1 (top). If this bias were not present, senescence could start more gradually, which would enable a better fit to leaf carbon at around DVI = 1.75. Also, the tuned lines underestimate the leaf biomass at around DVI = 1.75, which will help to compensate for the model being unable to capture the drop in photosynthetic capacity in the green maize leaves towards the end of the season. Crop height. Stem biomass measurements up until the maximum in each year and the corresponding crop height measurements from the Mead FLUXNET sites were used to fit the allometric constants κ and λ, through rearranging Eq. (A5) to h = κ′ M stem^λ, where κ′ = κ (1 − τ)^λ. For consistency, it is important that the τ used in this expression is the same value as the τ used in Eq. (A2). Figure 5 shows the observations (points), along with the fit using the parameters from Osborne et al. (2015) (solid black line, λ = 0.4, κ = 3.06) and a tuned fit (dashed black line, λ = 0.38, κ = 3.43). Specific leaf area. The allometric constants γ and δ relating the specific leaf area to DVI (Eq. A7) are tuned using Fig. 6, which plots the SLA observations against DVI (points), the tuned fit (dashed line) and the fit using the parameters from Osborne et al. (2015) (solid line). The crop in the model is very sensitive to SLA at low values of DVI because of the feedback between leaf area index and leaf carbon. The model lines in Fig. 6 have the steepest gradient at low values of DVI, where there is also a greater spread of observations. Carbon to biomass ratio in stem and leaves. The observations of the carbon fraction of the green leaf biomass (canopy mean) against day after sowing are shown in Fig. 7. Initial amount of carbon in crops. Assuming that, near emergence, approximately half of the plant carbon is above ground (Fig. 2), values of C init = 8.0 × 10−4 and DVI init = 0.1 for the parameters governing initialisation can be derived from the above-ground biomass measurements plotted in Fig. 8 and the carbon to biomass ratios. Since there are no measurements below DVI = 0.1, and the model is very sensitive to these parameters, we do not attempt to set a DVI init below 0.1 and extrapolate. Note also that the initial value of carbon is very sensitive to the thermal time for emergence. (Table 3 caption: Values of the crop-specific JULES parameters used to represent maize. Units are given in brackets; (-) denotes dimensionless. These parameters are all specified in the JULES_CROPPARM namelist.) Figure 8 also shows that the value C init = 1.0 × 10−2, which was used in Osborne et al. (2015) to initialise the crop at DVI init = 0.0, is too high to be consistent with the above-ground biomass observations.
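The allometric fits described in the crop height subsection above reduce to straight-line fits after taking logarithms of h = κ′ M stem^λ. A minimal sketch follows; the data arrays are hypothetical placeholders, not the Mead measurements.

    import numpy as np

    # Hypothetical paired observations: stem biomass (kg m-2) and crop height (m)
    m_stem = np.array([0.05, 0.12, 0.25, 0.45, 0.70, 0.95])
    height = np.array([0.95, 1.35, 1.80, 2.20, 2.55, 2.75])

    # log h = log kappa' + lambda * log M_stem  ->  linear least squares
    lam, log_kappa_prime = np.polyfit(np.log(m_stem), np.log(height), 1)
    kappa_prime = np.exp(log_kappa_prime)

    # Recover kappa from kappa' = kappa * (1 - tau)**lam, using tau = 0.12
    tau = 0.12
    kappa = kappa_prime / (1.0 - tau) ** lam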
Yield fraction. As discussed above, the C harv pool in JULES contains both the reproductive parts of the maize crop (kernel, cob, husk, ear shank and silk) and the yellow leaf carbon, and the proportion f yield of this carbon pool that contributes to the yield carbon is set by the user. The value of f yield can be derived using the latest observations in each season of the biomass of the reproductive part of the crop, the proportion of this reproductive biomass which is composed of kernels, and the yellow leaf biomass. The yield fraction is then calculated as the kernel fraction of the sum of the reproductive part of the crop and the yellow leaves, leading to an approximate value of f yield = 0.74 (Fig. 9). This assumes that there is no significant change in f yield between the last measurement of the season and the harvest and also that the carbon fraction of the biomass in both the reproductive parts and the yellow leaves is the same. Typically, an accurate value of f yield is not important in impact studies, since this constant can be incorporated into a yield gap parameter. Parameters required by natural PFT tiles only. To obtain the allometric parameters required to relate the plant carbon pools to plant height and LAI when the crop model is switched off, LAI bal was assumed to be approximately equal to LAI up to the maximum LAI at the site for each year. As discussed in Sect. A2, a ws is assumed to be equivalent to 1 − τ, i.e. a ws = 0.88. The stem biomass observations can be used to obtain values for a wl, b wl and η sl, for a set ratio of carbon to biomass in the stem (see Sect. 2.3.7). First, a value for η sl of 0.017 kg C m−1 (m2 leaf)−1 was obtained by plotting the stem biomass observations against LAI multiplied by crop height for points up until the maximum LAI for each site in a particular year (Fig. 10, left). Second, a wl and b wl were simultaneously fitted to (a) the stem biomass observations against LAI for points up until the maximum LAI for each site in a particular year (Fig. 10, right), (b) crop height against the stem biomass observations, up until the maximum stem biomass measurement for each site in a particular year (Fig. 11) and (c) LAI bal against LAI up until the maximum LAI for each site in a particular year (Fig. 12). This gave a wl = 9.5 × 10−3 kg C m−2 and b wl = 1.767. As we saw in Fig. 6, Eq. (A12) is not a good approximation for maize, particularly when DVI is less than 0.5. For the purpose of these runs, an approximate value at DVI = 1 was used. Parameters required by both crop tiles and natural PFT tiles. 2.5.1 Canopy radiation scheme. The JULES default C4 grass settings for the PAR leaf scattering coefficient ω PAR = 0.17 and the PAR leaf reflection coefficient α refl,PAR = 0.1 were used (these are very similar to the values quoted in Sellers (1985) that were used in Osborne et al. (2015) to model maize). The soil albedo was set to 0.133, which is the value from the nearest grid box in the ancillary used in the HadGEM2-ES model (Collins et al., 2011; Jones et al., 2011), which was used in the Osborne et al. (2015) global runs. The canopy clumping factor was tuned by comparing the fraction of incident PAR absorbed by the canopy (fraction of absorbed PAR, FAPAR), derived from the absorbed and incident PAR observations and interpolated LAI observations, to the model FAPAR, computed using the observed diffuse radiation fraction and interpolated LAI observations up until flowering.
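The full model FAPAR is computed with the two-stream scheme (via the pySellersTwoStream package described next), but because absorbed PAR in the model decays approximately exponentially through the canopy, the role of the clumping factor can be illustrated with a simple Beer-law sketch. The decay constant k below is illustrative only, not a JULES parameter value.

    import numpy as np

    def fapar_beer(lai, a=0.65, k=0.7):
        """Simplified Beer-law FAPAR: the clumping factor a scales LAI,
        mimicking its effect in the JULES canopy radiation scheme."""
        return 1.0 - np.exp(-k * a * lai)

    lai = np.linspace(0.0, 6.0, 25)
    # Compare a uniform canopy (a = 1) with the clumped value used here
    print(fapar_beer(lai, a=1.0)[12], fapar_beer(lai, a=0.65)[12])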
The Python package pySellersTwoStream (see the code availability section) was used to calculate the model FAPAR, since it is able to reproduce the results of the JULES radiation scheme exactly but can be called directly from our (Python) analysis scripts, without the need for extra JULES runs for each combination of parameters tested. (Figure 9 caption: Yield fraction against the sum of the biomass in the reproductive parts of the maize crop (kernel, cob, husk, ear shank and silk) and the yellow leaf biomass, using the last measurement of the season. Dots, vertical crosses (+) and diagonal crosses (x) are US-Ne1, US-Ne2 and US-Ne3 observations respectively. The solid black line shows the value used implicitly in Osborne et al. (2015) and the dashed black line shows the new, tuned value.) Absorbed PAR through the canopy in the model closely follows an exponential decay function. Calculating FAPAR involves integrating this exponential decay over the canopy; Fig. 13 (centre row) shows the resulting FAPAR distribution against total LAI for a uniform canopy (canopy clumping factor a = 1). For mostly direct radiation (diffuse radiation fraction 0.2-0.3), the rate of decay with layer LAI in the model shows a clear dependence on the zenith angle (Fig. 13, centre right), whereas for mostly diffuse radiation (diffuse radiation fraction 0.8-0.9), this zenith angle dependence is greatly reduced (Fig. 13, centre left). While the observations (Fig. 13, top row) also show a strong zenith angle dependence as the fraction of diffuse radiation decreases, the observations are, in general, consistent with a much lower effective decay constant (in particular, the model FAPAR values are higher than the observations at intermediate LAI values of ∼ 2). The observed FAPAR values also have a much larger scatter than seen in the model FAPAR. Decreasing the canopy clumping factor is equivalent to decreasing the effective decay constant in the model. Figure 14 shows the value of the clumping factor that would be needed to reproduce each FAPAR observation, given the observed LAI and diffuse radiation fraction. While there is a large spread in the clumping values derived in this way, these results appear to indicate that a clumping factor between 0.5 and 0.8 would be consistent with the majority of the observations. In this study, we therefore set a = 0.65. Figure 13 (bottom row) shows that using this clumping factor value to calculate the model FAPAR gives a better fit to the observations, particularly for the intermediate LAI values. Erectile, vertical and horizontal leaf angle distributions (for a uniform canopy) were also investigated, but the spherical distribution gave the best fit to the FAPAR observations. The FAPAR observations cannot be used to tune the model once the green leaf area index has started to drop significantly, as the observations include PAR absorbed by any part of the plant, whereas the JULES canopy scheme models the PAR absorbed by photosynthesising leaves only. Whether the model canopy scheme needs to be extended to include the shading of green leaves by yellow leaves and other non-root biomass depends on the distribution of the remaining green leaves through the canopy (essentially, the model roughly assumes that all the green LAI is at the top of the plant and so does not get shaded by other plant material).
Assumptions in the literature differ: Sellers (1985) modelled maize assuming that green and dead leaves are evenly distributed throughout the canopy, whereas de Vries et al. (1989) showed that "maximum leaf photosynthesis in a senescing crop declines with time. The oldest leaves in the base of the canopy are affected first". Photosynthesis light response curve. In the literature, the photosynthetic capacity of maize leaves (per unit leaf area) declines with age, and the older leaves are lower in the canopy (Dwyer and Stewart, 1986; Stirling et al., 1994). As discussed in Sect. A4, the change in photosynthetic capacity through the canopy can be modelled in JULES by a non-zero k nl, which we assume is due to the change in nitrogen per unit leaf area through the canopy. The nitrogen per unit leaf area as a function of layer LAI at anthesis (60 days after sowing) in Massignam et al. (2001) for the highest nitrogen availability level (150 kg N ha−1; residual soil nitrate 31 kg ha−1) was consistent with a k nl of approximately 0.07. Since this is low, in this study the variation of nitrogen per unit leaf area through the canopy is neglected; i.e. k nl = 0.0. The inclusion of a non-zero k nl would have the effect of increasing GPP, as the plant would be able to make more efficient use of the incoming radiation. In this study, trait-based physiology was switched off (i.e. l_trait_phys=F). However, the same results could be obtained by switching trait-based physiology on and choosing values for the new parameters that are equivalent to the ones used here. Figure 15 shows the observations of the nitrogen mass per unit carbon mass (left) and per unit leaf area (right) averaged over the canopy. In both plots, nitrogen decreases rapidly with time at the beginning and end of the season, which cannot be captured by JULES. The inclusion of a non-zero k nl would also not solve this problem, as this would simply increase the nitrogen per unit leaf area mid-season, as can be seen in Fig. 16 for k nl = 0.2. In this study, the temperature dependence of V cmax is fixed by fitting Eq. (A14) to the expression given in de Vries et al. (1989) (Fig. 17). The default JULES C4 grass parametrisation of V cmax is more sharply peaked, has its maximum at a higher temperature and is more asymmetrical. Also plotted is the expression for the temperature dependence of maize V cmax from Massad et al. (2007). Puntel (2012) modelled V cmax for maize at the Mead site and fitted the results with MaizeGro, using the default temperature dependence, which gives a peak at approximately 33 °C. Puntel (2012) verified this relation by successfully fitting the model to results from modern maize cultivars from Kim et al. (2007), Crafts-Brandner and Salvucci (2002) and Naidu et al. (2003), which all show the peak in V cmax at approximately the same temperature. Puntel (2012) related the normalisation of V cmax to the leaf nitrogen per unit biomass; for example, at 30 g N kg−1 at the V14 growth stage, the maximum assimilation at 25 °C was 37 µmol m−2 s−1. The temperature dependence of maize at high temperatures was examined in more detail in Crafts-Brandner and Salvucci (2002), which included an investigation into the dependence on the rate of temperature change. The experiment with the more gradual temperature change in Crafts-Brandner and Salvucci (2002) corresponds well to the high-temperature behaviour of the de Vries et al. (1989) expression.
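For orientation, the shape of the JULES temperature response in Eq. (A14) can be sketched as below: a Q10 term damped by two logistic cut-offs. The Q10, T low and T upp values here are illustrative (chosen only to produce a broad peak near 33 °C), not the tuned maize fit.

    import numpy as np

    def vcmax_temperature_factor(t_c, q10=2.0, t_low=13.0, t_upp=45.0):
        """JULES-style Vcmax temperature response (cf. Eq. A14):
        Q10 dependence damped by logistic cut-offs below t_low and
        above t_upp (temperatures in deg C)."""
        f_t = q10 ** (0.1 * (t_c - 25.0))
        return f_t / ((1.0 + np.exp(0.3 * (t_c - t_upp)))
                      * (1.0 + np.exp(0.3 * (t_low - t_c))))

    t = np.linspace(0.0, 50.0, 501)
    print(t[np.argmax(vcmax_temperature_factor(t))])  # temperature of the peak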
The canopy average V cmax,norm was tuned using the value of V cmax at 25 °C at 340 vppm CO2 at a specific leaf weight of 450 kg ha−1, which is the canopy average at DVI = 1 (for maize cv. Pioneer) from de Vries et al. (1989). n l0 was set to the approximate value of the observations in Fig. 15 (left) at DVI = 1, which then constrains n e (since n e = V cmax,norm / n l0 when k nl = 0). The quantum efficiency α was set to the value from de Vries et al. (1989) of 0.055 µmol C m−2 s−1 (µmol photons m−2 s−1)−1 for maize, which was quoted for temperatures lower than 45 °C (above this temperature it drops sharply, an effect which is not reproduced in JULES). This is consistent with values in the literature (e.g. Massad et al., 2007, and references therein) and with the fitted values of α from Puntel (2012). The value of α for maize is not dependent on leaf age or position (Dwyer and Stewart, 1986). This method of tuning the JULES parameters assumes that the two limiting rates are predominantly W c and W light, not W e. Note, however, that the photosynthesis light response curve in de Vries et al. (1989) has an exponential dependence on the absorbed radiation, which causes its shape to differ slightly from the non-rectangular hyperbolae used in JULES (with hard-wired values of curvature from Collatz et al., 1992), leading to lower values of photosynthesis below approximately 1500 µmol photons m−2 s−1. The parameters involved in calculating the leaf internal carbon dioxide partial pressure, q crit and f 0 (in Eq. A19), were not expected to strongly influence the results, since this study focusses on carbon fluxes rather than water fluxes, the runs are irrigated and the rate W e is not expected to be limiting. q crit was left at its default C4 grass value (as in Osborne et al., 2015) and f 0 was set to 0.4 (consistent with the range of maize measurements quoted in de Vries et al., 1989). Respiration. Values of µ rl = 0.39 and µ sl = 0.43 (from Eq. A22) were obtained for maize from de Vries et al. (1989) (note that this assumes one constant value for the nitrogen to carbon ratio in leaves over the crop season and τ = 0.12). Fixing the value of the dark respiration coefficient f dr (used in Eq. A20) is complicated by the inclusion in the code of the inhibition of leaf respiration in the light. Also, Atkin et al. (1997) demonstrated that respiration in darkness decreases as the time the leaf has been in darkness increases. This complicates the use of light response curves for fitting this parameter, since parameters measured during the day will not necessarily correspond to those needed in JULES for modelling the average dark respiration over a 24 h period. Using the de Vries et al. (1989) values for the maximum rate of leaf photosynthesis at 450 kg biomass per hectare and the maintenance respiration at 25 °C for maize gives a 24 h average dark respiration coefficient of 0.0081. Even with a correction for the inhibition of dark respiration in the light, this is inconsistent with the spread of dark respiration values fitted, together with the maximum assimilation, to light response curves measured at the site between 10:00 and 14:00 local time, presented in Puntel (2012) (the leaf being exposed to ambient light before the measurements), which are much higher, unless the dark respiration derived from the light curves is assumed to have a contribution from what JULES considers the "growth respiration".
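The correction applied to this 24 h value (described next) is simple arithmetic; a sketch follows, with the 50 % daylight-inhibition assumption made explicit.

    # Inhibition of dark respiration in the light reduces R_d to 0.7*f_dr*Vcmax.
    # Assume 50 % of leaves were light-inhibited on the day of measurement:
    f_dr_24h = 0.0081
    mean_inhibition = 0.5 * 1.0 + 0.5 * 0.7   # = 0.85
    f_dr = f_dr_24h / mean_inhibition          # ~ 0.0095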
In general, the dark respiration coefficient estimated from light response curves for maize appears to be higher than the value derived from the maintenance respiration measurement in de Vries et al. (1989) (e.g. Collatz et al., 1992; Dohleman and Long, 2009), which is consistent with there being a component from growth respiration. In our JULES runs, we will use f dr derived from the maintenance respiration observation in de Vries et al. (1989), corrected assuming that on the day of measurement 50 % of leaves experienced inhibition of the dark respiration by light; i.e. f dr is set to 0.0081/0.85 = 0.0095 (this assumption was later tested, and found to be accurate to within 2 %). de Vries et al. (1989) gave growth respiration coefficients of 0.22, 0.18, 0.19 and 0.18 for maize leaves, stem, roots and cob/grain respectively. These values cannot be used directly in JULES since, as described earlier, the growth respiration coefficient in JULES is a single constant rather than one value per carbon pool. Here, we set r g to 0.25 for every PFT, as in the JULES Global Land (GL4.0) configuration (Walters et al., 2014) (note, however, that this approximation of a constant r g across the plant carbon pools would break down for other crops, e.g. soybean). It is also worth noting that Puntel (2012) found that the maximum assimilation rate had a much stronger relationship with leaf nitrogen than the leaf dark respiration rate did. In addition, Stirling et al. (1994) show a strong time dependence of dark respiration in maize (using fits to light response curves), which cannot be captured in JULES: at degree day 220 (roughly where the leaf area reaches its maximum), it is approximately twice as high as at degree day 50. As we have discussed, maintenance respiration and V cmax covary in JULES, but the growth respiration is linked to net primary productivity, which increases in the crop up until approximately anthesis. Therefore, the total leaf respiration in the model will vary in time, and will have a different dependence on time to V cmax. However, the issues we have already identified with the modelling of the evolution of V cmax over time will impact the accuracy of the modelling of the maintenance component of the leaf respiration over time. Leaf dark respiration rates also differ between different maize hybrids (Earl and Tollenaar, 1998). There is therefore a large uncertainty in the parameter f dr and in the overall determination of growth respiration. Results and Discussion. In this section we present the results from the JULES runs and compare them with observations from the Mead sites. The runs with the crop model switched off and prescribed LAI and height are useful for evaluating the parameter choices for photosynthesis and respiration, without the additional complication of the feedback between LAI and NPP, and will be discussed first. The results from the full crop-model configuration will then be evaluated. Gross primary productivity. Plots of modelled GPP (blue) against observed GPP (green) are shown in Fig. 18 for the years in which irrigated maize was grown at the Mead FLUXNET sites US-Ne1 and US-Ne2. While the overall shape of the plots is good, it is clear that GPP in the model is significantly overestimated after the mid-season peak in observed GPP (corresponding to where LAI declines as the crop leaves senesce). As discussed earlier, the model V cmax at a given temperature stays constant, whereas in reality it would decline over the crop season.
Implementing this decline in JULES would result in a much closer fit between the model GPP and the observed GPP. To a lesser extent, there also appears to be an overestimation of GPP in the model before senescence. This was investigated in more detail by comparing plots of FLUXNET GPP against observed absorbed photosynthetically active radiation (APAR) with plots of model GPP against APAR, for hourly measurements before the crop reaches DVI = 1, for LAI bins of size 1. Figure 19 shows the LAI bin 3.5 to 4.5. There is a clustering of points due to the hourly resolution of the data, which is most clearly seen in the model output. Hours with high diffuse radiation fractions (red) are similar in both the FLUXNET data and the model output, although the scatter in the FLUXNET data is higher, as expected from the plots of observed FAPAR (Fig. 13). For lower diffuse radiation fractions in the model, GPP decreases due to a combination of the effect of sunflecks and an increase in the effective decay constant of absorbed PAR through the canopy at the beginning and end of the day. Even when the scatter in the FAPAR observations is taken into account, the decrease in GPP for lower diffuse radiation fractions does not appear to be as large in the model as in the GPP observations, and this is the source of the overestimation of GPP seen in the model output in Fig. 18 before the onset of senescence. This effect was investigated further by considering the dependence on air temperature and vapour pressure deficit in the FLUXNET GPP data. As expected, the lower temperature points (Fig. 20, top left) and lower vapour pressure deficit (VPD) points (Fig. 20, top right) are clustered at low values of APAR. However, there does not seem to be a dependence on temperature or VPD at constant APAR across the range of GPP observations. Soil moisture stress is a factor that we have neglected in our runs (since we have assumed perfect irrigation), which could, if implemented, reduce GPP when the soil moisture is low. However, as Fig. 20 shows for the soil moisture content at depths of 10 cm (bottom left) and 25 cm (bottom right), at higher APAR values, points below a threshold of 30 % appear to be distributed evenly across the range of GPP observations for constant APAR. Including a decrease in leaf nitrogen concentration through the canopy (while keeping the total amount of nitrogen constant) would have the effect of making the light use of the plant more efficient, which would increase model GPP still further. Decreasing V cmax,norm would have the effect of decreasing model GPP at higher APAR values, but this would not solve the issue at mid-range APAR points of ∼ 800 µmol photons (m2 ground)−1 s−1 and would also worsen the fit of the points with high diffuse radiation fractions. It is therefore difficult to see a clear way in which the model parameter settings or processes should be improved. It would be possible to improve the validation against observations by decreasing α or changing the curvature parameter in the non-rectangular hyperbola implemented for the light response within JULES (currently hard-wired), but it is difficult to justify this theoretically. Respiration. The results from the model runs without the crop model can also be used to test the parametrisation of respiration. Using a number of assumptions, the measurements from Mead can be used to get an approximate value for the leaf maintenance respiration.
First, approximate values for NPP were obtained by linearly interpolating the Mead above-ground biomass measurements to get a daily time series, and then differentiating. The fraction of NPP directed to the roots at each DVI was calculated from the expression for maize in de Vries et al. (1989) (plotted in Fig. 2) and then used to obtain the total NPP. Combining these NPP values with the GPP observations, assuming a value for the growth respiration coefficient of r g = 0.25 and summing over the crop season leads to an estimate of the plant maintenance respiration R pm. It is necessary to sum over the whole season, since the NPP and GPP calculated in this way appear to be slightly out of step with each other, and this effect dominates the daily time series of derived maintenance respiration. The interpolated carbon pool observations were used to calculate the factor 1 + (µ rl C root + µ sl C stem)/C leaf that converts between the leaf maintenance respiration and the total plant maintenance respiration. Note that the stem carbon observations had to be corrected using τ to get C stem. This factor was used to convert the leaf maintenance respiration output by the model to the total plant maintenance respiration. Figure 21 shows the R pm derived from observed GPP against R pm / f dr derived from the model leaf maintenance respiration output. The x axis is therefore independent of f dr, which can be obtained from the gradient. Data from 2010 are not included (since the crop was damaged by hail). Both the default JULES C4 grass f dr (solid line) and the f dr used in our maize configuration (dashed line) are shown. It can clearly be seen that the new maize f dr is a better fit than the default C4 grass value. While there are many model and parameter assumptions (r g, µ rl, µ sl, β = 1, C root, τ) that have gone into this plot, this is still an important consistency check of our parameters. Results from runs with the crop model switched on, using the parameters in Tables 1, 2, 3 and 4, are considered next. Figure 22 compares the model GPP and the observations, and shows very close agreement. This is influenced by a cancellation of two effects: as identified in the previous section, the GPP per unit APAR in the model is biased high, whereas the output LAI is biased low, as shown in Fig. 23. In part, the reduction in modelled LAI compared to observations was deliberately introduced when tuning the senescence parameters, so that a quicker decrease in LAI partially compensates for the model not including a decrease in leaf photosynthetic capacity. However, it is also clear that the interannual variability of LAI is not reproduced by the model; in particular, in some years (2006, 2010 and 2011 for US-Ne1 and 2011 for US-Ne2), the LAI is too small in the crop season up to anthesis. This is due to the high sensitivity of the plant in its early life to parameter settings, via the feedback between NPP and LAI. In these site and year combinations, temperatures between DVI 0.1 and DVI 0.2 are higher on average, and so DVI increases more rapidly, which gives the plant less time to accumulate NPP, leading to a reduced rate of increase of LAI with respect to DVI in the model runs at this growth stage. On the other hand, the SLA observations for these years in the early crop season are particularly high compared to the rest of the distribution, which means that the observations do not show this reduced rate of increase of LAI at this growth stage.
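The SLA refit described in the next sentence amounts to a two-parameter curve fit of SLA against DVI; a minimal sketch follows, assuming the power-law form of Eq. (A7). The observation arrays are hypothetical placeholders, not the Mead data.

    import numpy as np
    from scipy.optimize import curve_fit

    def sla_of_dvi(dvi, gamma, delta):
        """Power-law SLA relation of the form used in JULES-crop (Eq. A7)."""
        return gamma * (dvi + 0.06) ** delta

    # Hypothetical SLA observations (m2 leaf per kg C) against DVI
    dvi_obs = np.array([0.1, 0.3, 0.6, 0.9, 1.2, 1.5])
    sla_obs = np.array([26.0, 21.0, 18.5, 17.0, 16.2, 15.5])

    (gamma, delta), _ = curve_fit(sla_of_dvi, dvi_obs, sla_obs, p0=(18.0, -0.2))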
Fitting γ and δ to the SLA observations in just these site and year combinations (2006, 2010 and 2011 for US-Ne1 and 2011 for US-Ne2) gives 18.0 and −0.45 respectively. Using these parameters in JULES runs with the crop model gives much better agreement with the LAI observations (Fig. 24). This is also consistent with the result from US-Ne2 in 2010: since the crop emerges 9 days after the crop in US-Ne1, the period of relatively high temperatures mostly falls before the crop is initialised. It is possible that parametrising SLA with day after emergence rather than with DVI might improve the fit between model and observed LAI by reducing the sensitivity of the SLA parametrisation to temperature. Results from JULES runs with the crop model. The canopy height is well represented in the runs (Fig. 25). The above-ground carbon in the model also fits the observations well (Fig. 26). The harvest carbon pool (which includes the reproductive parts of the plant and the yellow leaves) is overestimated in the model, which is consistent with the overestimation of GPP during the senescence period. Conclusions. The JULES-crop parametrisation of crops within JULES was introduced to improve the carbon and energy fluxes in the model over croplands and to investigate the effect of weather and climate on food and water resources at global, regional and local scales. In this evaluation paper, we have looked in detail at how the input parameters in this pre-existing model can be tuned for one crop (maize) at one location (Mead, US), where there is a wide variety of observations with which to probe how the model components perform, both separately and in combination. In previous analyses with JULES-crop, it has been assumed that the model photosynthesis and respiration parameters can be set to the default C3 grass values for C3 crops and the default C4 grass values for C4 crops. We have used literature results and the observations available at this site to improve the maize parameters required in both the crop-model part of JULES (such as partition fractions and allometric constants) and the generic vegetation code. With the new parameters, there is good agreement between modelled GPP and observed GPP up until anthesis, if the feedback between NPP and LAI is removed by switching the crop model off and prescribing LAI (and canopy height), when the skies are mostly overcast. The model tends to overestimate GPP for clearer skies. After anthesis, there is a much greater overestimation of GPP, due to the model being unable to capture the decrease in photosynthetic capability at the leaf level over time in the crop. The respiration parameters were more difficult to test in isolation, but integrating model respiration over the entire crop season produced results that were consistent with the GPP and carbon pool observations. Running the full crop model, including all the new parameters, produced GPP time series that were very close to the observations. This was helped partially by a cancellation of two biases: the model GPP for a given LAI was biased high, as we have just discussed, and the LAI in the model was biased low compared to the observations. There were a few anomalous years in which the peak LAI in the model was approximately two-thirds that of the peak LAI in the observations, which may imply oversensitivity to initial conditions. The amount of above-ground carbon was reproduced well, although the amount of carbon in the harvest pool was overestimated in most cases.
There should be three main priorities for extending this work to improve the representation of maize at these sites. First, work should be done to tune the parametrisation of the soil moisture stress of maize, so that the water balance of the irrigated sites can be accurately modelled and runs for the non-irrigated site can also be included. Second, a parametrisation of the maximum rate of carboxylation of Rubisco, V cmax, should be added that allows it to vary over the course of the crop season. Third, these runs have been tightly constrained by using the observed sowing, emergence, flowering and harvest dates to generate the thermal times needed as input to JULES. For most regions, and for any climate projections, this sort of data will not be available. Therefore, it would be a useful test of the model to investigate the performance at the Mead sites if the model is given generic values for the thermal time parameters. While this study has focussed on modelling one crop variety at one site, it also provides a demonstration of how knowledge of the structure of the model can be used to tease apart different components of the model so that they can be tuned or evaluated against observations. This ranged from the tuning of parameters in simple allometric relations, such as that relating stem carbon to canopy height, to tuning the canopy parameters using the external representation of the canopy scheme in pySellersTwoStream, up to running JULES with the crop model switched off and prescribed LAI and canopy height, in order to tune GPP without the complication of the feedback between GPP and LAI. It therefore provides a case study which can be used when setting up and evaluating the model for other crop varieties and sites. Code availability. This study uses JULES revision 5061, which is between the 4.6 and 4.7 releases. The code can be downloaded from the JULES FCM repository at https://code.metoffice.gov.uk/trac/jules/ (JULES collaboration, 2017) (registration required). The pySellersTwoStream package is available at https://github.com/tquaife/pySellersTwoStream (Quaife, 2016). The version used in this study was downloaded on 15 September 2016. Data availability. Unless otherwise noted, all site observations discussed in this paper were obtained from the Site Information pages of the AmeriFlux website hosted by Oak Ridge National Laboratory (http://public.ornl.gov/ameriflux/, AmeriFlux collaboration, 2016) or by personal communication with the Mead sites' Research Technologist. Note: these data are currently being transitioned to a new location: http://fluxnet.fluxdata.org/. Appendix A: Model description. In this section, we summarise the relevant features of JULES and the JULES-crop parametrisation within it, paying particular attention to new model features available since the Osborne et al. (2015) study (i.e. post version 4.0). These new options are indicated in Tables 1, 2, 3 and 4. A1 Crop model. In JULES-crop, the development status of each crop within a grid box is parametrised by a crop development index (DVI). DVI is −2 before sowing, −1 at sowing, 0 at emergence and 1 at flowering. Under favourable conditions, harvest occurs at a DVI of 2. The DVI has three main functions within the JULES-crop model: it determines the harvest date, the partitioning of NPP between the crop carbon pools and, through the specific leaf area, the relation between leaf carbon and LAI.
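The DVI bookkeeping that this implies, detailed in the following paragraphs, can be sketched compactly. The cardinal temperatures used below are illustrative only, not the tuned maize values.

    def effective_temperature(t, t_b, t_o, t_m):
        """Triangular effective temperature (cf. Eq. A1): zero below t_b and
        above t_m, peaking at t_o (all in the same temperature units)."""
        if t <= t_b or t >= t_m:
            return 0.0
        if t <= t_o:
            return t - t_b
        return (t_o - t_b) * (t_m - t) / (t_m - t_o)

    def dvi_increment(daily_mean_temp, stage_thermal_time,
                      t_b=8.0, t_o=30.0, t_m=42.0):
        """DVI advance per day: accumulated effective temperature divided by
        the thermal time required for the current development stage."""
        return effective_temperature(daily_mean_temp, t_b, t_o, t_m) / stage_thermal_time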
The increase in DVI over the course of the crop's lifetime is determined by crop-specific thermal time parameters, set by the user. If the dependence on photoperiod length is neglected (as in Osborne et al., 2015), thermal time becomes an accumulation of effective temperature T eff between one development stage and the next, where the effective temperature is defined by
T eff = T − T b for T b <= T <= T o; T eff = (T o − T b)(T m − T)/(T m − T o) for T o < T <= T m; T eff = 0 otherwise, (A1)
i.e. a triangular function, peaking at an optimal temperature T o, which is zero below a base temperature T b and above a maximum temperature T m. T o, T b and T m are parameters specified by the user for each crop. T o, T b and T m are given in Kelvin and thermal time in units of degree days. Crop growth is modelled by accumulating net primary productivity over the course of a day (NPP acc) and splitting this carbon between the crop root, stem, leaf, harvest and reserve carbon pools for that tile (C root, C leaf, C stem, C harv and C resv respectively) according to
ΔC root = p root NPP acc; ΔC leaf = p leaf NPP acc; ΔC stem = (1 − τ) p stem NPP acc; ΔC resv = τ p stem NPP acc; ΔC harv = p harv NPP acc, (A2)
where τ is the fraction of stem carbon that is partitioned into the stem reserve pool (containing the remobilisable carbohydrates) and p i (for i = root, stem, leaf, harv) are the partition coefficients defined by
p i = exp(α i + β i DVI) / Σ j exp(α j + β j DVI), (A3)
where j = root, stem, leaf, harv. α i and β i are numerical constants that are tuned to observational data. α harv and β harv are both set to zero. All other α i and β i are set by the user for each crop. Note that Σ j p j = 1. The crop carbon pools are initialised at DVI init, which is at or just after emergence. At initialisation, the crops are given a certain amount of carbon C init, which is distributed between the carbon pools according to the values of p i at DVI = DVI init. Once p stem drops below 0.01, carbon from the stem reserve pool is mobilised to the harvest pool, by reducing C resv by 10 % each day and adding this carbon to the harvest pool (as proposed in de Vries et al., 1989). Similarly, once the DVI is above a threshold value DVI sen, carbon from the leaf pool is mobilised to the harvest pool to simulate leaf senescence, by reducing C leaf each day when DVI > DVI sen by the fraction
µ (DVI − DVI sen)^ν, (A4)
where ν and µ are numerical constants that are tuned to observational data. After DVI init and if the sowing date is prescribed, the model harvests the crop and resets the crop tile if any of the following conditions are satisfied:
1. DVI reaches 2 (i.e. the desired harvest condition);
2. LAI > 15, since once the model reaches such a large LAI it is clearly unrealistic;
3. the temperature of the second soil layer from the top falls below a user-defined temperature T mort at any time after DVI = 1;
4. DVI > 1.0, the carbon in the roots, leaves, stem and stem reserve pool of the crop falls below C init and the amount of carbon in the harvest pool is greater than zero;
5. the crop age reaches 1 year, so that a new crop can be sown each year.
The crop height h is calculated from the C stem pool using
h = κ (C stem / f C,stem)^λ, (A5)
where κ and λ are allometric constants and f C,stem is the fraction of carbon in the dried stem (excluding the stem reserves), all given as input by the user. The green (i.e. photosynthesising) leaf area index (LAI) is calculated from the leaf carbon and the specific leaf area (SLA) by
LAI = SLA × C leaf / f C,leaf, (A6)
where f C,leaf is the carbon fraction of the dry leaves. The SLA depends on the DVI via
SLA = γ (DVI + 0.06)^δ, (A7)
where γ and δ are allometric constants which are set by the user. JULES-crop outputs the water-limited potential yield if irrigation is switched off and the potential yield if irrigation is on, expressed in kg C m−2.
This yield is calculated by multiplying the value of C harv on the day of harvest by a parameter f yield supplied by the user, which represents the fraction of C harv that is economically valuable, i.e. the maize kernel in our runs. A2 Relationship between LAI, canopy height and plant carbon for natural vegetation. When the crop model is switched off, different allometric functions are used to approximate the carbon in the leaf, stem and root pools based on the prognostics LAI and canopy height h. These allometric functions make use of a "balanced" leaf area index (LAI bal), which is calculated from the canopy height using
LAI bal = (a ws η sl h / a wl)^(1/(b wl − 1)), (A8)
where a ws, a wl, η sl and b wl are all allometric constants, defined in relation to the respiring stem carbon S and the total stem carbon W:
S = η sl h LAI bal, (A9)
W = a wl LAI bal^b wl, (A10)
W = a ws S. (A11)
We assume here that S is equivalent to C stem and W is equivalent to C stem + C resv in the crop model. Therefore, a ws is equivalent to 1 − τ in the crop model and these equations can be compared directly to Eq. (A5) until the start of the remobilisation of the crop stem reserve pool. The size of the leaf carbon pool C leaf is calculated by multiplying the LAI by the canopy-averaged specific leaf density σ l (in kg C (m2 leaf)−1), which is assumed to be constant, i.e.
C leaf = σ l LAI. (A12)
The root carbon C root is approximated by
C root = σ l LAI bal. (A13)
A3 Canopy. JULES has a number of options for calculating the photosynthetically active radiation (PAR) available to leaves at different depths in the plant canopy. In this discussion, we focus on the canopy radiation scheme used in Osborne et al. (2015) (can_rad_mod 5) and the canopy radiation scheme currently recommended for layered canopies in JULES (can_rad_mod 6), which both treat the direct and diffuse components of the incident radiation separately (as in Sellers, 1985) and include sunflecks. We also assume a zenith angle dependence (l_cosz=T). JULES assumes that the incident PAR is half of the incident shortwave radiation. The fraction of the incident PAR composed of diffuse radiation is given as part of the driving data. The canopy is split into 10 equal layers of green leaf area index (LAI). The equations for absorption and scattering at each layer for the incident diffuse beam and the incident direct beam are solved separately, taking into account the distribution of leaf angles and the zenith angle. The sunlit fraction of the leaf is also calculated, and absorbs light from the direct component of the direct beam radiation ("sunflecks"), in addition to the diffuse light from the direct beam and light from the diffuse beam. The shaded fraction of the leaf absorbs light scattered from the direct beam and light from the diffuse beam only (i.e. no direct sunlight). JULES has two leaf angle distributions currently implemented: spherical and horizontal. As of JULES version 4.6, JULES also includes a canopy clumping factor a, which scales LAI within the canopy radiation scheme and represents variation within and across canopy structures. A4 Modelling C4 photosynthesis. In JULES, potential leaf-level photosynthesis (unstressed by water availability and ozone effects) is calculated as the smoothed minimum of three rates, following Collatz et al. (1991, 1992): (a) the Rubisco-limited rate W c, which depends on the maximum rate of carboxylation of Rubisco; (b) the light-limited rate W light; and (c) the rate associated with the transport of photosynthetic products for C3 plants or PEP (phosphoenolpyruvate) carboxylase limitation for C4 plants, W e.
For C4 plants, W c is set to the maximum rate of carboxylation of Rubisco, V cmax. V cmax is calculated using
V cmax = V cmax,norm f T(T c) / [(1 + exp(0.3 (T c − T upp))) (1 + exp(0.3 (T low − T c)))], with f T(T c) = Q 10^(0.1 (T c − 25)), (A14, A15)
where T c is the leaf temperature in °C; with this form, V cmax,norm and V cmax (T c = 25 °C) are within 5 % of each other. T upp and T low are used to give the leaf an optimum temperature range, which is superimposed on the Q 10 dependence in f T. If trait-based physiology is switched off in JULES (l_trait_phys=F), V cmax,norm = n e n l, where n l is the mass of nitrogen per mass of carbon in the leaf (with units kg N (kg C)−1), which varies through the canopy, and n e is a normalisation constant, fitted to data. The input parameters specified by the user are n l0 (n l at the top of the canopy) and n e. In the JULES canopy radiation scheme can_rad_mod 5, V cmax,norm is assumed to vary through the canopy according to exp(−k n LAI layer / LAI). In can_rad_mod 6, V cmax,norm varies through the canopy according to exp(−k nl LAI layer). k n and k nl are PFT-dependent parameters set by the user. The light-limited rate of leaf photosynthesis for C4 plants is calculated in JULES using
W light = α I APAR, (A16)
where α is the quantum efficiency in mol CO2 (mol PAR photons)−1 and I APAR is the absorbed photosynthetically active radiation (APAR) in mol PAR photons m−2 s−1. As discussed, can_rad_mod 5 and can_rad_mod 6 include the effect of sunflecks by splitting the leaf into a sunlit and a shaded part, which have different values of I APAR and therefore different W light. The rate associated with PEP carboxylase limitation W e in JULES is
W e = 2 × 10^4 V cmax c i / P*, (A17)
where P* is the surface air pressure and c i is the leaf internal carbon dioxide partial pressure, which is calculated for C4 plants using
c i = (c a − Γ) f 0 (1 − q / q crit) + Γ, (A19)
where Γ is the photorespiration compensation point (zero for C4 plants) and c a is the canopy CO2 partial pressure. q is the canopy level specific humidity deficit, q crit is the critical specific humidity deficit and f 0 is the ratio of c i to c a at which the canopy level specific humidity deficit is zero. c a is calculated from R CO2 P* / ε, where R CO2 is the atmospheric CO2 mass mixing ratio and ε = 1.5194 is the ratio of the molecular weights of CO2 and dry air. As an example, for zero specific humidity deficit, an atmospheric CO2 mass mixing ratio of 5.6 × 10−4 (2003 global average; Dlugokencky and Tans, 2016) and f 0 = 0.8 (the JULES C4 grass default), the value of W e is 5.9 V cmax. The rate of gross leaf photosynthesis W is the smoothed minimum of W c, W light and W e (calculated using non-rectangular hyperbolic functions with the curvature parameters hard-wired). The net potential (i.e. unstressed) leaf photosynthetic carbon uptake A p is the gross leaf photosynthesis minus the dark leaf respiration R d. The potential leaf photosynthesis is converted to a net photosynthesis by multiplying by a soil water stress parameter β. Stomata at points with negative or zero net photosynthesis, or where the leaf resistance exceeds its maximum value, are closed (i.e. leaf gross photosynthesis is zero). The leaf resistance is calculated from the net (i.e. water-limited) rate of photosynthesis, (c a − c i), the leaf temperature and the ratio of the leaf resistance for CO2 to the leaf resistance for H2O (= 1.6). A5 Respiration. In JULES, the (non-water-limited) leaf dark respiration R d (in mol CO2 (m2 leaf)−1 s−1) is calculated by
R d = 0.7 f dr V cmax if I APAR LAI > 10 µmol photons (m2 ground)−1 s−1; R d = f dr V cmax otherwise, (A20)
to allow for the inhibition of dark respiration during daylight. R d is summed over the canopy levels for sunlit and shaded leaves to get R dc, the canopy dark respiration (in mol CO2 (m2 ground)−1 s−1).
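Before moving on to maintenance respiration, the W e example above can be verified numerically; the short sketch below reproduces the 5.9 V cmax figure from the quantities quoted in the text.

    # Verify the W_e example for C4 plants at zero specific humidity deficit
    r_co2 = 5.6e-4          # atmospheric CO2 mass mixing ratio (2003 average)
    eps = 1.5194            # ratio of molecular weights of CO2 and dry air
    f0 = 0.8                # JULES C4 grass default

    c_a_over_p = r_co2 / eps            # c_a as a fraction of surface pressure
    c_i_over_p = f0 * c_a_over_p        # Gamma = 0 for C4 plants, q = 0
    w_e_over_vcmax = 2.0e4 * c_i_over_p
    print(round(w_e_over_vcmax, 1))     # -> 5.9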
The plant maintenance respiration R pm (in kg C (m2 ground)−1 s−1) is calculated (for the setting l_scale_resp_pm=T) using
R pm = 0.012 R dc β [1 + (N root + N stem) / N leaf] = 0.012 R dc β [1 + µ rl C root / C leaf + µ sl C stem / C leaf], (A21, A22)
where N root, N stem and N leaf are the nitrogen in the roots, stems and leaves respectively. µ rl is the mass ratio of nitrogen to carbon in the roots divided by the ratio of nitrogen to carbon in the leaves. µ sl is the mass ratio of nitrogen to carbon in the stem (not including stem reserves) divided by the ratio of nitrogen to carbon in the leaves. The factor 0.012 relates mol CO2 to kg C. If the option l_scale_resp_pm=F is set, the root and stem terms do not depend on β. In JULES, the plant growth respiration R pg is a fixed fraction r g (the growth respiration coefficient) of the gross primary productivity Π G minus the plant maintenance respiration:
R pg = r g (Π G − R pm). (A23)
Note that this relation results in the correct growth respiration on timescales of the order of a day or longer (on the model time step scale, R pg will be negative at night, which is misleading if taken in isolation). The net primary productivity Π N is therefore
Π N = (1 − r g)(Π G − R pm). (A24)
A6 Irrigation. In JULES, irrigation is implemented such that the water in the top two soil layers is continuously topped up to a critical level (often the field capacity) during the "irrigation season", if sufficient irrigation water is available. We consider the irrigation season to last all year (irr_crop=0) and treat the supply of irrigation water as unlimited (l_irrig_limit=F). With these settings, the soil water stress parameter β stays approximately equal to 1; i.e. the plant is not water stressed. When irrigation is on, the root distribution has a negligible influence on model performance. A7 Nitrogen limitation. Although JULES has a nitrogen cycle implemented (as of version 4.4), it cannot yet be used in conjunction with the crop model. We therefore make the assumption here that the crops are not nitrogen limited.
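Tying together the respiration relations of Sect. A5, seasonal NPP follows directly from GPP and maintenance respiration; a minimal sketch, with hypothetical daily values:

    import numpy as np

    def npp_from_gpp(gpp, r_pm, r_g=0.25):
        """Net primary productivity: Pi_N = (1 - r_g) * (Pi_G - R_pm).
        The growth respiration is R_pg = r_g * (Pi_G - R_pm)."""
        return (1.0 - r_g) * (gpp - r_pm)

    # Hypothetical daily series in kg C m-2 day-1
    gpp = np.array([0.010, 0.014, 0.018, 0.016, 0.012])
    r_pm = np.array([0.003, 0.004, 0.005, 0.005, 0.004])
    print(npp_from_gpp(gpp, r_pm).sum())   # seasonal total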
The effect of the use of natural gas on the emissivity of a flame in a cylinder of an automobile diesel engine The issues and features of heat transfer in the cylinder of a high-speed diesel engine of 4CHN 11,0/12,5 dimension when working on diesel and gas engine fuel (gas-diesel process) are considered. The spectral and integrated radiation characteristics of soot particles in the cylinder of the 4CHN 11,0/12,5 gas diesel engine were calculated as a function of the crankshaft rotation angle. Introduction Diesels have no practical alternative as power plants in automotive, tractor and agricultural engineering, and their characteristics ultimately determine the operational, energy, economic, environmental and overall dimensional properties of the equipment. In modern piston diesels, the working process is characterized by intense thermal and gas-dynamic processes. These processes should be organised so as to improve the effective performance of diesel engines. An accurate analytical description of the set of physicochemical processes associated with heat transfer in diesel engines has not yet been created, owing to the complexity of these phenomena and the multiplicity of factors affecting them. Moreover, the recent growth in work on the use of alternative fuels in diesel engines (compressed natural gas, alcohol fuels, fuels based on vegetable oils, etc.) pays little attention to heat transfer processes. Therefore, studies aimed at investigating and improving working processes in diesel engines, including operation on alternative fuels, remain in demand and far from complete. The issues of heat transfer, which in piston internal combustion engines is of a pronounced local character, are of major applied value. The heat exchange process in the combustion chambers (CC) of diesel engines is radiation-convective, or complex. Therefore, when studying such heat transfer, it is necessary to solve the radiative energy transfer equations in conjunction with the equations describing the gas dynamics and hydrodynamics of the processes and convective heat transfer [10][11][12][13][14][15]. When considering radiative heat transfer in a diesel cylinder, it is assumed that the working fluid in the cylinder is a medium that emits, absorbs and scatters thermal energy. Moreover, the working fluid is a dispersed medium, since it contains soot particles in its volume, which are the main generators of thermal radiation. Like temperature, the local concentration of soot particles in the cylinder volume is inhomogeneous. It depends on the mode of operation, the angle of rotation of the crankshaft and mass transfer, i.e. the directions and intensities of convective flows in the cylinder. Accordingly, the attenuation coefficient of the beam, which is one of the most important optical parameters of the medium, will change, since its value depends on the concentration of suspended particles. In addition, during the working cycle the cylinder contains a multicomponent medium consisting of gases (air and gaseous combustion products), fuel vapours, droplets of liquid fuel and solid soot particles. All this must be taken into account when determining the total heat fluxes perceived by the walls of the combustion chambers [16][17][18][19][20][21]. Experimental part The radiation from the medium inside the diesel cylinder is continuous and similar to gray-body radiation, but non-uniform.
The presence of soot particles in the volume greatly increases the intensity of the thermal radiation. As has already been noted, many factors affect the thermal radiation in a diesel cylinder. In [4] they are divided into four main groups, which have their own characteristics with regard to diesel engines. Firstly, the geometric parameters of the emitting volume: since the shape and dimensions of the combustion chamber, the number of nozzle spray holes, the shape and direction of the air flows in the CC and the type of mixture formation differ between diesel engine types, the heat flow will differ as well. Secondly, the radiation characteristics of the condensed phase. These include the optical constants (refractive and absorption indices), the sizes and size distribution of the particles (primarily soot), the chemical composition of the condensed phase, etc. Thirdly, the radiation characteristics of the gas phase. These include the chemical and thermal nonequilibrium in the CC, the wavelengths and spectral range of radiation of the main components of the phase, the temperature and its distribution in the CC, the gas pressure, the chemical composition of the medium, the optical properties of the gas phase, and a number of other parameters. Fourthly, the physical characteristics of the surfaces that bound the emitting volume: the temperature of the bounding surfaces, their reflective and emissive abilities, and the boundary conditions. For piston internal combustion engines, the piston and cylinder head are considered first of all, and the cylinder walls to a lesser extent. To calculate the radiation heat flux in a diesel engine, it is necessary to know the temperature of the emitter, the degree of blackness (emissivity) of the emitting and absorbing medium, and the degree of blackness of the diesel engine surfaces [22][23][24][25][26][27][28]. As for the temperature of the soot particles, most researchers agree that the temperature of the particles and the temperature of the surrounding gas can be assumed equal, although dissenting opinions exist. In some works it was shown that the temperature difference between soot particles and gas does not exceed 1 K for particle sizes up to 0,3·10⁻⁶ m. In another work, equality of the soot particle and gas temperatures was shown experimentally, within an error of ±60 K, for particles about 8·10⁻⁸ m in size [1]. The degree of blackness ε is among the most important radiation characteristics. It depends on the nature of the body, the temperature and the surface roughness. The degree of blackness of the working fluid in the diesel cylinder during the cycle depends on the load. In previously published works on this subject the diesel flame is considered a gray body, i.e. one radiating over the whole wavelength range λ. In this case the bulk of the radiated thermal energy falls within a certain range; different sources give different bounds, but on average it is 0,5...10 microns. In [1], an empirical formula is given for determining the degree of blackness of the working fluid as a function of the angle of rotation φ of the crankshaft and the average effective pressure of the gases in the cylinder. However, this expression is applicable only to diesel engines running on traditional diesel fuel. How applicable it is to fuels with a different chemical structure needs to be checked additionally.
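As an illustration of how such an empirical degree of blackness enters a radiant heat flux estimate, the sketch below evaluates the gray-body flux q = ε·σ·T⁴ over a range of crank angles. The functional form of ε(φ, p_e) and the temperature trace are hypothetical placeholders, not the formula from [1].

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def emissivity(phi_deg, p_e):
    """Hypothetical stand-in for an empirical degree of blackness
    eps(phi, p_e); the actual formula from [1] is not reproduced here.
    It peaks near top dead center and grows with effective pressure (MPa)."""
    shape = math.exp(-((phi_deg - 10.0) / 40.0) ** 2)  # crank-angle dependence
    return min(1.0, 0.3 + 0.5 * p_e) * shape

def radiant_flux(phi_deg, T_gas, p_e):
    """Gray-body radiant flux q = eps * sigma * T^4 toward the chamber
    walls (wall emission and geometry factors are neglected here)."""
    return emissivity(phi_deg, p_e) * SIGMA * T_gas ** 4

# Illustrative flame temperature trace over the crank angle
for phi in range(-20, 61, 20):
    T = 1800.0 + 600.0 * math.exp(-((phi - 15.0) / 30.0) ** 2)
    q = radiant_flux(phi, T, 0.55) / 1000.0
    print(f"phi = {phi:4d} deg  T = {T:6.0f} K  q = {q:7.1f} kW/m^2")
```

With these placeholder values the peak flux comes out near one MW/m², which is the right order of magnitude for soot-laden diesel flames; the point of the sketch is only the structure of the calculation, not the numbers.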
Interesting results of the experimental determination of the local temperatures of the working fluid in the cylinder of a 1CHN 18/20 diesel engine by the color temperature method are given in [2]. The study was carried out at p_e = 0,5...0,6 MPa and n = 1500 min⁻¹; high-speed photography and filming were carried out at 4000...5000 frames per second. It was established experimentally that the working fluid in the cylinder has a very inhomogeneous temperature field with a wide range of temperature gradients (from 30...70 K/mm inside one zone to 300...500 K/mm at the borders between burned and unburned zones). In addition, it turned out that when operating on diesel fuel a temperature above 1700 K is found in approximately 28% of the working fluid mass in the cylinder, above 2000 K in 27%, above 2200 K in 22%, above 2400 K in only about 2% and above 2600 K in about 0,2%. The rest of the working fluid (about 20%) has a temperature below 1700 K [29][30][31][32][33]. All the more interesting are the issues related to the study of local and boundary zones and the processes occurring in them when the diesel engine is operated on fuels of a different chemical composition and, more importantly, a different chemical structure. Compressed natural gas differs in a number of its motor properties from petroleum diesel fuel, and the use of the gas-diesel process leads to the formation of local zones that directly affect the processes of soot formation and oxidation of soot particles. This, in turn, affects the intensity of radiant heat transfer. The supply of a pilot portion of diesel fuel leads to the formation of zones with a lack of oxidizer in the cores of the fuel sprays, and soot formation will be predominant in these zones. At the same time, as the sprays develop further, new portions of the methane-air mixture are drawn into the combustion, in which the soot particles are oxidized. As is known, the combustion of soot particles is accompanied by the release of a large amount of radiant energy. We studied the operation of a diesel engine on compressed natural gas supplied to the cylinders together with the air charge and ignited by a pilot portion of diesel fuel supplied through the standard fuel system (the so-called gas-diesel process), and examined the optical properties and radiation characteristics of the flame in the diesel cylinder. A feature of such studies is the presence of a large number of components in the combustion products, owing to the different carbon-to-hydrogen ratio of the fuel molecules, and their heterogeneity, that is, the presence of gas and solid phases, which of course affects the emissivity and absorptivity of the medium, the degree of blackness of the flame and other radiation characteristics. In our calculations we used the «SPEKTR» and «CARBON» program packages for modeling the optical properties, radiation characteristics and thermal radiation. The comprehensive program «SPEKTR», developed in FORTRAN, is designed to calculate the radiation characteristics (RC) of the heterogeneous combustion products (HCP) of internal combustion engines. It allows one to carry out calculations for the real components of the gas and condensed phases of the combustion products, with any distribution of condensate particles, over a wide range of thermo- and gas-dynamic parameters. The «CARBON» program calculates the RC of ICE combustion products.
The initial data for the calculations are: the dimensions and geometry of the emitting volume, the radiation characteristics of the surfaces, the gas and particle temperatures, the pressure, the mass fraction of condensate, the molar mass, the particle density, the particle size distribution function, the optical properties and the concentrations of the main components of the gas phase. The calculation results are the RC of individual particles and of a unit volume, the coefficients of the expansion of the scattering indicatrix in Legendre polynomials, the spectral and integral flux densities, and the spectral and integral degrees of blackness. According to published data, the temperature in the core of the fuel spray is 900...1000 K, while the gas temperature in the volume of the CC is from 1200 to 1800 K, depending on the angle of rotation of the crankshaft [24,28,30]. Conclusion For the CC of diesel engines, the calculation of radiant heat transfer must take into account a number of specific features related both to the non-stationary nature of the process and to the geometry of the CC and of the fuel spray in it. Depending on the degree of turbulence of the air flow in the chamber and the shape of the fuel jet, a soot particle concentration field is formed. The finite duration of fuel injection, the polydisperse composition of the fuel droplets in the spray, the uncertainty of the coordinates of the self-ignition centers, the turbulence of the in-cylinder volume caused by the piston motion and the combustion process, the constantly changing volume, and the constantly changing concentration of soot particles and their dispersed composition practically exclude the possibility of directly calculating the instantaneous local concentrations and dispersed composition of the soot particles, which is necessary for calculating the radiant heat exchange [22,24].
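A minimal sketch of the kind of calculation such programs perform is given below, using the small-particle (Rayleigh-limit) approximation for the soot spectral absorption coefficient, k_λ ≈ C·f_v/λ, Beer-Lambert attenuation over a path length L, and Planck-weighted averaging to obtain the integral degree of blackness. The constant C, the soot volume fraction f_v and the path length are illustrative assumptions, not inputs or outputs of the actual SPEKTR or CARBON programs.

```python
import math

C1 = 3.741771852e-16   # first radiation constant, W*m^2
C2 = 1.438776877e-2    # second radiation constant, m*K

def planck(lam, T):
    """Spectral black-body emissive power per unit wavelength, W/m^3."""
    return C1 / (lam ** 5 * (math.exp(C2 / (lam * T)) - 1.0))

def soot_spectral_emissivity(lam, f_v, L, C_soot=4.0):
    """eps_lambda = 1 - exp(-k_lambda * L) with the Rayleigh-limit soot
    absorption coefficient k_lambda ~ C_soot * f_v / lambda."""
    return 1.0 - math.exp(-C_soot * f_v / lam * L)

def integral_emissivity(T, f_v, L, lam_min=0.5e-6, lam_max=10e-6, n=2000):
    """Planck-weighted integral degree of blackness over the range where
    the bulk of the thermal energy is radiated (0,5...10 microns)."""
    dlam = (lam_max - lam_min) / n
    num = den = 0.0
    for i in range(n):
        lam = lam_min + (i + 0.5) * dlam
        b = planck(lam, T)
        num += soot_spectral_emissivity(lam, f_v, L) * b * dlam
        den += b * dlam
    return num / den

# Illustrative case: flame at 2000 K, soot volume fraction 1e-6, 0.1 m path
print(f"integral degree of blackness: {integral_emissivity(2000.0, 1e-6, 0.1):.3f}")
```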
2020-05-28T09:15:39.645Z
2020-05-28T00:00:00.000
{ "year": 2020, "sha1": "d6d5b4ceae77d30bdd31e909df7cf51e5e646cbf", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/862/6/062065", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e1d8f2e164f6ea93e32a80538ba10be32f6c55ad", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
612535
pes2o/s2orc
v3-fos-license
Automatic Animacy Classification We introduce the automatic annotation of noun phrases in parsed sentences with tags from a fine-grained semantic animacy hierarchy. This information is of interest within lexical semantics and has potential value as a feature in several NLP tasks. We train a discriminative classifier on an annotated corpus of spoken English, with features capturing each noun phrase's constituent words, its internal structure, and its syntactic relations with other key words in the sentence. Only the first two of these three feature sets have a substantial impact on performance, but the resulting model is able to fairly accurately classify new data from that corpus, and shows promise for binary animacy classification and for use on automatically parsed text. Introduction An animacy hierarchy, in the sense of Zaenen et al. (2004), is a set of mutually exclusive categories describing noun phrases (NPs) in natural language sentences. These classes capture the degree to which the entity described by an NP is capable of humanlike volition: a key lexical semantic property which has been shown to trigger a number of morphological and syntactic phenomena across languages. Annotating a corpus with this information can facilitate statistical semantic work, as well as providing a potentially valuable feature, discussed in Zaenen et al., for tasks like relation extraction, parsing, and machine translation. The handful of papers that we have found on animacy annotation (centrally Ji and Lin (2009), Øvrelid (2005), and Orasan and Evans (2001)) classify only the basic ANIMATE/INANIMATE contrast, but show some promise in doing so. Their work shows success in automatically classifying individual words, and related work has shown that animacy can be used to improve parsing performance (Øvrelid and Nivre, 2007). We adopt the class set presented in Zaenen et al. (2004), and build our model around the annotated corpus presented in that work. Their hierarchy contains ten classes, meant to cover a range of categories known to influence animacy-related phenomena cross-linguistically. They are HUMAN, ORG (organizations), ANIMAL, MAC (automata), VEH (vehicles), PLACE, TIME, CONCRETE (other physical objects), NONCONC (abstract entities), and MIX (NPs describing heterogeneous groups of entities). The class definitions are straightforward (every NP describing a vehicle is a VEH), and Zaenen et al. offer a detailed treatment of ambiguous cases. Unlike the class sets used in named entity recognition work, these classes are crucially meant to cover all NPs. This includes freestanding nouns like people, as well as pronominals like that one, for which the choice of class often depends on contextual information not contained within the NP, or even the sentence. In the typical case where the head of an NP belongs unambiguously to a single animacy class, the phrase as a whole nearly always takes on the class of its head: The Panama hat I gave to my uncle on Tuesday contains numerous nominals of different animacy classes, but hat is the unique syntactic head, and determines the phrase to be CONCRETE. Heads can easily be ambiguous, though: My stereo speakers and the speakers at the panel session belong to different classes, but share a (polysemous) head. The corpus that we use is Zaenen et al.'s animacy-annotated subset of the hand-parsed Switchboard corpus of conversational American English. It is built on, and now included in, Calhoun et al.'s (2010) NXT version of Switchboard.
This annotated section consists of about 110,000 sentences with about 300,000 NPs. We divide these sentences into a training set (80%), a development set (10%), and a test set (10%). Every NP in this section is either assigned a class or marked as problematic, and we train and test on all the NPs for which the annotators were able to agree (after discussion) on an assignment. Methods We use a standard maximum entropy classifier (Berger et al., 1996) to classify constituents: For each labeled NP in the corpus, the model selects the locally most probable class. Our features are described in this section. We considered features that required dependencies between consecutively assigned classes, allowing large NPs to depend on smaller NPs contained within them, as in conjoined structures. These achieved somewhat better coverage of the rare MIX class, but did not yield any gains in overall performance, and are not included in our results. Bag-of-words features Our simplest feature set, HASWORD-(tag-)word, simply captures each word in the NP, both with and without its accompanying part-of-speech (POS) tag. Internal syntactic features Motivated by the observation that syntactic heads tend to determine animacy class, we introduce two features: HEAD-tag-word contains the head word of the phrase (extracted automatically from the parse) and its POS tag. HEADSHAPE-tag-shape attempts to cover unseen head words by replacing the word string with its orthographic shape (substituting, for example, Stanford with Ll and 3G-related with dLl). External syntactic features The information captured by our tag set overlaps considerably with the information that verbs use to select their arguments. The subject of see, for example, must be a HUMAN, MAC, ANIMAL, or ORG, and the complement of above cannot be a TIME. As such, we expect the verb or preposition that an NP depends upon and the type of dependency involved (subject, direct object, or prepositional complement) to be powerful predictors of animacy, and introduce the following features: SUBJ(-OF-verb), DOBJ(-OF-verb) and PCOMP(-OF-prep)(-WITH-verb). We extract these dependency relations from our parses, and mark an occurrence of each feature both with and without each of its optional (parenthetical) parameters. Results The following table shows our model's precision and recall (as percentages) for each class and the model's overall accuracy (the percent of labeled NPs which were labeled correctly), as well as the number of instances of each class in the test set. Binary classification We test our model's performance on the somewhat better-known task of binary (ANIMATE/INANIMATE) classification by merging the model's class assignments into two sets after classification, following the grouping defined in Zaenen et al. While none of our architectural choices were made with binary classification in mind, it is heartening to know that the model performs well on this easier task. Overall accuracy is 93.50%, while a baseline model that labels each NP ANIMATE achieves only 53.79%. All of the feature sets contribute measurably to the binary model, and external syntactic features do much better on this task than on fine-grained classification, despite remaining the worst of the three sets: They achieve 78.66% when used alone. We have found no study on animacy in spoken English with which to compare these results.
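A minimal sketch of this classification setup is given below, using scikit-learn's logistic regression (a maximum entropy model) over dictionary-valued features. The feature-construction code paraphrases the three feature sets described above; the toy NPs, their gold classes, and the binary grouping are our illustrative assumptions, not the paper's data, and the real system extracts heads and dependencies from the Switchboard parses.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def np_features(words, tags, head_idx, dep=None):
    """Bag-of-words, head and head-shape features for one NP, plus an
    optional external dependency feature, in the spirit of the three
    feature sets described above."""
    feats = {}
    for w, t in zip(words, tags):
        feats["HASWORD-" + w] = 1
        feats["HASWORD-" + t + "-" + w] = 1
    feats["HEAD-" + tags[head_idx] + "-" + words[head_idx]] = 1
    # Orthographic shape with runs collapsed: Stanford -> Ll, 3G-related -> dLl
    raw = "".join("L" if c.isupper() else "d" if c.isdigit() else "l"
                  for c in words[head_idx])
    shape = "".join(c for i, c in enumerate(raw) if i == 0 or c != raw[i - 1])
    feats["HEADSHAPE-" + tags[head_idx] + "-" + shape] = 1
    if dep is not None:              # e.g. ("SUBJ-OF", "see")
        feats[dep[0]] = 1            # both with and without the governing word
        feats[dep[0] + "-" + dep[1]] = 1
    return feats

# Toy training data; NPs and gold animacy classes are hypothetical examples.
train = [
    (np_features(["my", "uncle"], ["PRP$", "NN"], 1, ("SUBJ-OF", "see")), "HUMAN"),
    (np_features(["the", "hat"], ["DT", "NN"], 1, ("DOBJ-OF", "give")), "CONCRETE"),
    (np_features(["next", "tuesday"], ["JJ", "NN"], 1), "TIME"),
    (np_features(["the", "committee"], ["DT", "NN"], 1, ("SUBJ-OF", "decide")), "ORG"),
]
X, y = zip(*train)
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(X), list(y))

# Fine-grained prediction, then merging to binary; the grouping below is our
# reading of the animate/inanimate split in Zaenen et al., not a quotation.
ANIMATE = {"HUMAN", "ORG", "ANIMAL", "MAC", "VEH"}
pred = model.predict([np_features(["a", "nursing", "home"], ["DT", "NN", "NN"], 2)])[0]
print(pred, "->", "ANIMATE" if pred in ANIMATE else "INANIMATE")
```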
Automatically parsed data In order to test the robustness of our model to the errors introduced by an automatic parser, we train an instance of the Stanford parser (Klein and Manning, 2002) on our training data (which is relatively small by parsing standards), re-parse the linearized test data, and then train and test our classifier on the resulting trees. Since we can only confidently evaluate classification choices for correctly parsed constituents, we consider accuracy measured only over those hypothesized NPs which encompass the same string of words as an NP in the gold standard data. Our parser generated correct (evaluable) NPs with precision 88.63% and recall 73.51%, but for these evaluable NPs, accuracy was marginally better than on hand-parsed data: 85.43% using all features. The parser likely tended to misparse those NPs which were hardest for our model to classify. Error analysis A number of the errors made by the model presented above stem from ambiguous cases where head words, often pronouns, can take on referents of multiple animacy classes, and where there is no clear evidence within the bounds of the sentence of which one is correct. In the following example the model incorrectly assigns mine the class CONCRETE, and nothing in the sentence provides evidence for the surprising correct class, HUMAN. Well, I've used mine on concrete treated wood. For a model to correctly treat cases like this, it would be necessary to draw on a simple co-reference resolution system and incorporate features dependent on plausibly co-referent expressions elsewhere in the text. The distinction between an organization (ORG) and a non-organized group of people (HUMAN) in this corpus is troublesome for our model. It hinges on whether the group shares a voice or purpose, which requires considerable insight into the meaning of a sentence to assess. For example, people in the example below is an ORG, but no simple lexical or syntactic cues distinguish it from the more common class HUMAN. The only problem is, of course, that, uh, that requires significant commitment from people to actually decide they want to put things like that up there. Our performance on the class MIX, which marks NPs describing multiple heterogeneous entities, was very poor. The highlighted NP in the sentence below was incorrectly classified as NONCONC: But the same money could probably be far better spent on, uh, uh, lunar bases and solar power satellite research and, you know, so forth. It is quite plausible that some more sophisticated approaches to modeling this unique class might be successful, but no simple feature that we tried had any success, and the effect of missing MIX on overall performance is negligible. There are finally some cases where our attempts to rely on the heads of NPs were thwarted by the relatively flat structure of the parses. Under any mainstream theory of syntax, home is more prominent than nursing in the phrase a nursing home: It is the unique head of the NP. However, the parse provided does not attribute any internal structure to this constituent, making it impossible for the model to determine the relative prominence of the two nouns. Had the model known that the unique head of the phrase was home, it would likely have correctly classified it as a PLACE, rather than the a priori more probable NONCONC.
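The flat-parse problem just described can be made concrete with a simple head-finding heuristic. The sketch below is our illustration, not the paper's head-extraction procedure: taking the rightmost noun of a flat NP recovers home in a nursing home, but fails whenever the true head is not final.

```python
def flat_np_head(tagged_np):
    """Fallback head finder for flat NPs: take the rightmost noun.
    This recovers 'home' in 'a nursing home', but a flattened NP with a
    post-modifier, e.g. 'the speakers at the panel session', would
    wrongly yield 'session' instead of 'speakers'."""
    nouns = [w for w, t in tagged_np if t.startswith("NN")]
    return nouns[-1] if nouns else tagged_np[-1][0]

print(flat_np_head([("a", "DT"), ("nursing", "NN"), ("home", "NN")]))  # -> home
```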
Conclusion and future work We succeeded in developing a classifier capable of annotating texts with a potentially valuable feature, with a high tolerance for automatically generated parses, and using no external or language-specific sources of knowledge. We were somewhat surprised, though, by the relatively poor performance of the external syntactic features in this model: When tested alone, they achieved an accuracy of only about 50%. This signals one possible site for further development. Should this model be used in a setting where external knowledge sources are available, two seem especially promising. Synonyms and hypernyms from WordNet (Fellbaum, 2010) or a similar lexicon could be used to improve the model's handling of unknown words, an approach demonstrated successfully, with the aid of a word sense disambiguation system, in Orasan and Evans (2001) for binary animacy classification on single words. A lexical-semantic database like FrameNet (Baker et al., 1998) could also be used to introduce semantic role labels (which are tied to animacy restrictions) as features, potentially rescuing the intuition that governing verbs and prepositions carry animacy information.
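The WordNet back-off suggested above could look roughly like the following sketch, which uses NLTK's WordNet interface to turn a head word's hypernym chain into features; this is our illustration of the idea, without the word sense disambiguation step the paper mentions (the first noun sense is taken naively), and the feature names are ours.

```python
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet data

def hypernym_features(head_word):
    """Features from the hypernym chain of the word's first noun sense,
    a rough stand-in for a lexicon-based treatment of unknown heads."""
    feats = {}
    synsets = wn.synsets(head_word, pos=wn.NOUN)
    if synsets:
        for path in synsets[0].hypernym_paths():
            for s in path:
                feats["HYPERNYM-" + s.name()] = 1
    return feats

# These features could be merged into the per-NP feature dictionary
# before vectorization, letting unseen heads share mass with seen ones.
print(sorted(hypernym_features("home")))
```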
2014-07-01T00:00:00.000Z
2012-06-03T00:00:00.000
{ "year": 2012, "sha1": "a49455799383c3957922638dd6d4cc7c00730655", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "c0393ea8366ca7b3204b2b1f0e34f85b1fcd7b3b", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
123620593
pes2o/s2orc
v3-fos-license
Coupling of the regional climate model COSMO-CLM using OASIS3-MCT with regional ocean, land surface or global atmosphere model: description and performance We present the prototype of a regional climate system model based on the COSMO-CLM regional climate model coupled with several model components, analyze the performance of the couplings and present a strategy to find an optimum configuration with respect to computational costs and time to solution. The OASIS3-MCT coupler is used to couple COSMO-CLM with two land surface models (CLM and VEG3D), a regional ocean model for the Mediterranean Sea (NEMO-MED12), two ocean models for the North and Baltic Seas (NEMO-NORDIC and TRIMNP+CICE) and the atmospheric component of an earth system model (MPI-ESM). We present a unified OASIS3-MCT interface which handles all couplings in a similar way, minimizes the model source code modifications and describes the physics and numerics of the couplings. Furthermore, we discuss solutions for specific regional coupling problems like the handling of different domains, the multiple usage of the MCT interpolation library and the efficient exchange of 3D fields. A series of real-case simulations over Europe has been conducted and the computational performance of the couplings has been analyzed. The usage of the LUCIA tool of the OASIS3-MCT coupler enabled separation of the direct costs of coupling, load imbalance and additional computations. The resulting limits for time to solution and cost are shown and the potential of further improvement of the computational efficiency is summarized for each coupling. It was found that the OASIS3-MCT coupler keeps the direct coupling cost of communication and horizontal interpolation below 5 % of the extra cost of coupling for all investigated couplings. For the first time this could be demonstrated for an exchange of approximately 450 2D fields per time step, necessary for the atmosphere-atmosphere coupling between COSMO-CLM and MPI-ESM. A procedure for finding an optimum configuration for each of the couplings was developed considering the time to solution and cost of the simulations. The optimum configurations are presented for sequential, concurrent and mixed (sequential+concurrent) coupling layouts. The procedure applied can be regarded as independent of the specific coupling layout and coupling details. Introduction Most of the current Regional Climate Models (RCMs) lack frameworks for the interactivity between the atmosphere and the other components of the regional climate system. The neglected meso-scale feedbacks and inconsistencies of the boundary conditions (Laprise et al., 2008; Becker et al., 2015) might well account for a substantial part of the large- and regional-scale biases found in RCM simulations at 10-50 km horizontal resolution (see e.g. Kotlarski et al. (2014) for Europe). This hypothesis gains further evidence from the results of convection-permitting simulations, in which these processes are not represented either. These simulations provide more regional-scale information and improve e.g.
the precipitation distribution in mountainous regions, but they usually do not show a reduction of the large-scale biases (see e.g. Prein et al. (2013)). Besides various improvements, a significant increase of the climate change signal was found by Somot et al. (2008) in the ARPEGE model with the horizontal grid refined over Europe and two-way coupled with a regional ocean model for the Mediterranean Sea. These results strongly suggest that building Regional Climate System Models (RCSMs) with explicit modeling of the interaction of meso scales in the atmosphere, ocean and land surface with large scales in the atmosphere (and ocean) is necessary to consistently represent regional climate dynamics and gain further insights into regional climate change. Dennis et al. (2012) showed that a cost reduction by a factor of three or less can be achieved using an optimal layout of model components. Later, Alexeev et al. (2014) presented an algorithm for finding an optimum model coupling layout (concurrent, sequential) and processor distribution between the model components minimizing the load imbalance in CESM. These results indicate that the optimized computational performance is weakly dependent on the computing architecture or on the individual model components but depends on the coupling method. Furthermore, the application of an optimization procedure was found beneficial. In this study we present a detailed analysis of coupled COSMO-CLM performance on the IBM POWER6 machine Blizzard located at DKRZ, Hamburg. We calculate the speed and costs of the individual model components and of the coupler itself and identify the causes of reduced speed or increased costs for each coupling, as well as reasonable processor configurations. We suggest an optimum configuration for different couplings considering costs and speed of the simulation and discuss the current and potential performances of the coupled systems. Particularities of the performance of a coupled RCM are highlighted together with the potential of the coupling software OASIS3-MCT. We suggest a procedure of optimization of an RCSM, which can be generalized. However, we will show that some relevant optimizations are possible only due to features available with the OASIS3-MCT coupler. The paper is organized as follows: The coupled model components are described in section 2. Section 3 focuses on the OASIS3-MCT coupling method and its interfaces for the individual couplings. The coupling method description encompasses the OASIS3-MCT functionality, the method of coupling optimization and the particularities of coupling a regional climate model system. The model interface description gives a summary of the physics and numerics of the individual couplings. In section 4 the computational efficiency of the individual couplings is presented and discussed. Finally, conclusions and an outlook are given in section 5. For improved readability, Tables 1 and 2 provide an overview of the acronyms frequently used throughout the paper and of the investigated couplings. Description of model components The further development of the COSMO model in Climate Mode (COSMO-CLM) presented here aims at overcoming the limitations of the regional soil-atmosphere climate model, as discussed in the introduction, by replacing the prescribed vegetation, the lower boundary conditions over sea surfaces and the lateral and top boundary conditions with interactions between dynamical models.
The models selected for coupling with COSMO-CLM need to fulfill the requirements of the intended range of application, which are (1) simulation at varying scales from convection-resolving up to 50 km grid spacing, (2) local-scale up to continental-scale simulation domains and (3) full capability at least for European model domains. We decided to couple the NEMO ocean model for the Mediterranean Sea (NEMO-MED12) and for the Baltic and North Seas (NEMO-NORDIC), alternatively the TRIMNP regional ocean model together with the sea ice model CICE for the Baltic and North Seas (TRIMNP+CICE), the Community Land Model (CLM) of soil and vegetation (replacing the multi-layer soil model TERRA), alternatively the VEG3D soil and vegetation model, and the global Earth System Model MPI-ESM for two-way coupling with the regional atmosphere. Table 2 gives an overview of all coupled-model systems investigated, their components and the institutions at which they are maintained. An overview of the models selected for coupling with COSMO-CLM (CCLM) is given in Table 3 together with the main model developer, configuration details of high relevance for computational performance, the model complexity (see Balaji et al. (2017)) and a reference in which a detailed model description can be found. The model domains are plotted in Figure 1. More information on the availability of the model components can be found in Appendix A. In the following, the model components used are briefly described with respect to model history, space-time scales of applicability and the model physics and dynamics relevant for the coupling. COSMO-CLM COSMO-CLM is the COSMO model in climate mode. The COSMO model is a non-hydrostatic limited-area atmosphere-soil model originally developed by Deutscher Wetterdienst for operational numerical weather prediction (NWP). Additionally, it is used for climate, environmental (Vogel et al., 2009) and idealized studies (Baldauf et al., 2011). The COSMO physics and dynamics are designed for operational applications at horizontal resolutions of 1 to 50 km for NWP and RCM applications. The basis of this capability is a stable and efficient solution of the non-hydrostatic system of equations for the moist, deep atmosphere on a spherical, rotated, terrain-following, staggered Arakawa C grid with a hybrid z-level coordinate. The model physics and dynamics are described in Doms et al. (2011) and Doms and Baldauf (2015), respectively. The features of the model are discussed in Baldauf et al. (2011). The COSMO model's climate mode (Rockel et al., 2008) is a technical extension for long-time simulations, and all related developments are unified with COSMO regularly. The important aspects of the climate mode are the time dependency of the vegetation parameters and of the prescribed SSTs and the usability of the output of several global and regional climate models as initial and boundary conditions. All other aspects related to the climate mode, e.g. the restart option for soil and atmosphere, the NetCDF model input and output, the online computation of climate quantities, and the sea ice module or spectral nudging, can be used in other modes of the COSMO model as well. The model version cosmo_4.8_clm19 is the recommended version of the CLM-Community (Kotlarski et al., 2014) and it is used for all couplings except CCLM+CLM as well as for the stand-alone simulations. CCLM as part of the CCLM+CLM coupled system is used in a slightly different version (cosmo_5.0_clm1).
The way this affects the performance results is presented in section 4.4. MPI-ESM The global Earth System Model of the Max Planck Institute for Meteorology Hamburg (MPI-ESM; Stevens et al. (2013)) consists of subsystem models for the ocean, atmosphere, cryosphere, pedosphere and biosphere. The hydrostatic general circulation model ECHAM6 uses the transform method for the horizontal computations. The derivatives are computed in spectral space, while the transports and the physics tendencies are computed on a regular grid in physical space. A pressure-based sigma coordinate is used for the vertical discretization. The ocean model MPIOM is a regular grid model with the option of local grid refinement. The terrestrial bio- and pedosphere component model is JSBACH (Schneck et al., 2013). The marine biogeochemistry model used is HAMOCC5 (Ilyina et al., 2013). A key aspect is the implementation of the bio-geo-chemistry of the carbon cycle, which allows e.g. the investigation of the dynamics of the greenhouse gas concentrations (Giorgetta et al., 2013). The subsystem models are coupled via the OASIS3-MCT coupler, which was implemented recently by I. Fast of DKRZ in the CMIP5 model version. This allows parallelized and efficient coupling of a huge amount of data, which is a requirement of atmosphere-atmosphere coupling. The reference MPI-ESM configuration uses a spectral resolution of T63, which is equivalent to a spatial resolution of about 320 km for the atmospheric dynamics and 200 km for the model physics. Vertically, the atmosphere is resolved by 47 hybrid sigma-pressure levels with the top level at 0.01 hPa. The reference MPIOM configuration uses the GR15L40 resolution, which corresponds to a bipolar grid with a horizontal resolution of approximately 165 km near the Equator and 40 vertical levels, most of them within the upper 400 m. The North and the South Pole are located over Greenland and Antarctica in order to avoid the "pole problem" and to achieve a higher resolution in the Atlantic region. NEMO The Nucleus for European Modelling of the Ocean (NEMO) is based on the primitive equations. It can be adapted for regional and global applications. The sea ice module (LIM3) or the marine biogeochemistry module with passive tracers (TOP) can be used optionally. NEMO uses staggered variable positions together with a geographic or Mercator horizontal grid and a terrain-following σ-coordinate (curvilinear grid) or a z-coordinate with full or partial bathymetry steps (orthogonal grid). A hybrid vertical coordinate (z-coordinate near the top and σ-coordinate near the bottom boundary) is possible as well (for details see Madec (2011)). COSMO-CLM is coupled to two different regional versions of the NEMO model, adapted to the specific conditions of the region of application. For the North and Baltic Seas, the sea ice module (LIM3) of NEMO is activated and the model is applied with a free surface to enable the tidal forcing. In the Mediterranean Sea, by contrast, the ocean model runs with a classical rigid-lid formulation in which the sea surface height is simulated via pressure differences. Both model setups are briefly introduced in the following two sub-sections. Mediterranean Sea Lebeaupin et al. (2011), Beuvier et al. (2012) and Akhtar et al. (2014) adapted the NEMO ocean model (Madec, 2008) to the regional ocean conditions of the Mediterranean Sea, hereafter called NEMO-MED12. It covers the whole Mediterranean Sea excluding the Black Sea.
The NEMO-MED12 grid is a section of the standard irregular ORCA12 grid (Madec, 2008) with an eddy-resolving 1/12° horizontal resolution, stretched in the latitudinal direction, equivalent to a 6-8 km horizontal resolution. In the vertical, 50 unevenly spaced levels are used, with 23 levels in the top 100 m. A time step of 12 min is used. The initial conditions for potential temperature and salinity are taken from the Medatlas (MEDAR-Group, 2002). The fresh-water inflow from rivers is prescribed by a climatology taken from the RivDis database (Vörösmarty et al., 1996) with seasonal variations calibrated for each river by Beuvier et al. (2010) based on Ludwig et al. (2009). In this context, the Black Sea is considered as a river for which climatological monthly values are calculated from a dataset of Stanev and Peneva (2002). The water exchange with the Atlantic Ocean is parameterized using a buffer zone west of the Strait of Gibraltar with a thermohaline relaxation to the World Ocean Atlas data of Levitus et al. (2005). North and Baltic Seas The NEMO-NORDIC setup covers the North and Baltic Seas. The horizontal resolution is 2 nautical miles (about 3.7 km) with 56 stretched vertical levels. The time step used is 5 min. No fresh-water flux correction for the ocean surface is applied. NEMO-NORDIC uses a free top surface to include the tidal forcing in the dynamics. Thus, the tidal potential has to be prescribed at the open boundaries in the North Sea. Here, we use the output of the global tidal model of Egbert and Erofeeva (2002). The lateral fresh-water inflow from rivers plays a crucial role for the salinity budget of the North and Baltic Seas. It is taken from the daily time series of river runoff from the E-HYPE model output operated at SMHI (Lindström et al., 2010). The World Ocean Atlas data (Levitus et al., 2005) are used for the initial and lateral boundary conditions of potential temperature and salinity. TRIMNP and CICE TRIMNP (Tidal, Residual, Intertidal Mudflat Model Nested Parallel Processing) is the regional ocean model of the University of Trento, Italy (Casulli and Cattani, 1994; Casulli and Stelling, 1998). The domain of TRIMNP covers the Baltic Sea, the North Sea and a part of the North East Atlantic Ocean, with the north-west corner over Iceland and the south-west corner over Spain at the Bay of Biscay. TRIMNP is designed with a horizontal grid mesh size of 12.8 km and 50 vertical layers. The thickness of each of the top 20 layers is 1 m; it increases with depth up to 600 m for the remaining layers. The model time step is 240 s. Initial states and boundary conditions of water temperature, salinity and velocity components for the ocean layers are determined using the monthly ORAS-4 reanalysis data of ECMWF (Balmaseda et al., 2013). The daily Advanced Very High Resolution Radiometer (AVHRR2) data of the National Oceanic and Atmospheric Administration of the USA are used for the surface temperature and the World Ocean Atlas data (Levitus and Boyer, 1994) for the surface salinity. No tide is taken into account in the current version of TRIMNP. The climatological means of the fresh-water inflow of 33 rivers to the North Sea and the Baltic Sea are collected from Wikipedia. The sea ice model CICE version 5.0 is developed at the Los Alamos National Laboratory, USA (http://oceans11.lanl.gov/trac/CICE/wiki), to represent the dynamic and thermodynamic processes of sea ice in global climate models (for more details see Hunke et al. (2013)).
In this study CICE is adapted to the region of the Baltic Sea and the Kattegat, a part of the North Sea, on a 12.8 km grid with five ice categories. The initial conditions of CICE are determined using the AVHRR2 SST. VEG3D VEG3D is a multi-layer soil-vegetation-atmosphere transfer model (Schädler, 1990) designed for regional climate applications and maintained by the Institute of Meteorology and Climate Research at the Karlsruhe Institute of Technology. VEG3D considers radiation interactions with vegetation and soil, and calculates the turbulent heat fluxes between the soil, the vegetation and the atmosphere, as well as the thermal transport and hydrological processes in soil, snow and canopy. The radiation interaction and the moisture and turbulent fluxes between the soil surface and the atmosphere are regulated by a massless vegetation layer located between the lowest atmospheric level and the soil surface, having its own canopy temperature, specific humidity and energy balance. The multi-layer soil model solves the heat conduction equation for temperature and the Richards equation for soil water content. Thereby, vertically differing soil types can be considered within one soil column, comprising 10 stretched layers with its bottom at a depth of 15.34 m. The heat conductivity depends on the soil type and the water content. In case of soil freezing the ice phase is taken into account. The soil texture has 17 classes. Three classes are reserved for water, rock and ice. The remaining 14 classes are taken from the USDA Textural Soil Classification (Staff, 1999). Ten different land-use classes are considered: water, bare soil, urban area and seven vegetation types. Vegetation parameters like the leaf area index or the plant cover follow a prescribed annual cycle. Up to two additional snow layers on top are created if the snow cover is deeper than 0.01 m. The physical properties of the snow depend on its age, its metamorphosis, melting and freezing. A snow layer on a vegetated grid cell changes the vegetation albedo, emissivity and turbulent transfer coefficients for heat as well. An evaluation of VEG3D in comparison with TERRA in West Africa is presented by Köhler et al. (2012). Community Land Model The Community Land Model (CLM) is a state-of-the-art land surface model designed for climate applications. Biogeophysical processes represented by CLM include radiation interactions with vegetation and soil, the fluxes of momentum, sensible and latent heat from vegetation and soil, and the heat transfer in soil and snow. Snow and canopy hydrology, stomatal physiology and photosynthesis are modeled as well. Subgrid-scale surface heterogeneity is represented using a tile approach allowing five different land units (vegetated, urban, lake, glacier, wetland). The vegetated land unit is itself subdivided into 17 different plant-functional types (or more when the crop module is active). Temperature, energy and water fluxes are determined separately for the canopy layer and the soil. This allows a more realistic representation of canopy effects than bulk schemes, which have a single surface temperature and energy balance. The soil column has 15 layers, the deepest layer reaching 42 meters depth. Thermal calculations explicitly account for the effect of soil texture (vertically varying), soil liquid water, soil ice and freezing/melting.
CLM includes a prognostic water table depth and a groundwater reservoir, allowing for a dynamic bottom boundary condition for the hydrological calculations rather than a free drainage condition. A snow model with up to five layers enables the representation of snow accumulation and compaction, melt/freeze cycles in the snow pack and the effect of snow aging on surface albedo. CLM also includes processes such as carbon and nitrogen dynamics, biogenic emissions, crop dynamics, transient land cover change and ecosystem dynamics. These processes are activated optionally and are not considered in the present study. A full description of the model equations and input datasets is provided in Oleson et al. (2010) (for CLM4.0) and Oleson et al. (2013) (for CLM4.5). An offline evaluation of CLM4.0 surface fluxes and hydrology at the global scale is provided by Lawrence et al. (2011). CLM is developed as part of the Community Earth System Model (CESM) (Collins et al., 2006; Dickinson et al., 2006) but it has also been coupled to other global (NorESM) or regional (Steiner et al., 2005, 2009; Kumar et al., 2008) climate models. In particular, an earlier version of CLM (CLM3.5) has been coupled to COSMO (Davin et al., 2011; Davin and Seneviratne, 2012) using a "sub-routine" approach for the coupling. Here we use a more recent version of CLM (CLM4.0 as part of the CESM1_2.0 package) coupled to COSMO via OASIS3-MCT rather than through a sub-routine call. Note that CLM4.5 is also included in CESM1_2.0 and can also be coupled to COSMO using the same framework. 3 Description and optimization of COSMO-CLM couplings via OASIS3-MCT The computational performance, usability and maintainability of a complex model system depend on the coupling method used, on the ability of the coupler to run efficiently on the computing architecture, and on the flexibility of the coupler to deal with different requirements on the coupling depending on model physics and numerics. In the following, the physics and numerics of the coupling of COSMO-CLM with the different model components are described. OASIS3-MCT coupling method and performance optimization Lateral-, top- and/or bottom-boundary conditions for regional geophysical models are traditionally read from files and updated regularly at runtime. We call this approach offline (one-way) coupling. For various reasons, one could decide to calculate these boundary conditions with another geophysical model, at runtime, in an online (one-way) coupling. If this additional model in return receives information from the first model, modifying the boundary conditions provided by the first to the second, an online two-way coupling is established. In any of these cases, model exchanges must be synchronized. This can be done by (1) reading data from file, (2) calling one model as a subroutine of the other or (3) using a coupler, which is software that enables online data exchanges between models. Communicating information from model to model boundaries via reading from and writing to a file is known to be quite simple to implement but computationally inefficient, particularly in the case of non-parallelized I/O and high frequencies of disc access. In contrast, calling component models as COSMO-CLM subroutines exhibits much better performance because the information is exchanged directly in memory. Nevertheless, the inclusion of an additional model in a "subroutine style" requires comprehensive modifications of the source code.
Furthermore, the modifications need to be updated for every new source code version. Since the early 90s, software solutions have been developed which allow coupling between geophysical models in a non-intrusive, flexible and computationally efficient way. One of these software solutions is the OASIS coupler, which is widely used in the climate modeling community (see for example Valcke (2013) and Maisonnave et al. (2013)). Its latest fully parallelized version, OASIS3-MCT version 2.0, proved its efficiency for high-resolution quasi-global models on top-end supercomputers (Masson et al., 2012). In the OASIS coupling paradigm, each model is a component of a coupled system. Up to OASIS3-MCT version 2.0, each component is included as a separate executable; with version 3.0 this is no longer a constraint. At runtime, all components are launched together in a single MPI context. The parameters defining the properties of a coupled system are provided to OASIS via an ASCII file called namcouple. This ensures the modularity and interoperability of any OASIS-coupled system. As previously mentioned, OASIS3-MCT includes the MCT library, based on MPI, for direct parallel communications between components. To ensure that calculations are delayed only by the receiving or interpolation of coupling fields, MPI non-blocking sending is used by OASIS3-MCT, so that sending coupling fields is a quasi-instantaneous operation. The SCRIP library (Jones, 1997) included in OASIS3-MCT provides a set of standard operations (for example bilinear and bicubic interpolation, Gaussian-weighted N-nearest-neighbor averages) to calculate, for each source grid point, an interpolation weight that is used to derive an interpolated value at each (non-masked) target grid point. OASIS3-MCT can also (re-)use interpolation weights calculated offline. Intensively tested for demanding configurations, the MCT library performs the definition of the parallel communication pattern needed to optimize exchanges of coupling fields between each component's MPI subdomains. It is important to note that, unlike in the "subroutine coupling", each component coupled via OASIS3-MCT can keep its parallel decomposition, so that each of them can be used at its optimum scalability. In some cases, this optimum can be adjusted to ensure a good load balance between components. The two optimization aims that strongly matter for computational performance are discussed in the next section. The coupled-system synchronization and optimization A coupled model component receiving information from one or several other components has to wait for the information before it can perform its own calculations. In case of a two-way coupling this component also provides information needed by the other coupled-system component(s). As mentioned earlier, the information exchange is performed quasi-instantaneously if the time needed to perform interpolations can be neglected, which is the case even for 3D-field couplings (as discussed in section 4.6). Therefore, the total duration of a coupled-system simulation can be separated into two parts for each component: the computing time and the waiting time. In a coupled system the computing time can be shorter than in the uncoupled mode, since the reading of boundary conditions from file (in stand-alone mode) is partially or entirely replaced by the coupling. It is also important to note that components can perform their calculations sequentially or concurrently.
The coupled-system's total sequential simulation time can be expected to be equal to the sum of the computing times of its components. Thus, the strategy of synchronization of the model components depends on the layout of the coupling (sequential or concurrent), in order to reduce the waiting time as much as possible. It is important to note that huge differences in computational performance can be found for different coupling layouts due to the different scalability of the modular model components. Since computational efficiency is one of the key aspects of any coupled system, the various aspects affecting it are discussed. These are the performances of the model components, of the coupling library and of the coupled system. Hereby the design of the interface and the OASIS3-MCT coupling parameters, which enable optimization of the efficiency, are described. The model component performance depends on the component's scalability. The optimum partitioning has to be set for each parallel component by means of a strong scaling analysis (discussed in section 4.1). This analysis, which results in finding the scalability limit (the maximum speed) or the scalability optimum (the acceptable level of parallel efficiency), can be difficult to obtain for each component in a multi-component context. In this article, we propose to simply consider the previously defined concept of the computing time (excluding the waiting time from the total time to solution). In chapter 4 we will describe our strategy to separate the measurement of computing and waiting times for each component and how to deduce the optimum MPI partitioning from the scaling analysis. The optimization of the OASIS3-MCT coupling library performance is relevant for the efficiency of the data exchange between components discretized on different grids. The parallelized interpolations are performed by the OASIS3-MCT library routines called by the source or by the target component. An interpolation will be faster if performed (1) by the model with the larger number of MPI processes available (up to the OASIS3-MCT interpolation scalability limit) and/or (2) by the faster model (until the OASIS3-MCT interpolation together with the faster model's calculations lasts longer than the calculations of the slower model). A significant improvement of interpolation and communication performance can be achieved by coupling multiple variables that share the same coupling characteristics via a single communication, that is, by using the technique called pseudo-3D coupling. Via this option, a single interpolation and a single send/receive instruction are executed for a whole group of coupling fields, for example all levels and variables in an atmosphere-atmosphere coupling at one time, instead of all coupling fields and levels separately. The option groups several small MPI messages into a big one and thus reduces communications. Furthermore, the number of matrix multiplications is reduced because they are performed on big arrays. This functionality can easily be set via the 'namcouple' parameter file (see section B.2.4 in Valcke et al. (2013)). The impact on the performance of the COSMO-CLM atmosphere-atmosphere coupling is discussed in section 4.6. See also Maisonnave et al. (2013). The optimization of the performance of a coupled system relies on the allocation of an optimum number of computing resources to each model. If the components' calculations are performed concurrently, the waiting time needs to be minimized.
This can be achieved by balancing the load of the two (or more) components between the available computing resources: the slower component is granted more resources, leading to an increase in its parallelism and a decrease in its computing time. The opposite is done for the faster component until an equilibrium is reached. Chapter 4 gives examples of this operation and describes the strategy to find a compromise between each component's optimum scalability and the load balance between all components. On all high-performance operating systems it is possible to run one process of a parallel application on one core in the so-called single-threading (ST) mode (fig. 2a). Should the cores of the system feature the so-called simultaneous multi-threading (SMT) mode, two (or more) processes/threads of the same application (in a non-alternating process distribution (fig. 2b)) or of different applications (in an alternating process distribution (fig. 2c)) can be executed simultaneously on the same core. Applying the SMT mode is more efficient for well-scaling parallel applications, leading to an increase in speed on the order of 10 % compared to the ST mode. Usually it is possible to specify which process is executed on which core (see fig. 2). In this case the SMT mode with an alternating distribution of model component processes can be used, and the waiting time of sequentially coupled components can be avoided. Starting each model component on each core is usually the optimum configuration, since the reduction of the waiting time of the cores outweighs the increase of the time to solution compared to using the ST mode (at any time one process is executed on each core). In the case of concurrent couplings, however, it is possible to use the SMT mode with a non-alternating process distribution. The optimization procedure applied is described in more detail in section 4.3 for the couplings considered. The results are discussed in section 4.6. 3.1.3 Regional climate model coupling particularities In addition to the standard OASIS functionalities, some adaptations of the OASIS3-MCT API routines were necessary to fit special requirements of the regional-to-regional and regional-to-global couplings presented in this article. A regional model covers only a portion of the earth's sphere and requires boundary conditions at its domain boundaries. This has two immediate consequences for coupling: first, two regional models do not necessarily cover exactly the same part of the earth's sphere. This implies that the geographic boundaries of the models' computational domains and of the coupled variables may not be the same in the source and target components of a coupled system. Second, a regional model can be coupled with a global model or another limited-area model, and some of the variables which need to be exchanged are three-dimensional, as in the case of atmosphere-to-atmosphere or ocean-to-ocean coupling. A major part of the OASIS community uses global models. Therefore, the OASIS standard features fit global model coupling requirements. Consequently, the coupling library must be adapted or used in an unconventional way, described in the following, to be able to cope with the extra demands mentioned. Limited-area field exchange has to deal with a mismatch of the domains of the coupled model components. Interpolation of 3D fields is necessary in an atmosphere-to-atmosphere coupling.
The OASIS3-MCT library is used to provide 3D boundary conditions to the regional model and a 3D feedback to the global coarse-grid model. OASIS is not able to interpolate the 3D fields in the vertical, so the vertical interpolation has to be performed in the model interfaces. An exchange of 3D fields, which occurs in the CCLM+MPI-ESM coupling, requires a more intensive usage of the OASIS3-MCT library functionalities than observed so far in the climate modeling community. The 3D regional-to-global coupling is even more computationally demanding than its global-to-regional opposite: all grid points of the COSMO-CLM domain have to be interpolated, instead of just the grid points of the global domain that are covered by the regional domain. The amount of data exchanged is rarely reached by any other coupled system of the community due to (1) the high number of exchanged 2D fields, (2) the high number of exchanged grid points (the full COSMO-CLM domain) and (3) the high exchange frequency at every ECHAM time step. In addition, as will be explained in section 3.2, the coupling between COSMO-CLM and MPI-ESM needs to be sequential and, thus, the exchange speed has a direct impact on the simulation's total time to solution. The interpolation methods used in OASIS3-MCT are the SCRIP standard interpolations: bilinear, bicubic, first- and second-order conservative. However, the interpolation accuracy might not be sufficient and/or the method may be inappropriate for certain applications. This is for example the case for the atmosphere-to-atmosphere coupling CCLM+MPI-ESM. The linear methods turned out to be of low accuracy, and the second-order conservative method requires the availability of the spatial derivatives on the source grid, which up to now cannot be calculated efficiently in ECHAM (see section 3.2 for details). Other higher-order interpolation methods can be applied by providing the weights of the source grid points at the target grid points. This method was successfully applied in the CCLM+MPI-ESM coupling by application of a bicubic interpolation using a 16-point stencil. In sections 3.2 to 3.5 the interpolation methods recommended for the individual couplings are given. CCLM+MPI-ESM The ECHAM time integration alternates between grid point and spectral space. Since the simulation results of COSMO-CLM need to become effective in the ECHAM dynamics, the two-way coupling is implemented in ECHAM after the transformation from spectral to grid point space and before the computation of the advection (see Fig. 8 and DKRZ (1993) for details). ECHAM provides the boundary conditions for COSMO-CLM at time level t = t_n of the three time levels t_n − (Δt)_E, t_n and t_n + (Δt)_E of ECHAM's leap-frog time integration scheme. However, the second part of the Asselin time filtering in ECHAM for this time level has to be executed after the advection calculation in dyn (see Fig. 8), in which the tendency due to the two-way coupling needs to be included. Thus, the fields sent to COSMO-CLM as boundary conditions do not undergo the second part of the Asselin time filtering. COSMO-CLM is integrated over j time steps between the ECHAM time levels t_{n−1} and t_n. However, the coupling time may also be a multiple of an ECHAM time step. A complete list of variables exchanged between ECHAM and COSMO-CLM is given in Table 4. The data sent by ECHAM are the 3D COSMO-CLM variables temperature, u- and v-components of the wind velocity, specific humidity, cloud liquid and ice water content, and the two-dimensional fields surface pressure, surface temperature and surface snow amount.
CCLM+MPI-ESM

ECHAM carries out its computations partly in grid point and partly in spectral space. Since the simulation results of COSMO-CLM need to become effective in the ECHAM dynamics, the two-way coupling is implemented in ECHAM after the transformation from spectral to grid point space and before the computation of advection (see Fig. 8 and DKRZ (1993) for details).

ECHAM provides the boundary conditions for COSMO-CLM at time level t = t_n of the three time levels t_n - (Δt)_E, t_n and t_n + (Δt)_E of ECHAM's leapfrog time integration scheme. However, the second part of the Asselin time filtering in ECHAM for this time level has to be executed after the advection calculation in dyn (see Fig. 8), in which the tendency due to two-way coupling needs to be included. Thus, the fields sent to COSMO-CLM as boundary conditions do not undergo the second part of the Asselin time filtering. COSMO-CLM is integrated over j time steps between the ECHAM time levels t_(n-1) and t_n; the coupling time may also be a multiple of an ECHAM time step.

A complete list of the variables exchanged between ECHAM and COSMO-CLM is given in Table 4. The data sent by ECHAM are the 3D variables of COSMO-CLM temperature, the u- and v-components of the wind velocity, specific humidity, cloud liquid and ice water content, and the two-dimensional fields surface pressure, surface temperature and surface snow amount. At the initial time the surface geopotential is sent to COSMO-CLM for the calculation of the orography differences between the model grids. After horizontal interpolation to the COSMO-CLM grid via the bilinear SCRIP interpolation, the 3D variables are vertically interpolated to the COSMO-CLM grid, keeping the height of the 300 hPa level constant and using the hydrostatic approximation. Afterwards, the horizontal wind vector components of ECHAM are rotated from the geographical (lon, lat) ECHAM coordinate system to the rotated (rlon, rlat) COSMO-CLM coordinate system. Here send_fld ends, and the interpolated data are used to initialize the boundary lines at the next COSMO-CLM time levels t_m = t_(n-1) + k · (Δt)_C ≤ t_n, with k ≤ j = (Δt)_E/(Δt)_C. The final time of the COSMO-CLM integration, t_(n-1) + j · (Δt)_C = t_n, is equal to the time t_n of the ECHAM data received.

After integrating between t_n - i · (Δt)_E and t_n, the 3D fields of temperature, u- and v-velocity components, specific humidity and cloud liquid and ice water content of COSMO-CLM are vertically interpolated to the ECHAM vertical grid, following the same procedure as in the COSMO-CLM receive interface and keeping the height of the 300 hPa level of the COSMO-CLM pressure constant. The wind velocity vector components are rotated back to the geographical directions of the ECHAM grid. The 3D fields and the hydrostatically approximated surface pressure are sent to ECHAM, horizontally interpolated to the ECHAM grid by OASIS3-MCT and received in ECHAM grid space. In ECHAM the COSMO-CLM solution is relaxed at the lateral and top boundaries of the COSMO-CLM domain by means of a cosine weight function over a range of five to ten ECHAM grid boxes, using a weight between zero at the outer boundary and one in the central part of the COSMO-CLM domain.

The two-way coupled system CCLM+MPI-ESM with prescribed COSMO-CLM solution within the COSMO-CLM domain (weight = 1) provides a stable solution over climatological time scales. A strong initialization perturbation is avoided by slowly increasing the maximum coupling weight to 1 with time, following the function weight = weight_max · sin((t/t_end) · π/2), with t_end equal to 1 month.
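The slow switch-on of the coupling weight quoted above is straightforward to encode. A minimal sketch directly implementing the stated formula (the sample times are arbitrary):

```python
import numpy as np

def coupling_weight(t, t_end, weight_max=1.0):
    """Ramp the maximum relaxation weight from 0 to weight_max over t_end.

    Implements weight = weight_max * sin((t / t_end) * pi / 2),
    held at weight_max once t exceeds t_end (one month in this coupling).
    """
    return weight_max * np.sin(np.minimum(t / t_end, 1.0) * np.pi / 2.0)

# Weight after 0, 0.25, 0.5 and 1.0 months for t_end = 1 month.
print(coupling_weight(np.array([0.0, 0.25, 0.5, 1.0]), t_end=1.0))
```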
E − P ("Evaporation minus Precipitation") is the net gain (E − P > 0) or loss (E − P < 0) of fresh water at the water surface. This water flux adjusts the salinity of the uppermost ocean layer. 625 In all COSMO-CLM grid cells where there is no active ocean model underneath, the lower boundary condition (SST) is taken from ERA-Interim re-analyses. The sea ice fraction in the Atlantic Ocean is derived from the ERA-Interim SST where SST < −1.7 • C which is a salinity-dependent freezing temperature. On the NEMO side, the coupling interface is included similar to COSMO-CLM, as can be seen 635 In the CCLM+TRIMNP+CICE coupled system (denoted as COSTRICE; Ho-Hagemann et al. (2013)), all fields are exchanged every hour between the three models COSMO-CLM, TRIMNP and CICE running concurrently. An overview of variables exchanged among the three models is given in Table 5. The "surface temperature over sea/ocean" is sent to CCLM instead of "SST" to avoid a potential inconsistency in case of sea ice existence. As shown in Fig. 7, COSMO-CLM receives the skin 640 temperature (T Skin ) at the beginning of each COSMO-CLM time step over the coupling areas, the North and Baltic Seas. The skin temperature T skin is a weighted average of sea ice and sea surface temperature. It is not a linear combination of skin temperatures over water and over ice weighted by the sea ice fraction. Instead, the skin temperature over ice T Ice and the sea ice fraction A Ice of CICE are sent to TRIMNP where they are used to compute the heat flux HF L, that is, the net out-645 going long-wave radiation. HF L is used to compute the skin temperature of each grid cell via the Stefan-Boltzmann Law. At the end of the time step, after the physics and dynamics computations and output writing, COSMO-CLM sends the variables listed in Table 5 to TRIMNP and CICE for calculation of wind stress, fresh water, momentum and heat flux. TRIMNP can either directly use the sensible and latent 650 heat fluxes from COSMO-CLM (considered as flux coupling method; see e.g. Döscher et al. (2002)) or compute the turbulent fluxes using the temperature and humidity density differences between air and sea as well as the wind speed (considered as the coupling method via state variables; see e.g. Rummukainen et al. (2001)). The method used is specified in the subroutine heat_flux of TRIMNP. 655 In addition to the fields received from COSMO-CLM, the sea ice model CICE requires from TRIMNP the SST, salinity, water velocity components, ocean surface slope, and freezing/melting potential energy. CICE sends to TRIMNP the water and ice temperature, sea ice fraction, fresh-water flux, ice-to-ocean heat flux, short-wave flux through ice to ocean and ice stress components. The horizontal interpolation method applied in CCLM+TRIMNP+CICE is the SCRIP nearest-neighbour 660 inverse-distance-weighting fourth-order interpolation (DISTWGT). Note that the coupling method differs between CCLM+TRIMNP+CICE and CCLM+NEMO-NORDIC (see section 3.3). In the latter, SSTs and sea ice fraction from NEMO are sent to CCLM so that the sea ice fraction from NEMO affects the radiative and turbulent fluxes of CCLM due to different albedo and roughness length of ice. But in CCLM+TRIMNP+CICE, only SSTs are passed 665 to CCLM. Although these SSTs implicitly contain information of sea ice fraction, which is sent from CICE to TRIMNP, the albedo of sea ice in CCLM is not taken from CICE but calculated in the atmospheric model independently. 
On the NEMO side, the coupling interface is included similarly to COSMO-CLM, as can be seen in the NEMO flow diagram (Fig. 9).

CCLM+TRIMNP+CICE

In the CCLM+TRIMNP+CICE coupled system (denoted as COSTRICE; Ho-Hagemann et al. (2013)), all fields are exchanged every hour between the three models COSMO-CLM, TRIMNP and CICE running concurrently. An overview of the variables exchanged among the three models is given in Table 5. The "surface temperature over sea/ocean" is sent to CCLM instead of the "SST" to avoid a potential inconsistency in case sea ice exists. As shown in Fig. 7, COSMO-CLM receives the skin temperature (T_skin) at the beginning of each COSMO-CLM time step over the coupling areas, the North and Baltic Seas. The skin temperature T_skin is a weighted average of the sea ice and sea surface temperatures, but it is not computed as a linear combination of the skin temperatures over water and over ice weighted by the sea ice fraction. Instead, the skin temperature over ice T_Ice and the sea ice fraction A_Ice of CICE are sent to TRIMNP, where they are used to compute the heat flux HFL, that is, the net outgoing long-wave radiation. HFL is then used to compute the skin temperature of each grid cell via the Stefan-Boltzmann law.

At the end of the time step, after the physics and dynamics computations and the output writing, COSMO-CLM sends the variables listed in Table 5 to TRIMNP and CICE for the calculation of wind stress, fresh water, momentum and heat flux. TRIMNP can either directly use the sensible and latent heat fluxes from COSMO-CLM (the flux coupling method; see e.g. Döscher et al. (2002)) or compute the turbulent fluxes from the temperature and humidity density differences between air and sea as well as the wind speed (the coupling method via state variables; see e.g. Rummukainen et al. (2001)). The method used is specified in the subroutine heat_flux of TRIMNP.

In addition to the fields received from COSMO-CLM, the sea ice model CICE requires from TRIMNP the SST, salinity, water velocity components, ocean surface slope, and freezing/melting potential energy. CICE sends to TRIMNP the water and ice temperature, sea ice fraction, fresh-water flux, ice-to-ocean heat flux, short-wave flux through ice to ocean and ice stress components. The horizontal interpolation method applied in CCLM+TRIMNP+CICE is the SCRIP nearest-neighbour inverse-distance weighting interpolation (DISTWGT).

Note that the coupling method differs between CCLM+TRIMNP+CICE and CCLM+NEMO-NORDIC (see section 3.3). In the latter, the SSTs and the sea ice fraction from NEMO are sent to CCLM, so that the sea ice fraction from NEMO affects the radiative and turbulent fluxes of CCLM due to the different albedo and roughness length of ice. In CCLM+TRIMNP+CICE, in contrast, only SSTs are passed to CCLM. Although these SSTs implicitly contain information on the sea ice fraction, which is sent from CICE to TRIMNP, the albedo of sea ice in CCLM is not taken from CICE but calculated independently in the atmospheric model. The reason for this inconsistent calculation of the albedo between the two coupled systems is that a tile approach has not been applied in the CCLM version used in the present study. Partial covers within a grid box are not accounted for; hence, partial fluxes, i.e. those over the partial sea ice cover, snow on sea ice and water on sea ice, are not considered. In a water grid box of this CCLM version, the albedo parameterisation switches from ocean to sea ice if the surface temperature is below a freezing temperature threshold of -1.7 °C. Coupled to NEMO-NORDIC, CCLM obtains the sea ice fraction, and the albedo and roughness length of a grid box in CCLM are calculated as a weighted average of the water and sea ice portions, which is a parameter aggregation approach.

Moreover, even if the sea ice fraction from CICE were sent to CCLM, as is done for NEMO-NORDIC, the latent and sensible heat fluxes in CCLM would still differ from those in CICE due to the different turbulence schemes of the two models. This different calculation of the heat fluxes in the two models leads to another inconsistency in the current setup, which can only be removed if all model components of the coupled system use the same radiation and non-radiation energy fluxes. These fluxes should preferably be calculated in the model at the highest resolution, for example in the CICE model for fluxes over sea ice. Such a strategy shall be applied in future studies, but it is beyond the scope of this study given the CCLM version used.

CCLM+VEG3D and CCLM+CLM

The two-way couplings between COSMO-CLM and the land surface models (LSMs) VEG3D or CLM are similar to each other in several respects. First, the call to the LSM (OASIS send and receive; see Fig. 7) is placed at the same location in the code as the call to COSMO-CLM's native land surface scheme, TERRA_ML, which is switched off when either VEG3D or CLM is used. This ensures that the sequence of calls in COSMO-CLM remains the same regardless of whether TERRA_ML, VEG3D or CLM is used. In the default configuration used here, COSMO-CLM and CLM (or VEG3D) are executed sequentially, thus mimicking the "subroutine" type of coupling used with TERRA_ML. Note that it is also possible to run COSMO-CLM and the LSM concurrently, but this is not discussed here. Details of the time step organization of VEG3D and CLM are described in the appendix and shown in Figs. 12 and 13.

VEG3D runs at the same time step and on the same horizontal rotated grid (0.44° here) as COSMO-CLM, with no need for any horizontal interpolation. CLM uses a regular lat-lon grid, and the coupling fields are interpolated using bilinear interpolation (atmosphere to LSM) and distance-weighted interpolation (LSM to atmosphere). The time step of CLM is synchronized with the time step of the COSMO-CLM radiative transfer scheme (one hour in this application), the idea being that the frequency of the radiation update determines the radiative forcing at the surface.

The LSMs need to receive the following atmospheric forcing fields (see also Table 6): the total amount of precipitation, the short- and long-wave downward radiation, the surface pressure, and the wind speed, temperature and specific humidity of the lowest atmospheric model layer. One specificity of the coupling concerns the turbulent fluxes of latent and sensible heat: in its turbulence scheme, COSMO-CLM does not directly use surface fluxes.
It uses surface states (surface temperature and humidity) together with turbulent diffusion coefficients of heat, moisture and momentum. Therefore, the diffusion coefficients need to be calculated from the surface fluxes received by COSMO-CLM. This is done by deriving, in a first step, the coefficient for heat (assumed in COSMO-CLM to be the same as the one for moisture) from the sensible heat flux. In a second step, an effective surface humidity is calculated using the latent heat flux and the derived diffusion coefficient for heat.

Computational efficiency

Computational efficiency is an important aspect of a numerical model's usability and applicability, and it has many facets. A particular coupled model system can be very inefficient even if each component model has a high computational efficiency in stand-alone mode and in other couplings. Thus, optimizing the computational performance of a coupled model system can save a substantial amount of resources in terms of simulation time and cost. We focus here on those aspects of computational efficiency directly related to the coupling of different component models, each of which has been tested in other applications, and we use real-case model configurations. We use a three-step approach.

A parallel program's runtime T(n, R) mainly depends on two variables: the problem size n and the number of cores R, that is, the resources. In scaling theory, weak scaling aims at solving an increasing problem size in constant time, whereas in strong scaling a fixed problem size is solved more quickly with an increasing amount of resources. Due to resource limits on the common high-performance computer, we chose to conduct a strong-scaling analysis with a common model setup, allowing for an easier comparability of the results. By means of the scalability study we identified an optimum configuration for each coupling, which served as the basis to address two central questions: (1) How much does it cost to add one (or more) component(s) to COSMO-CLM? (2) How large are the costs of the different components and of the OASIS3-MCT transformation of the information between the components' grids? The first question can only be answered by comparison to a reference configuration (see Table 7 for details).

Usually, HPSY_1 is the time to solution of a model component executed serially, that is, using one process (R = 1), and HPSY_2 is the time to solution if executed using R_2 > R_1 parallel processes. Some model components, like ECHAM, cannot be executed serially; this is why the reference number of threads is R_1 ≥ 2 for all coupled-system components. If the resources of a perfectly scaling parallel application are doubled, the speed is doubled and therefore the cost remains constant, the parallel efficiency is 100 %, and the speed-up is 2.
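These scaling quantities can be written down compactly. A minimal sketch of the definitions as we read them from the text, where HPSY denotes hours per simulated year and the cost unit CHPSY is read as core-hours per simulated year (consistent with the figures quoted later, e.g. 64 cores × 3.6 HPSY = 230.4 CHPSY):

```python
def scaling_metrics(r1, hpsy1, r2, hpsy2):
    """Speed-up, parallel efficiency and relative cost of a strong-scaling step.

    r1, hpsy1 : reference core count and time to solution (HPSY)
    r2, hpsy2 : core count and time to solution of the larger run
    Cost is measured in CHPSY = R * HPSY.
    """
    speedup = hpsy1 / hpsy2
    efficiency = speedup / (r2 / r1)          # 1.0 for perfect scaling
    cost_ratio = (r2 * hpsy2) / (r1 * hpsy1)  # 1.0 if cost stays constant
    return speedup, efficiency, cost_ratio

# Perfect scaling: doubling cores halves HPSY -> speed-up 2, efficiency 100 %.
print(scaling_metrics(r1=32, hpsy1=8.0, r2=64, hpsy2=4.0))
```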
Strategy for finding an optimum configuration

The optimization strategy that we pursue is empirical rather than strictly mathematical, which is why we understand "optimum" more as "near-optimum". Due to the heterogeneity of our coupled systems, a single algorithm cannot be proposed (as in Balaprakash et al. (2014)). Nonetheless, our results show that these empirical methods are sufficient with regard to the complexity of the couplings investigated here and lead to satisfactory results. Obviously, the "optimum" has to be a compromise between cost and time to solution. In order to find a unique configuration, we require the optimum to have a parallel efficiency higher than 50 % relative to the reference configuration, up to which the increase in cost can be regarded as still acceptable. If all components scale and the necessary additional calculations incur no substantial cost, this guarantees that the coupled system's time to solution is only slightly larger than that of the component with the highest cost. However, such an "optimum" configuration depends on the reference configuration. In this study, for all couplings, the one-node configuration is regarded as having 100 % parallel efficiency.

An additional constraint is sometimes given by the CPU accounting policy of the computing centre, if consumption is measured "per node" and not "per core". This restricts the "optimum" configuration (r_1, r_2, ..., r_n) of cores r_i for each model component of the coupled system to those for which the total number of cores R = Σ_i r_i is a multiple of the number of cores per node.
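The two constraints just described, a parallel efficiency above 50 % relative to the reference and a total core count that is a multiple of the node size, can be combined into a simple filter. A sketch with hypothetical strong-scaling measurements (the numbers are illustrative, not the paper's data):

```python
def admissible_configs(configs, t_ref, r_ref, cores_per_node=32):
    """Filter candidate configurations of a coupled system.

    configs : list of (total_cores, time_to_solution) measurements
    t_ref   : time to solution of the reference (e.g. one-node) run
    r_ref   : core count of the reference run
    Keeps configurations with parallel efficiency > 50 % whose total
    core count is a multiple of the node size; fastest first.
    """
    kept = []
    for r_total, t in configs:
        efficiency = (t_ref / t) / (r_total / r_ref)
        if efficiency > 0.5 and r_total % cores_per_node == 0:
            kept.append((r_total, t, efficiency))
    return sorted(kept, key=lambda c: c[1])

# Hypothetical measurements: (cores, hours per simulated year).
measurements = [(32, 8.0), (64, 4.6), (96, 3.6), (128, 3.1), (160, 3.0)]
for cores, t, eff in admissible_configs(measurements, t_ref=8.0, r_ref=32):
    print(f"{cores:4d} cores: {t:.1f} HPSY, efficiency {eff:.0%}")
```

Among the admissible configurations, the compromise between time to solution and cost described above then selects the final "near-optimum".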
The optimum configurations

We applied the strategy for finding an optimum configuration described in section 4.3 to the CCLM couplings with a regional ocean (TRIMNP+CICE or NEMO-MED12), an alternative land surface scheme (CLM or VEG3D) or the atmosphere of a global earth system model (MPI-ESM). The optimum configurations found for CCLM_sa and all coupled systems are shown in Fig. 6 and in more detail in Table 8. The parallel efficiency used as the criterion for finding the optimum configuration is shown in Fig. 5. The minimum number of cores which should be used is 32 (one node).

For sequential coupling, an alternating distribution of processes is used, and thus one CCLM process and one coupled component model (VEG3D, CLM) process are started on each core. For CCLM+VEG3D and CCLM+CLM the CCLM is more expensive, and thus the scalability limit of CCLM determines the optimum configuration. In this case the fair reference for CCLM is CCLM stand-alone (CCLM_sa) on 32 cores in single-threading (ST) mode. As shown in Fig. 5, the parallel efficiency of 50 % for COSMO stand-alone in ST mode is reached at 128 cores, or 4 nodes, and thus the 128-core configuration is selected as the optimum.

For concurrent coupling, the SMT mode with a non-alternating distribution of processes is used, which is more efficient than the alternating SMT and the ST modes. The cores are shared between CCLM and the coupled component models (NEMO-MED12 and TRIMNP+CICE). For these couplings CCLM is the most expensive component as well, and thus the reference for CCLM is CCLM_sa on 16 cores (0.5 node) in SMT mode. As shown in Fig. 5, the parallel efficiency of 50 % for COSMO stand-alone in SMT mode, using 16 cores as reference, is reached at approximately 100 cores. For the CCLM+NEMO-MED12 coupling, a two-node configuration with 78 cores for CCLM and 50 cores for NEMO-MED12 resulted in an overall decrease of the load imbalance to an acceptable 3.1 % of the total cost. Increasing the number of cores beyond 80 for COSMO-CLM did not change the time to solution much, because COSMO-CLM already approaches its parallel-efficiency limit with 78 cores. This prevented finding an optimum configuration using three nodes. The corresponding NEMO-MED12 measurements at 50 cores are somewhat out of scaling as well; this is probably caused by the I/O load, which increased for unknown reasons on the machine used between the first series of simulations and the optimized simulations.

For CCLM+TRIMNP+CICE no scalability is found for CICE. As shown in Fig. 5, a parallel efficiency smaller than 50 % is found for CICE at approximately 15 cores, and as shown in Fig. 3, the time to solution for all core numbers investigated is higher for CICE than for CCLM in SMT mode. Thus, a load imbalance smaller than 5 % can hardly be achieved using one node. The optimum configuration found is therefore a one-node configuration using the CCLM reference configuration (16 cores).

The CCLM+MPI-ESM coupling is a combination of a sequential coupling between CCLM and ECHAM and a concurrent coupling between ECHAM and the ocean model MPIOM. As shown in Fig. 4, MPIOM is much cheaper than ECHAM and, thus, the coupling is dominated by the sequential coupling between CCLM and ECHAM. As shown in Fig. 3, ECHAM is the most expensive component, and it exhibits no decrease of the time to solution when the number of cores is increased from 28 to 56, i.e. it exhibits a very low scalability. Thus, as described in the strategy for finding the optimum configuration, even though a parallel efficiency higher than 50 % is found for up to 64 cores (see Fig. 5), the optimum configuration is the 32-core (one-node) configuration, since no significant reduction of the time to solution can be achieved by further increasing the number of cores.

An analysis of the additional cost of coupling requires the definition of a reference. We use the cost of CCLM stand-alone at its optimum configuration (CCLM_sa,OC). We found the SMT mode with a non-alternating distribution of processes and 64 cores to be the optimum configuration for CCLM, resulting in a time to solution of 3.6 HPSY and a cost of 230.4 CHPSY. As shown in section 4.2, the SMT mode with a non-alternating processes distribution is the most efficient, and the scalability limit is reached at approximately 80 cores in SMT mode due to the limited number of grid points used; doubling the 64 cores would be beyond the scalability limit of this particular model grid.

The optimum configurations of the land surface scheme couplings use 128 cores for each component model in SMT mode with an alternating processes distribution (line 1.5 in Table 8). The five times higher cost of VEG3D in comparison with CCLM is due to the low scalability of VEG3D (see Fig. 3). The OASIS horizontal interpolations (line 3.3.2 in Table 8) produce 6.3 % extra cost in CCLM+CLM. No extra cost occurs due to horizontal interpolation in the CCLM+VEG3D coupling, since the same grid is used in CCLM and VEG3D, nor due to load imbalance, which does not arise in sequential coupling. The remaining extra costs are assumed to be the cost difference between the coupled CCLM and CCLM_sa,OC; they are found to be 55.4 % and 29.7 % for the CLM and VEG3D couplings, respectively. A substantial part of the relatively high extra cost of CCLM in the coupled mode of CCLM+CLM might be explained by the higher cost of cosmo_5.0_clm1, used in CCLM+CLM, in comparison with cosmo_4.8_clm19, used in all other couplings (see line 1.7 in Table 8). CCLM_sa performance measurements with both versions (but on a different machine than Blizzard) reveal a cosmo_5.0_clm1 time to solution 45 % smaller than for cosmo_4.8_clm19.

The load imbalance of 6.9 % of CCLM_sa,OC is below the intended limit of 5 % of the cost of the coupled system. The extra cost of CCLM_OC of 19 % is smaller than for the land surface scheme couplings.
The optimum configuration of the coupling with TRIMNP+CICE for the North and Baltic Sea (CCLM+TRIMNP+CICE) has a time to solution of 18 HPSY and a cost of 576 CHPSY. This is 3.5 times longer than CCLM_sa,OC, due to the lack of scalability of the sea ice model CICE, and 1.5 times more expensive than CCLM_sa,OC (lines 2.3 and 3.3 of Table 8).

The cost of CCLM stand-alone using the same mapping (CCLM_sa,sc) as for CCLM coupled to MPI-ESM is 4.3 % higher than the cost of CCLM_sa,OC (line 3.3.4 in Table 8). Interestingly, the cost of the OASIS horizontal interpolations is only 3.3 %; this achievement is discussed in more detail in the next section. Finally, the extra cost of CCLM in the coupled mode of CCLM+ECHAM+MPIOM is 77.4 %, the highest of all couplings. Additional internal measurements allowed us to identify additional computations in the CCLM coupling interface as responsible for a substantial part of these costs: the vertical spline interpolation of the 3D fields exchanged between the models was found to consume 51.8 % of the CCLM_sa,OC cost, which is about two-thirds of the extra cost of CCLM_OC.

Interestingly, a direct comparison of the complexity and grid point number G (see the definition in Balaji et al. (2017)) given in Table 3 with the extra cost of coupling given in Table 8 shows that the couplings with short time to solution and lowest extra cost are those of low complexity. On the other hand, the most expensive coupling with the longest time to solution is that of the highest complexity and with the largest number of grid points.

Coupling cost reduction

The CCLM+MPI-ESM coupling is one of the most intensive couplings that has up to now been realized with OASIS3(-MCT) in terms of the number of coupling fields and coupling time steps: 450 2D fields are exchanged every ECHAM coupling time step, that is, every ten simulated minutes (see section 3.2). Most of these 2D fields are levels of 3D atmospheric fields. We show in this section that a conscious choice of coupling software and computing platform features can have a significant impact on time to solution and cost.

To make the CCLM+MPI-ESM coupling more efficient, all levels of a 3D variable are sent and received in a single MPI message using the concept of pseudo-3D coupling, as described in section 3.1.2, thus reducing the number of sent and received fields (see Table 4). The change from 2D to pseudo-3D coupling led to a decrease of the cost of the coupled system running on 32 cores by 3.7 %, which corresponds to 25 % of the CCLM_sa,OC cost.
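The pseudo-3D trick amounts to packing all vertical levels of a field into one contiguous buffer, so that a single large message is exchanged instead of one message per level. A schematic of the packing step in pure NumPy follows; it only illustrates the buffer layout, while the actual exchange would be one OASIS/MPI send of the packed buffer rather than the print statement shown here.

```python
import numpy as np

nlev, nlat, nlon = 40, 129, 132  # illustrative level count and grid size

# One 3D field, e.g. temperature, stored as nlev horizontal levels.
field3d = np.random.default_rng(1).random((nlev, nlat, nlon))

# 2D coupling: nlev separate buffers -> nlev send calls and latencies.
levels = [field3d[k].ravel() for k in range(nlev)]

# Pseudo-3D coupling: one contiguous buffer -> a single send call.
packed = np.ascontiguousarray(field3d).reshape(nlev, -1)

# The receiver unpacks by reshaping; packing does not alter the data.
unpacked = packed.reshape(nlev, nlat, nlon)
assert np.array_equal(unpacked, field3d)
print(f"{nlev} messages collapsed into 1 buffer of {packed.nbytes / 1e6:.1f} MB")
```

Because the per-message latency is paid once instead of once per level, the saving grows with the number of levels and the exchange frequency.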
The combined effect of the usage of 3D-field exchange and of an alternating processes distribution led to an overall reduction of the total time to solution and cost of the coupled system CCLM+MPI-ESM by 39 %, which corresponds to 261 % of the CCLM_sa,OC cost.

Conclusions

We present couplings between the regional land-atmosphere climate model COSMO-CLM and the model components discussed above (see Figs. 7 to 13). The next step is the development of the UOI for multiple couplings, which allows regional climate system modelling over Europe.

A series of simulations has been conducted with the aim of analysing the computational performance of the couplings, using the CORDEX-EU grid configuration of COSMO-CLM on a common computing platform. The scaling of COSMO-CLM was found to be very similar in stand-alone and in coupled mode. The weaker scaling, which occurred in some configurations, was found to originate from additional computations which do not scale but are necessary for coupling. In some cases the model physics or the I/O routines exhibited a weaker scaling, most probably due to limited memory. The results confirm that the parallel efficiency decreases substantially if the number of grid points per core falls below 80. For the configuration used (132x129 grid points), this limits the number of cores that can be used efficiently to 80 in SMT mode and 160 in ST mode.

For the first time, a sequential coupling of approximately 450 2D fields using the parallelized coupler OASIS3-MCT was investigated. It was shown that the direct cost of coupling by OASIS3-MCT (interpolation and communication) is negligible in comparison with the cost of the coupled atmosphere-atmosphere model system. We showed that the exchange of one (pseudo-)3D field instead of many 2D fields reduces the cost of communication drastically. Furthermore, the idling of cores due to sequential coupling could be avoided by a dedicated launching of one process of each of the two sequentially running models on each core, making use of the multi-threading mode available on the Blizzard machine used and on several other machines.

A strategy for finding an optimum configuration was developed. Optimum configurations were identified for all investigated couplings, considering all three aspects of climate modeling performance: time to solution, cost and parallel efficiency. For a coupled system that involves a component not scaling well with the available resources, the optimum configuration is suggested to be the one of minimum cost, if the time to solution cannot be decreased significantly. This is the case for the CCLM+MPI-ESM and CCLM+TRIMNP+CICE couplings. An exception is the CCLM+VEG3D coupling.

The optimum configurations of the land surface scheme couplings exhibit the same speed and a doubling of the cost in comparison with COSMO-CLM stand-alone. This was found to be close to the absolute optimum, since 60 % to 75 % of the extra cost of coupling is unavoidable. These are the extra costs of (1) keeping the speed of the coupled system high, resulting in an unavoidable increase of cost with core number, and (2) the need to use the less efficient single-threading mode to avoid 50 % idle time of the cores.

The optimum configuration of the regional ocean coupling for the Mediterranean, CCLM+NEMO-MED12, exhibits the same speed and a doubling of cost as well. In this case the cost of the ocean model is much higher and the extra cost of mapping is much smaller, which is due to the usage of concurrent coupling.

The procedure of finding an optimum configuration was found to be applicable to each coupling layout investigated, and thus it could be applicable to other coupled model systems as well. The analysis of the extra cost of coupling was found to be a useful step in the development of a Regional Climate System Model coupling several model components. Bottlenecks of the coupling have been identified in the CCLM+TRIMNP+CICE and the CCLM+MPI-ESM couplings. The results for time to solution, cost and parallel efficiency of the different couplings can serve as a starting point for finding an optimum coupling layout and configuration for multiple couplings.

Appendix A: Source code availability

COSMO-CLM is an atmosphere model coupled to the soil-vegetation model TERRA.
Other regional processes in the climate system, like ocean and ice sheet dynamics, plant responses, aerosol-cloud interaction, and the feedback to the GCM driving the RCM, are made available by coupling COSMO-CLM via OASIS3-MCT with other models. The COSMO-CLM model source code is freely available for scientific usage by members of the CLM-Community (www.clm-community.eu). The CLM-Community is a network of scientists who accept the CLM-Community agreement; for details on how to become a member, please check the CLM webpage. The current recommended version of COSMO-CLM is COSMO_131108_5.0_clm9. It comes together with a recommendation for the configurations for the European domain.

The development of a fully coupled COSMO-CLM is an ongoing research project within the CLM-Community. The unified coupling interface via OASIS3-MCT is available by contacting one of the authors and will be part of a future official COSMO-CLM version. All other components, including the OASIS interface, are available by contacting the authors. The OASIS3-MCT coupling library can be downloaded at https://verc.enes.org/oasis/. The two-way coupled system CCLM+MPI-ESM was developed at BTU Cottbus and FU Berlin.

In the following, the time step organisation within the coupled models is described. This aims at providing a basis for understanding the coupling between the models.

B1 COSMO-CLM

The solution at the next time level t_m + (Δt)_C is relaxed to the solution prescribed at the boundaries, using an exponential function for the lateral boundary relaxation and a cosine function for the top boundary Rayleigh damping (Doms and Baldauf, 2015). At the lower boundary a slip boundary condition is used together with a boundary layer parameterisation scheme (Doms et al., 2011).

B2 ECHAM

The time loop (stepon) has three main parts: it begins with the computations in spectral space, followed by grid-space computations and then again spectral-space computations. In scan1 the spatial derivatives (sym2, ewd, fft1) are computed for time level t_n in Fourier space, followed by the transformation into grid-space variables on the lon/lat grid. Then the computations needed for the two-way coupling with COSMO-CLM (twc) are done for the time level t_n variables, followed by the advection (dyn, ldo_advection) at t_n, the second part of the time filtering of the variables at time t_n (tf2), and the calculation of the advection tendencies and the update of the fields for t_(n+1) (ldo_advection). Then the first part of the time filtering of the time level t_(n+1) (tf1) is done, followed by the computation of the physical tendencies at t_n (physc). The remaining spectral-space computations in scan1 begin with the reverse Fourier transformation (fftd).

B3 NEMO-MED12

In Fig. 9 the flow diagram of NEMO 3.3 is shown. At the beginning, the MPP communication is initialized by cpl_prism_init. This is followed by the general initialisation of the NEMO model.
[Table 4 (fragment): the exchanged variables include specific humidity (kg kg^-1, 3D), specific cloud liquid water content (kg kg^-1, 3D), specific cloud ice content (kg kg^-1, 3D), surface pressure (Pa), sea surface temperature SST (K), surface snow amount (m) and surface geopotential (m^2 s^-2); the blended surface temperature is computed as SST = sea_ice_area_fraction · T_seaice + (1 - sea_ice_area_fraction) · SST.]

[Table 8 caption (fragment): see Fig. 6. seq refers to sequential and con to concurrent couplings. Thread mode is either the ST or the SMT mode (see Fig. 2). APD indicates whether an alternating processes distribution was used or not. "Levels in CCLM" gives the simulated number of levels, and "CCLM version" is the COSMO-CLM model version used for coupling. Relative time to solution (%) and cost (%) are calculated with respect to the reference, which is the CCLM stand-alone configuration CCLM_sa using 64 cores and non-alternating SMT mode. The time to solution includes the time needed for the OASIS interpolations. All relative quantities in lines 2.2-2.3 and 3.2-3.3.5 are given in percent of this reference.]

Figure 7: Simplified flow diagram of the main program of the regional climate model COSMO-CLM, version 4.8_clm19_uoi. The red highlighted parts indicate the locations at which the additional computations necessary for coupling are executed and the calls to the OASIS interface take place. Where applicable, the component models to which the respective calls apply are given.
2018-12-26T21:29:39.966Z
2016-04-20T00:00:00.000
{ "year": 2016, "sha1": "7a3b2006fd2b2c4e5a666cd24db5881745799098", "oa_license": "CCBY", "oa_url": "https://www.geosci-model-dev.net/10/1549/2017/gmd-10-1549-2017.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7a3b2006fd2b2c4e5a666cd24db5881745799098", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
181735728
pes2o/s2orc
v3-fos-license
Effect of Isocyanate Resin Level on Properties of Passion Fruit Hulls (PFH) Particleboard

The main problem of particleboard is its low dimensional stability. Using an exterior-type adhesive helps to improve this problem. The objective of this research was to evaluate the effect of the isocyanate resin level on the physical and mechanical properties of passion fruit hull particleboard. Passion fruit hull (PFH) particles were dried to 5% moisture content. Particleboards were made in a size of 25 by 25 cm² with a target density of 0.80 g/cm³ and a thickness of 1 cm. Isocyanate resin at various levels (5, 7, 9, and 11%) was used for the manufacture of the boards. After blending the particles and resin, the material was placed into a mold of 25 by 25 cm². The mats were then pressed using a hot-pressing machine at a temperature of 160 °C for 6 minutes at a pressure of 30 kg/cm². A conditioning process was applied for 7 days at room temperature. The results showed that increasing the resin level improved several parameters, such as moisture content, thickness swelling, modulus of elasticity, modulus of rupture, and internal bond. An isocyanate resin level of 11% produced the best properties of PFH particleboard.

Introduction

The passion fruit plant (Passiflora edulis Sims f. edulis Deg) is one of the well-known plants in North Sumatra. It is dispersed across Karo Regency and North Tapanuli Regency, North Sumatra Province. Passion fruit is extracted as a raw material in syrup manufacturing, and a byproduct of syrup manufacturing is passion fruit hull (PFH) waste. According to [1], PFH has a lignin content of 31.79% and a coarse fiber content of 38.89%, so it is potentially suitable as a raw material for particleboard. Previous research conducted by [2] showed that PFH particleboard bonded with urea formaldehyde resin resulted in low dimensional stability. The adhesive plays an important role in particleboard manufacturing. Isocyanate is an exterior adhesive with advantages such as being free of formaldehyde emission and forming a better adhesive bond compared to other exterior resins. As stated by [3], isocyanate has more tolerance to a higher moisture content of the particles, requires a lower temperature and a shorter time in hot pressing, and gives a higher dimensional stability. The focus of this research was the analysis of the effect of the isocyanate adhesive level on the physical and mechanical properties of particleboard made from PFH.

Material and Method

Passion fruit hulls were collected from the syrup industry "PT. Dewi Markisa" in Berastagi, North Sumatra, Indonesia. Isocyanate resin was used as the binder in this research.

Materials preparation

Drying of the particles was conducted in two steps. In the first step, fresh passion fruit hulls were air-dried to reduce the moisture content before oven drying. In the second step, the particles were oven-dried to 5% moisture content. After drying, the PFH was milled to obtain a particle size of 10 mesh.

Determining pH and extractive content

The determination of pH was conducted according to [4]. The determination of the extractive content in hot- and cold-water solutions followed [5].

Board manufacturing

The particles and the isocyanate adhesive were blended using a rotary blending machine. The adhesive levels in this research were set at 5, 7, 9, and 11%. After that, the particles were placed into a mold of 25 cm by 25 cm. The mat was then pressed using a hot-pressing machine set at a temperature of 160 °C for 6 minutes at a pressure of 30 kg/cm², to reach the density target of 0.8 g/cm³.
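For readers who want to reproduce the mat forming, the arithmetic behind the target density can be sketched as follows. This is a minimal illustration of the standard furnish calculation; the mass-balance convention used here (resin solids expressed as a fraction of the oven-dry particle mass, with the 5% particle moisture included in the weighed mass) is our assumption, not a detail stated in the paper.

```python
def furnish_masses(length_cm, width_cm, thickness_cm,
                   target_density, resin_level, particle_mc=0.05):
    """Oven-dry particle mass and resin-solids mass for one board.

    target_density : g/cm^3 (0.8 in this study)
    resin_level    : resin solids as a fraction of oven-dry particle mass
    particle_mc    : particle moisture content (5 % in this study)
    """
    board_mass = length_cm * width_cm * thickness_cm * target_density
    # Total mass = oven-dry particles * (1 + MC) + resin solids.
    od_particles = board_mass / (1.0 + particle_mc + resin_level)
    resin = od_particles * resin_level
    return od_particles, resin

for level in (0.05, 0.07, 0.09, 0.11):
    particles, resin = furnish_masses(25, 25, 1, 0.8, level)
    print(f"{level:.0%} resin: {particles:.0f} g particles, {resin:.1f} g resin")
```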
A conditioning process was applied to the boards for 7 days at room temperature.

Board cutting and testing

Cutting and testing of the boards followed [6]. The testing parameters included density, moisture content, water absorption, and thickness swelling for the physical properties, and internal bond, modulus of elasticity, and modulus of rupture for the mechanical properties.

pH value and extractive content of PFH particles

The pH was determined to characterize the acidity of the PFH particles, since acidity strongly affects adhesive performance. According to Table 1, the extractive content of PFH was quite high. A high content of extractive substances in lignocellulosic material can interfere with the gluing process of particleboard. As stated by [7], extractives affect the adhesive consumption and curing rate and can result in blows during the hot-pressing process. As reported by [8], extractives affect wood wettability and adhesive spread, and most extractives have a hydrophobic character.

The acidity of the material largely determines the adhesive performance. Some adhesive types are very sensitive to the acidity of lignocellulosic materials. Isocyanate adhesives have a reasonably good tolerance to acidic conditions, but a very acidic particle condition can cause the internal bond strength of the board to be low. As reported by [9], the internal bond of flakeboard made from Kapur wood, which has a pH of less than 4, was lower than that of Hemlock, Red lauan, and Douglas fir. Furthermore, according to [10], the wood pH has a strong influence on the gelatinization time at lower catalyst concentrations; this effect decreases when the catalyst concentration is increased. According to [11], the polymerization rate of the adhesive depends on the raw materials (wood and adhesive) used, which directly affects the temperature and time of pressing in the manufacture of particleboard.

Density and Moisture Content (MC)

The average density values ranged from 0.57 to 0.69 g/cm³ (Table 1). The highest and lowest density values were produced by particleboard with 7% and 11% adhesive, respectively. The density of the particleboard in this study did not reach the target of 0.8 g/cm³. Several non-wood lignocellulosic materials like PFH have a low bulk density, so the chance of springback is greater. Similarly, previous research [2,12] reported that passion fruit hull and jatropha fruit hull particleboard bonded with UF resin resulted in density values below the target. Overall, the density values met the standard, which requires board density values ranging from 0.40 to 0.90 g/cm³ [6].

The average moisture content ranged from 5.98 to 10.35% (Table 1). The highest and lowest moisture contents were produced by particleboard with 5% and 11% adhesive level, respectively. Overall, the moisture content of the particleboard met the standard, which requires moisture content values ranging from 5 to 13% [6].

Thickness Swelling (TS) and Water Absorption (WA)

The thickness swelling values ranged from 13.41 to 22.26%. Based on Table 2, the thickness swelling value decreased as the adhesive level increased. According to [13], increasing the amount of adhesive increases the bonding between particles, resulting in a better dimensional stability of the particleboard. None of the boards produced met the standard, which requires a maximum thickness swelling of 12% [6].
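Both water-related properties discussed in this section are computed from measurements taken before and after water immersion. A short sketch of the standard formulas (the sample values are hypothetical, not measured data from this study):

```python
def thickness_swelling(t_before_mm, t_after_mm):
    """Thickness swelling (%) after 24 h water immersion (JIS A 5908)."""
    return (t_after_mm - t_before_mm) / t_before_mm * 100.0

def water_absorption(m_before_g, m_after_g):
    """Water absorption (%) after 24 h water immersion."""
    return (m_after_g - m_before_g) / m_before_g * 100.0

# Hypothetical test-piece measurements before and after immersion.
print(f"TS = {thickness_swelling(10.0, 11.6):.1f} %")  # 16.0 %
print(f"WA = {water_absorption(32.0, 51.5):.1f} %")    # 60.9 %
```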
PFH has a porous character, so it requires a higher adhesive content to produce a low thickness swelling value. The water absorption of the particleboard ranged from 51.33 to 68.58%. The lowest and highest values were found at adhesive contents of 11% and 7%, respectively. The water absorption of the particleboard produced in this study was quite high, owing to the porous character of PFH as a raw material. As reported by [2], the water absorption rate of PFH particles is 10 grams per second. This affects the board during the conditioning process, in which the board easily absorbs moisture.

Modulus of Elasticity (MoE) and Modulus of Rupture (MoR)

The MoE values of the particleboard ranged from 2189.46 to 4077.12 kg/cm². The highest and lowest MoE values were produced by particleboard using 11% and 5% adhesive level, respectively. Overall, the increase in adhesive content increased the MoE value. As stated by [7], the MoE value is influenced by the level and type of adhesive, the bonding strength of the adhesive, and the size of the particles. The low MoE values in this study are due to two reasons. The first is the inhomogeneous particle size; according to [14], the ideal particles for obtaining strength and dimensional stability are thin flake particles with a uniform thickness and high slenderness ratios. The second is the lower content of chemical components such as cellulose, hemicellulose, and lignin in passion fruit hulls compared to wood, which limits the strength of this material.

The MoR values of the particleboard ranged from 21.73 to 30.61 kg/cm². Similar to the MoE values, the highest and lowest MoR values were produced by particleboard with 11% and 5% adhesive level. Overall, the MoR values did not meet the standard [6].
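For reference, the static bending properties reported above follow from the standard three-point bending formulas. A sketch with hypothetical load and deflection values (not measurements from this study):

```python
def modulus_of_rupture(p_max_kgf, span_cm, width_cm, thickness_cm):
    """MoR (kg/cm^2) from the maximum load of a three-point bending test."""
    return 3.0 * p_max_kgf * span_cm / (2.0 * width_cm * thickness_cm ** 2)

def modulus_of_elasticity(dp_kgf, span_cm, width_cm, thickness_cm, dy_cm):
    """MoE (kg/cm^2) from the slope dP/dy of the elastic part of the curve."""
    return dp_kgf * span_cm ** 3 / (4.0 * width_cm * thickness_cm ** 3 * dy_cm)

# Hypothetical test piece: 15 cm span, 5 cm wide, 1 cm thick.
print(f"MoR = {modulus_of_rupture(6.8, 15, 5, 1):.1f} kg/cm^2")
print(f"MoE = {modulus_of_elasticity(4.0, 15, 5, 1, 0.17):.0f} kg/cm^2")
```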
Internal Bond (IB)

The IB values of the particleboard ranged from 2.19 to 4.27 kg/cm². The highest and lowest IB values resulted from the boards using 11% and 5% adhesive, respectively. Overall, the IB values of the particleboard in this research met JIS A 5908-2003, which requires a minimum IB value of 1.5 kg/cm². As stated by [3], the bonds formed by MDI resin are chemical bonds, which are stronger and more stable than mechanical bonds. The isocyanate group of MDI (-N=C=O) reacts with hydroxyl groups in the wood to form urethane chains. A combination of factors, such as the non-polar, aromatic components of MDI, makes it resistant to hydrolysis.

Conclusion

Overall, the increase in adhesive content improved the physical and mechanical properties of the particleboard. The density, moisture content, and internal bond values met the standards; however, the thickness swelling, MoE, and MoR values did not meet JIS A 5908 (2003). An adhesive level of 11% produced the best physical and mechanical properties of the particleboard in this study.
2019-06-07T21:51:04.843Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "1545c7ca7486c391382beb7f61853b12ccf96842", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/270/1/012021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e5f1928622328ca6c5681d3bd7a11b8ef0c33f03", "s2fieldsofstudy": [ "Materials Science", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
261789791
pes2o/s2orc
v3-fos-license
Is Surgical Treatment for Obesity Able to Cure Urinary Incontinence in Women?—A Prospective Single-Center Study

There is enough evidence to support weight loss in order to improve urinary incontinence. Nevertheless, weight loss and maintaining a lower weight are not easy to achieve in the general population. Our study aims to evaluate whether bariatric surgery has a positive effect on the symptoms of urinary incontinence in female patients. We performed a prospective study on obese female patients before and after bariatric surgery, over a period of 9 years. Patients with a BMI ≥ 33 kg/m² were included if they described involuntary loss of urine and no previous surgery for urinary incontinence had been performed. The patients underwent laparoscopic surgery, either gastric sleeve, bypass or banding, performed by four surgeons in our hospital. The type of incontinence was not assessed at the initial visit carried out by the surgeon. All patients who declared being incontinent were referred to the urologist, where they received the ICIQ-UI SF questionnaire before their bariatric surgery and during follow-up visits. The sum of the points obtained at questions 3, 4 and 5 was used to evaluate the severity of incontinence, as well as the impact on the quality of life. Our evaluation collected data on age, time since the onset of symptoms, pad usage, number and type of deliveries, concomitant conditions and medications. The type of incontinence was assessed by the urologist before bariatric surgery as urge, stress or mixed incontinence. At the follow-up visits, the patients were also asked to fill out a 10-point VAS questionnaire evaluating their perception of the evolution of the incontinence symptoms. Data were analyzed using t-test statistical analysis. Changes in incontinence were classified as cured, improved, no change or worse. We included 54 women for whom initial data and at least 18 months of follow-up were available. We observed that about 50% of all women undergoing bariatric surgery have some degree of urinary incontinence. The ICIQ score improved from 13.31 ± 5.18 points before surgery to 8.30 ± 4.49 points after surgery (p < 0.0001). Before surgery, 38 patients (70%) described severe incontinence, compared to only 20 patients (37%) after surgery. A total of 16 women (31%) reported a complete cure of urinary incontinence after bariatric surgery. Data from the VAS questionnaire show improvement in 46 cases (85%). Pad usage improved from 7.04 ± 2.79 to 3.42 ± 2.77 pads per day (p < 0.001). The number of patients using more than one pad per day decreased from 35 (65%) to 9 (17%). The type of incontinence did not seem to be relevant, but our sample size was too small to lead to statistically significant results. The number and type of deliveries, age and BMI had no impact on the outcome of incontinence. Our data show that bariatric surgery is able to cure urinary incontinence in one of three obese women. A significant improvement was obtained in more than two-thirds of the patients, regardless of the type of incontinence. For an obese female with urinary incontinence, treatment for obesity should prevail, and incontinence should be treated only if symptoms remain.
Introduction

Urinary incontinence (UI) is defined as an involuntary loss of urine, representing a social or hygienic concern. There are three main types of UI: stress urinary incontinence (SUI), urgency urinary incontinence (UUI) and mixed urinary incontinence (MUI). SUI represents involuntary urination in response to effort, physical activity, sneezing or coughing, whilst UUI is defined as the involuntary leakage of urine preceded or accompanied by an intense desire to urinate [1,2]. According to the literature reports, about 50% of adult women could suffer from a form of UI at some point in their life [3]. Due to the nature of the female urinary tract and risk factors like pregnancy, childbirth and hysterectomy that can harm the connective tissue and musculature of the pelvic floor, the prevalence of UI in women is almost double that in men [4].

Obesity is an important health issue with an increasing prevalence worldwide [5,6]. It is defined as an excess fat accumulation that raises the likelihood of adverse health outcomes. The classification of obesity uses the body mass index (BMI, determined by dividing the body weight in kilograms by the height in meters squared). A BMI ≥ 30 kg/m² defines obesity, while a BMI ≥ 40 kg/m² indicates severe obesity [7]. According to current trends, obesity will afflict 51.1% of people in the USA by 2030, up from more than one-third of adults today [8]. It decreases life expectancy and predisposes people to a number of diseases [9]. Obesity is an independent risk factor for all types of urinary incontinence [10,11]. Incontinence has been found to affect as many as 60% to 70% of severely obese women [12]. With every five-point increase in body mass index (BMI), the odds of UI rise by 30-60% [13].

Depending on the type of UI, there are different treatment options in obese patients. Surgery is still the basis for treating SUI, whether it be through the insertion of a midurethral sling, Burch colposuspension, a pubovaginal sling using autologous rectus fascia or the use of a bulking agent. However, women with a BMI over 35 kg/m² have a relatively high failure rate for this procedure: 81% of them manage to gain and maintain continence, compared to 92% to 96% of the general population [1,14,15]. Pharmacotherapy, primarily with anticholinergic drugs, is the main form of treatment for UUI. Treatment for MUI is still challenging and often involves a combination of surgery and medication, depending on the primary complaint [1]. Weight loss, bladder training and pelvic floor muscle development are all known to help patients regardless of the type of UI [16]. Weight-loss therapies have been observed to lessen the frequency of UI in obese women [17][18][19]. The strongly recommended course of treatment for women with UI with a BMI over 30 kg/m² is lifestyle management, according to professional organizations such as the European Association of Urology (EAU) and the National Institute for Health and Care Excellence (NICE). This includes suggestions for weight loss through nutritional, pharmaceutical and behavioral treatment, or a mix of these interventions [20,21]. However, only about 20% of persons who adhere to lifestyle changes are able to maintain a long-term weight loss of at least 10% [22].
For patients with morbid obesity (BMI ≥ 40 kg/m²), as well as for individuals with a BMI ≥ 35 kg/m² and related co-morbidities, bariatric surgery is the most effective method for long-term weight loss [23,24]. Interventions to lose weight have been demonstrated to lessen the frequency of UI [17][18][19]. In a randomized controlled clinical trial, Subak et al. concluded that a treatment for obese women with UI that involves weight loss is successful. A weight loss of 5% to 10% should be thought of as a first-line therapy for incontinence, because it has an efficacy comparable to other nonsurgical treatments [17]. In a similar study, Wing et al. concluded that, through a weight-loss program, stress incontinence episodes were less frequent over the course of 12 months, and patient satisfaction with incontinence modifications was increased over the course of 18 months [18]. We acknowledge the benefits of losing weight through lifestyle changes, such as a healthier diet with a low calorie intake or physical exercise, for UI. However, patient compliance with these recommendations is very low. As easy as it is to recommend losing weight, it is as difficult to take the steps needed to actually achieve it.

Given the fact that it is proven that weight loss is beneficial for improving the symptoms of UI [17][18][19], we intend to assess whether bariatric surgery could lead to similar benefits. Our study aims to prospectively evaluate whether bariatric surgery has a positive effect on the symptoms of urinary incontinence in obese female patients.

Materials and Methods

We conducted a prospective study on obese female patients with symptoms of urinary incontinence, comparing data before and after bariatric surgery over a period of 9 years. Bariatric surgery was carried out in compliance with our hospital's standard of care. Written informed consent was signed by all participants, and ethical committee approval was obtained prior to the start of the study.

Patients qualified for bariatric surgery by meeting the following requirements: a BMI value of 33 kg/m² or above; age between 18 and 55 years old; motivation to have surgery; potential for life-long follow-up; adequate cognitive ability to comprehend the procedure and its implications; and absence of drug or alcohol addictions. The eligible BMI value for surgery was set by our bariatric surgeons. We included in our study only female patients who simultaneously met the bariatric surgery criteria, described involuntary loss of urine at least two or three times a week, as per the ICIQ, and had no previous history of surgery for urinary incontinence.
Patients presented at the hospital with their weight-related issues. Four bariatric surgeons from our clinic made the initial assessment of the patients. A comprehensive medical history and a physical examination of the patient were part of the evaluation methodology, including a basic ("Yes"/"No"), non-leading question on whether they suffer from involuntary loss of urine. Then, a series of laboratory tests was carried out, including a urinalysis, urine culture, complete blood count, serum biochemistry and coagulation tests. Patients with pre-diagnosed conditions (e.g., diabetes, high blood pressure) or conditions diagnosed during the pre-operative evaluation were referred to a multi-disciplinary examination before surgery. All participants who described involuntary loss of urine during the surgical assessment were referred to the urology department and assessed by the same urologist. The bariatric surgery was carried out using the laparoscopic approach; gastric sleeve, bypass or banding were the preferred methods. The procedure technique was chosen by the surgeon, in accordance with our standard of care and the patient's desire.

During the urological evaluation, we took a comprehensive medical history regarding the UI symptoms, and a full clinical examination was performed for each patient. For the purpose of our study, it was decided upfront to exclude patients with pelvic organ prolapse, fistulas or other malformations of the urinary tract. Patients were also required to fill in the International Consultation on Incontinence Questionnaire-Urinary Incontinence short form (ICIQ-UI SF) before and after surgery; a validated Romanian version of the questionnaire was used. The ICIQ-UI SF is a questionnaire that distinguishes between different types of incontinence based on patient self-reporting and assesses the burden of the symptoms on the patient [25]. Female patients who described urine leakage at least two or three times a week (question nr. 3 of the ICIQ-UI SF) were included in our study. The medical history, physical examination and ICIQ-UI SF were used prior to bariatric surgery to diagnose the type of UI: stress, urge or mixed incontinence. The sum of the points obtained at questions 3, 4 and 5 was used to evaluate the severity of incontinence, as well as the impact on quality of life. Question nr. 3 assesses the frequency of urine leakage ("Never"-0 to "All the time"-5).
Question nr. 4 estimates the amount of urine leaked, in the patient's perception ("None"-0 to "A large amount"-6). Question nr. 5 rates how much leaking urine interferes with the patient's everyday life ("Not at all"-0 to "A great deal"-10). These three separate scores added together result in an overall score between 0 and 21. A lower score indicates a better outcome for symptom severity: mild (1-5), moderate (6-12), severe (13-18) and very severe (19-21) [26]. A similar postoperative ICIQ-UI SF questionnaire was used at the follow-up visits.
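The scoring just described is simple to encode. A minimal sketch with the severity bands as given in the text (the example answers are hypothetical, not patient data):

```python
def iciq_ui_sf_score(q3_frequency, q4_amount, q5_interference):
    """Overall ICIQ-UI SF score: Q3 (0-5) + Q4 (0-6) + Q5 (0-10) = 0-21."""
    if not (0 <= q3_frequency <= 5 and 0 <= q4_amount <= 6
            and 0 <= q5_interference <= 10):
        raise ValueError("answer out of range")
    return q3_frequency + q4_amount + q5_interference

def severity(score):
    """Severity band for a non-zero ICIQ-UI SF overall score."""
    if score <= 5:
        return "mild"
    if score <= 12:
        return "moderate"
    if score <= 18:
        return "severe"
    return "very severe"

s = iciq_ui_sf_score(q3_frequency=3, q4_amount=4, q5_interference=7)
print(s, severity(s))  # hypothetical patient: 14, severe
```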
A similar postoperative ICIQ-UI SF questionnaire was used at follow-up visits. At the first follow-up assessment (1 month after the surgery), we also asked patients to evaluate their perception of urinary symptoms after the surgery. Patients were asked to fill in a 10-point visual analogue scale (VAS) regarding how their urinary symptoms had evolved (1-It is worse; 5-Same as before; 10-No incontinence). The VAS questionnaire was applied during every follow-up visit after the surgery. For our analysis, we gathered data on age, rural/urban area, height, weight, BMI, gynecologic and obstetric history (including the number and method of births), concomitant conditions and medications, time since onset of incontinence symptoms and pad usage. It is important to underline that no treatment for UI was initiated at this time, as that would influence the final outcome of our study. Patients with ongoing treatment for UI were recommended to maintain the same dosage and method of administration. If urinary tract infections or other conditions of the lower urinary tract were identified at the initial evaluation, they were treated, and the evaluation was restarted from the beginning, using the same algorithm. Invasive urodynamics, imaging or endoscopy of the urinary tract were not considered necessary, since no actual treatment for incontinence was planned at this point.

Our primary objective was to evaluate whether bariatric surgery has a positive effect on the symptoms of urinary incontinence. To this end, we compared the ICIQ-UI SF score, the number of pads used per day and the prevalence of severe incontinence before and after surgery. A t-test was used to compare the data collected before and after the surgery, with the standard p value of 0.05 as the threshold for statistical significance. For continuous data, the mean and standard deviation are provided, and for categorical variables, the frequency and percentage are reported.

Results

Over a period of 9 years, 54 obese female patients with urinary incontinence underwent bariatric surgery. As Figure 1 shows, over 50% of the women eligible for bariatric surgery were suffering from urinary incontinence. Table 1 describes the demographic characteristics of the patients. The mean age of the included patients was 37.1 ± 7.93 years. In our series, 41 (76%) women lived in an urban area, whilst 13 (24%) came from a rural area. The mean number of childbirths was 1.38 ± 1.03, with 0.42 ± 0.73 vaginal deliveries and 0.96 ± 0.84 C-section deliveries. At least one comorbidity was present in 33.3% (17) of patients. A total of 18.5% (10) of our sample had high blood pressure; 13% (7) were diabetic; and 31.5% (17) had dyslipidemia. Prior to the surgery, the mean BMI was 42.5 ± 3.87; half of the women suffered from stress urinary incontinence, 20 (37%) were found to have urge UI and 7 women (13%) had mixed UI. Table 2 describes the mean BMI and the prevalence of UI at 18 months after bariatric surgery. After the follow-up period, we observed a significant drop in the mean BMI (42.5 ± 3.87 vs. 30.29 ± 4.22; p < 0.005). The prevalence of stress UI decreased after the surgery (27 patients, 50% of the sample, vs. 16). Both the prevalence of urge UI (20 cases vs. 15 cases after surgery) and of mixed UI (7 cases vs. 5 cases) decreased, yet without statistical significance. We speculate that the lack of statistical significance might be due to the relatively small sample size.
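The before/after comparisons reported here could be reproduced with a paired t-test on the per-patient values, as described in the methods. A minimal sketch follows; the arrays below are hypothetical placeholders, not study data (the real analysis would use the 54 paired observations):

from scipy import stats

# Hypothetical per-patient ICIQ-UI SF scores before and after surgery.
iciq_before = [18, 12, 15, 9, 21, 14]
iciq_after = [10, 8, 11, 5, 15, 9]

# Paired (dependent-samples) t-test, since each patient is her own control.
t_stat, p_value = stats.ttest_rel(iciq_before, iciq_after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05

# The VAS rule used in this study: a score of 6 or more counts as improved.
vas = [8, 6, 5, 10, 7, 9]
improved = sum(v >= 6 for v in vas) / len(vas)
print(f"improved: {improved:.0%}")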
Subjective perception of UI improvement after the surgery is shown in Table 3. The ICIQ score improved from 13.31 ± 5.18 points before surgery to 8.30 ± 4.49 points after surgery (p < 0.0001). Before surgery, 38 patients (70%) described severe incontinence, compared to only 20 patients (37%) after surgery. A total of 16 women (31%) reported complete cure of urinary incontinence after bariatric surgery; 12 of the women who had suffered from SUI reported no incontinence after the surgery, while only 4 patients with UUI or MUI declared no incontinence. Every patient who scored 6 or above on the VAS was classified as improved after the surgery. Data from the VAS questionnaire show improvement in 46 cases (85%). It is important to mention that all patients filled in at least "5" on the VAS, so no worsening of symptoms after surgery was found. In 15% of cases (8 patients), no improvement was found at the 18-month follow-up. Pad usage improved from 7.04 ± 2.79 to 3.42 ± 2.77 pads per day (p < 0.001). The number of patients using more than one pad per day decreased from 35 (65%) to 9 (17%). We analyzed whether age, initial BMI and the number or method of delivery could predict a better improvement in UI symptoms after surgery. No statistical correlation was found between these parameters and the postoperative ICIQ-UI SF score, the VAS questionnaire or the cured incontinence rate.

Discussion

In our study, female participants presented with weight-related issues. If UI was found, patients were administered the pre- and postoperative symptom questionnaires during the urological assessment. One limitation comes from the lack of a full evaluation of incontinence, including urodynamic tests, before and after the surgery. The main outcomes are based on patients' self-perception of the evolution of their urinary incontinence symptoms. However, in a recent study, Okuyan et al. concluded that non-complex UI patients benefit from appropriate treatment regardless of urodynamic evaluation [27]. Obesity increases the risk of lower self-esteem and depression among patients [28], so a bias may occur even when evaluating UI. We speculate that patients may overrate the impact of UI before surgery. Once they lose weight, their self-esteem may increase, leading them to undervalue their UI symptoms.

UI, especially SUI, may lower a patient's quality of life (QOL), since it affects many parts of everyday life, including family relationships, work and sexual function. People who experience UI find it difficult to accept because of how negatively it affects their daily life, including sexuality and privacy, which can result in lower self-esteem and depression [29,30]. The burden on patients with urinary incontinence can be more severe than that of many life-threatening diseases. Therefore, our study mainly focuses on the patient's perception of urinary incontinence rather than on objective parameters.
It is known that obesity is a modifiable risk factor for UI [10,31]. In obese patients, central adiposity may significantly raise pressure within the abdomen and bladder and may increase urethral mobility, leading to SUI. Another explanation is that the pudendal nerve may become chronically stretched due to chronically increased pressure, which may cause the pelvic floor muscles to weaken. Inflammation and diabetes mellitus, which are risk factors for incontinence, are also connected to obesity [32]. In our study, 50% of women with a BMI over 33 kg/m2 declared a type of UI. In a combined case-cohort study, Durigon Keller and colleagues observed a 65% prevalence of UI in obese women who underwent bariatric surgery [32]. In a randomized, controlled clinical trial, Subak et al. suggested that lower bladder pressure may be the cause of the therapeutic effect of weight loss on UI. In their clinical and urodynamic findings, a reduction in waist circumference and, therefore, in pressure on the bladder were independent predictors of improved incontinence after weight loss [17]. However, more studies are needed to explain the pathophysiology linking obesity and UI.

Using data from over 14,000 patients in a study conducted by the National Center for Health Statistics in the USA, Trivedi and colleagues found a higher prevalence of obesity in rural areas compared to urban areas (35.6% vs. 30.4%, p < 0.01). They concluded that people in rural areas were 1.19 times more likely to be obese than people in urban areas (95% C.I.: 1.06, 1.34) [33,34]. On a smaller scale, Svensson E. conducted a similar study in Sweden: the mean BMI and the prevalence of obesity among women were lower in urban areas than in rural areas [35]. Although the prevalence of obesity might be the same in urban and rural areas, in our sample only a quarter of the patients were from a rural area. This might be explained by the precarious access of people from some rural areas to medical services.

Given the constant growth in the prevalence of obesity, treating obesity and UI remains a very challenging task [7]. Behavioral interventions have proved to be an effective method of losing weight in some obese patients. However, maintaining a long-term low weight is hard to achieve [36]. A recent meta-analysis conducted by Sheridan W. concluded that, compared to behavioral therapies, bariatric surgery was linked to a considerably lower UI prevalence and sustained weight loss [37]. Nevertheless, bariatric surgery may lead to postoperative complications that need to be considered in obesity management and discussed with the patient.

SUI and quality of life improved 18 months after bariatric surgery in our study. According to the study by Bump et al., the pathophysiological process may be linked to a drop in the abdominal pressure that is responsible for involuntary loss of urine [38]. They examined the impact of bariatric surgery on urinary incontinence by administering a urinary incontinence questionnaire and a urodynamic examination both before and after gastric bypass. Nine patients out of twelve had an improvement in symptoms. Although urodynamic examination was not carried out in our study, 46 patients out of 54 (85%) declared an improvement in symptoms after bariatric surgery according to the VAS questionnaire.

Prior to surgery, half of our patients were diagnosed with SUI. In this group of patients, we observed the best results in terms of symptomatic improvement. After an almost 12-point decrease in mean BMI after surgery (42.5 ± 3.87 vs.
30.29 ± 4.22), 12 of 27 SUI patients declared no incontinence after surgery. UUI and MUI also improved, but the number of patients did not reach statistical significance. A meta-analysis and systematic review by Yung Lee and colleagues assessed the effect of bariatric surgery on UI, including 33 cohort studies with over 2900 patients [39]. After bariatric surgery, resolution of or improvement in any UI was found in 56% of patients. In the SUI group, 47% of patients benefitted from improvement and 39% reported a total cure of incontinence. Moreover, in their analysis, UI symptoms improved in 53% of patients from the UUI group. The ICIQ score significantly decreased by four points after surgery. Compared with our results, these data suggest a clear improvement after surgery in the SUI group and a better quality of life declared by patients regardless of the type of UI.

In our sample, the ICIQ score dropped from 13.31 ± 5.18 to 8.30 ± 4.49 (p < 0.0001). The number of pads used daily also decreased (7.04 ± 2.79 vs. 3.42 ± 2.77; p < 0.0001), and only 9 patients of 54 needed more than one pad per day after the surgery. Based on the ICIQ alone, 20 patients declared severe incontinence after surgery. C.J. O'Boyle et al. had similar results in a prospective cohort study of 82 female patients [40]. Over a median follow-up of 15 months after surgery, the mean ICIQ-SF score dropped by 4.4 (SD = 5.5), from 9.3 (SD = 4.4) before surgery to 4.9 (SD = 5.3) after surgery. Furthermore, the proportion of patients needing daily pads decreased by 48 percentage points, from 65% to 17%, and one-third of patients declared no incontinence after surgery. Pad weighing was first described as a diagnostic and assessment method for UI by James et al. in 1971 [41]. To this day, the number of pads used daily by patients with UI remains one of the best methods to evaluate the severity of UI. Although it is a noninvasive and relatively simple method, results may be subject to patients' subjectivity. Behavioral changes, such as fluid restriction and inactivity, might lead to underestimation of the severity of UI. Furthermore, a fully continent patient may still use an unnecessary pad per day after treatment of UI, given the remaining social anxiety previously caused by UI. Although it is as subjective as the ICIQ-SF score, the number of pads remains a useful measure for clinical care [42].

Finally, we found no statistical correlation between the ICIQ-SF score, the number of pads or the severity of incontinence and age, BMI or the number and method of child delivery. However, Rodrigues AFS et al. found that age, vaginal delivery and menopause are important risk factors for SUI persistence after bariatric surgery [43]. In the six months following bariatric surgery, menopause was the most important predictor of SUI persistence: menopausal women were 2.7 times more likely than non-menopausal women to experience SUI persistence following surgery. They also found that each centimeter of gain in waist circumference prior to surgery increased the risk of SUI by 5.7% (p = 0.05). In another study of 2702 women between the ages of 42 and 52, when BMI and waist circumference were analyzed together, the risk of SUI increased with every centimeter of waist circumference growth (OR = 1.04; 95% CI: 1.02-1.06), but not with unit increments of BMI (OR = 0.99; 95% CI: 0.95-1.04) [44]. Given that greater abdominal pressure caused by central adiposity might be the mechanism behind SUI, waist circumference may be a better variable to take into account when assessing SUI in obese patients.
A limitation of our study comes from the relatively small sample size (54 patients) and the short follow-up period (18 months). Also, it was a non-randomized study, and no control group was used. We focused only on female patients, considering that urinary incontinence in males is more complex and subject to more variables. Another potential limitation we identified is that the initial screening for UI was conducted by a bariatric surgeon; however, we consider this task to be very simple and unlikely to induce any bias in the selection of our patients. We continue to gather data both from the already included patients who come for follow-up visits and from new patients included in our study. We expect that once the sample size grows larger, the evidence for improvements in UI after bariatric surgery will become statistically significant in all subgroups defined by type of incontinence.

Conclusions

Obesity is a rapidly expanding public health problem, and a growing number of publications examine the benefits of bariatric surgery for urinary incontinence. Half of the obese female patients eligible for bariatric surgery in our study reported symptoms of urinary incontinence. Patients who are candidates for bariatric surgery should be advised that improvement in UI may also be a significant benefit of their intervention. Our data show that bariatric surgery is able to cure urinary incontinence in one of three obese women. A significant improvement was obtained in more than two-thirds of the patients, regardless of the type of incontinence. Almost half of the patients with stress urinary incontinence declared no involuntary leakage of urine after surgery. The findings of this study suggest that weight loss via bariatric surgery is an efficient method of managing SUI in obese women. A larger sample is needed to demonstrate the beneficial effect on urgency UI and mixed UI. For an obese female with urinary incontinence, treatment for obesity should come first, and incontinence should be treated only if symptoms remain after surgery.

Figure 1. Flow diagram for study participants.

Table 1. Demographic characteristics of patients.

Table 2. BMI and prevalence of UI before and after surgery. In some patients, symptoms improved significantly, although they were still present.

Table 3. Quality of life assessments before and after surgery.
SUITABILITY OF THE SELECTED LOCAL MAIZE HYBRIDS FOR SILAGE PRODUCTION

The main goal of this study was to observe the properties of fifteen different genotypes of maize hybrids from Serbia in order to determine their suitability for the production of high-quality silage for ruminant feed. The research was conducted in a two-year field experiment at the location of the Maize Research Institute in Zemun Polje, Serbia, and the laboratory analyses included the yield structure of the investigated maize hybrids, assessment of the lignocellulosic fiber composition, as well as the in vitro dry matter digestibility of the whole plant samples. All maize hybrids showed good quality traits that are a prerequisite for the production of high-quality silage.

Introduction

The total 2021 world production of maize (Zea mays L.), one of the most important cultivated crops, amounted to 1125.03 million metric tons (Shahbandeh, 2021). The history of maize hybrids started in 1918, when D. F. Jones created the first double-cross inbred maize that was later introduced experimentally in 1924 by H. A. Wallace (Sutch, 2011). Furthermore, the first attempts at ensiling maize for forage were made in the late nineteenth century; nevertheless, the extensive use of silage maize in cattle diets began decades later, after flint x dent hybrids tolerant to low temperatures were developed (Barrière, 2018). Silage maize is, at the present time, among the most important annual forage crops used worldwide as a main source of energy in ruminant nutrition. It is not difficult to produce and store, and can be consumed daily throughout the year (Barrière, 2018). The breeding of silage maize has lately been especially focused on designing hybrids with improved whole plant yield and nutritive value, as well as agronomic traits that provide better ensiling quality (Terzić, 2020). Studies have shown that the in vitro digestibility of forages decreases as the maturity of the plant increases beyond the optimal physiological stage (Johnson, 1999). The main aim of this study was to investigate some of the most important quality parameters of the whole plant of fifteen maize hybrids harvested at the physiological maturity stage, in order to determine their suitability for the preparation of silage for ruminant nutrition.

Material and methods

Fifteen dent maize hybrids of different genetic backgrounds and maturity groups created at the Maize Research Institute Zemun Polje were tested in field experiments in two consecutive years (2019 and 2020). The field trial was set up in the experimental field of the Maize Research Institute, Zemun Polje, Belgrade, Serbia (44°52´N, 20°19´E, 81 m asl), according to a randomized complete block design with two replicates. The elementary plot size amounted to 21 m2 and the sowing density was 60,000 plants per hectare. Plants from each replication were harvested from the two inner rows of the experimental plot (area of 7 m2), and five average plants from each replication were singled out for further testing. The plants were harvested in the full waxy phase of maize maturity, i.e. between the one-quarter and one-half milk-line kernel stages (whole plant dry matter content approximately 30-35%). Samples of the whole plants, plants without ears, and ears were first chopped, then dried at 60˚C for 48 h in a forced-air drying oven until constant moisture was reached, and afterward ground in a mill with 1-mm mesh sieves.
The total dry matter content was analyzed by drying the samples at 105˚C in a laboratory drying oven for 12 h, until a constant mass was reached. The fiber analysis included determination of the lignocellulosic constituents: neutral detergent fiber (NDF), acid detergent fiber (ADF) and acid detergent lignin (ADL), as well as hemicellulose and cellulose, according to the detergent method of Van Soest (1980), with some modifications (Mertens, 1992). The in vitro dry matter digestibility of the whole plant samples was determined by the enzymatic method of Aufrere (2007). The results are shown as percentages of dry matter (d.m.). The data were analyzed in Minitab 19 using one-way analysis of variance (ANOVA) with Fisher's LSD test, and are reported as the mean ± standard deviation of at least three repetitions. Differences between the means with probability P<0.05 were accepted as statistically significant. The level of confidence was set at 95%.

Results and discussion

The results shown in Table 1 represent the yield structure of the fifteen investigated maize hybrids. The dry matter content of the harvested maize plants ranged from 33.89% (ZP 666) to 41.34% (ZP 747); the highest whole plant dry matter yield (23.25 t ha-1) was achieved by hybrid ZP 749, and the highest ear dry matter yield was found in hybrid ZP 745 (11.30 t ha-1). The contents of the individual lignocellulosic fibers of the whole maize plant, considered among the most important indicators of the nutritional value and technological quality of maize biomass intended for ruminant nutrition, are shown in Table 2. The NDF content ranged from 43.49% (ZP 600) to 49.06% (ZP 667). The NDF portion of the lignocellulosic complex consists of cell wall material, including cellulose, hemicellulose, lignin and silica. Lignin is completely indigestible, and it reduces the availability of cellulose and hemicellulose in the silage. NDF fiber is required by ruminant animals, even though it can be a negative indicator of silage quality. As the maize plant matures, the NDF share increases and animals tend to consume less forage. A study by Bittman (2004) has shown that the content of ADF, mainly consisting of cellulose, lignin and inorganic silica, is negatively correlated with the digestibility of feed.
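In the Van Soest detergent scheme used here, hemicellulose and cellulose are conventionally estimated by difference between the detergent fractions (hemicellulose as NDF minus ADF, cellulose as ADF minus ADL). A minimal sketch follows; the values and replicate arrays are hypothetical illustrations, not the measured data:

from scipy import stats

def hemicellulose(ndf, adf):
    # Hemicellulose is conventionally estimated as NDF - ADF (% of d.m.).
    return ndf - adf

def cellulose(adf, adl):
    # Cellulose is conventionally estimated as ADF - ADL (% of d.m.).
    return adf - adl

# Hypothetical whole-plant values (% of d.m.) for one sample:
ndf, adf, adl = 46.2, 25.8, 3.1
print(hemicellulose(ndf, adf), cellulose(adf, adl))  # 20.4 and 22.7

# One-way ANOVA across hybrids, as in the statistical analysis described
# above (three hypothetical replicate NDF measurements per hybrid):
zp_600 = [43.2, 43.7, 43.6]
zp_666 = [46.9, 47.4, 47.1]
zp_667 = [48.8, 49.3, 49.1]
f_stat, p_value = stats.f_oneway(zp_600, zp_666, zp_667)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # significant if p < 0.05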
Dry matter digestibility is one of the most important parameters of silage maize. The in vitro dry matter digestibility of parts of the maize plant depends on the hybrid; therefore, the quality of maize hybrids is determined by the morphology and structure of the plant (Bertoia, 2014). The in vitro dry matter digestibility determined for the investigated hybrids is shown in Graph 1.

Graph 1. In vitro dry matter digestibility of the maize hybrids whole plant (%)

Hybrid ZP 173/8 had the highest in vitro dry matter digestibility (63.87%), followed by ZP 606 (61.37%) and ZP 444 (61.00%). These findings are in accordance with previous studies. A number of previous studies reported that the digestibility of the stover portion (plant without ear) of maize silage decreases significantly with advancing maturity, from 3 weeks before to 5 weeks after physiological maturity (Johnson, 1999). Furthermore, an increasing share of grain as the maize plant matures blurs the correlation between plant maturity and digestibility of whole plant maize silage. As a strategy in future breeding programs for improved silage maize hybrids, more attention should be directed toward creating genotypes that maintain high in vitro dry matter digestibility while increasing grain content at advanced stages of maturity.

Conclusion

Maize hybrids investigated in this two-year study have shown the traits required for high-quality silage production. Apart from a good dry matter yield structure, an optimal lignocellulosic fiber content and sufficient dry matter digestibility are properties that make the investigated maize hybrids suitable for ruminant feed production. Hybrid ZP 173/8 had the highest in vitro dry matter digestibility (63.87%), followed by ZP 606 (61.37%) and ZP 444 (61.00%). The results imply that the agronomic traits, chemical composition and other genetically predisposed properties of maize hybrids are crucial for their end-use. These findings can be of great importance for future breeding programs directed toward creating new and improved silage maize hybrids.
Resistance to natural and synthetic gene drive systems

Scientists are rapidly developing synthetic gene drive elements intended for release into natural populations. These are intended to control or eradicate disease vectors and pests, or to spread useful traits through wild populations for disease control or conservation purposes. However, a crucial problem for gene drives is the evolution of resistance against them, preventing their spread. Understanding the mechanisms by which populations might evolve resistance is essential for engineering effective gene drive systems. This review summarizes our current knowledge of drive resistance in both natural and synthetic gene drives. We explore how insights from naturally occurring and synthetic drive systems can be integrated to improve the design of gene drives, better predict the outcome of releases and understand genomic conflict in general.

KEYWORDS: CRISPR-Cas9, fitness costs, meiotic drive, population suppression, selfish genetic elements, sex ratio distorter, transposable element, Wolbachia

| INTRODUCTION

Organisms require networks of cooperating genes. Generally, alleles spread through populations by increasing the reproductive success of the organism as a whole. However, some alleles, defined here as drivers, selfishly bias reproduction to increase their own representation in the next generation, at a cost to the rest of the genome (Burt & Trivers, 2006). For example, "segregation distorters" are a type of driver that subverts the usual rules of Mendelian inheritance in such a way that they are inherited by over 50% of the descendants of heterozygous individuals; they occur naturally in many species, including plants, fungi, nematodes, insects and mice. Another example is drive by mitochondria, the key endosymbiont of eukaryotes, which damage male function in many hermaphroditic plants (Burt & Trivers, 2006). This loss of male function diverts resources to seed production, enhancing transmission of the mitochondrial genome, which is typically uniparentally transmitted through ovules but not pollen. Selfish genetic elements likely occur in all species and can have major impacts on the evolution and ecology of their hosts (Burt & Trivers, 2006). Crucially, the super-Mendelian rate at which gene drivers are transmitted over generations can allow them to spread through populations despite costs. This has inspired researchers to propose using gene drives to solve major biological challenges related to public health, the environment and agriculture (Burt, 2014; Champer, Buchman, & Akbari, 2016; Piaggio et al., 2017; Raban et al., 2020). Two broad types of gene drives have been proposed: population suppression gene drives and population replacement gene drives. Population suppression gene drives can be employed when reduction or elimination of a population (e.g. of disease vectors, agricultural pests or invasive species) is desired. Replacement gene drives offer the potential to alter existing populations for human benefit, for example by spreading alleles or endosymbionts that reduce the ability of mosquitoes to transmit malaria. Strains of the intracellular bacterium Wolbachia reduce the ability of mosquitoes to transmit dengue and other viruses. Wolbachia strains have already been successfully deployed in Australia and elsewhere, spreading through populations by creating mating incompatibilities that disproportionately reduce the fitness of females that do not carry Wolbachia, and reducing the threat of dengue (Nazni et al., 2019; Ryan et al., 2020).
New synthetic population suppression and replacement drive systems are being created with increasing regularity, highlighting the enormous promise of CRISPR-Cas9 and other new molecular tools for editing genomes (Champer et al., 2016). However, gene drives impose costs, certainly on outcompeted alleles, and often on the individual as a whole. Costs at the individual level can arise directly via the mechanism of transmission, for example through the costly death of gametes that carry rival alleles, or because the driver carries costs such as associated low-fitness alleles or, in driving endosymbionts, metabolic costs (Burt & Trivers, 2006). The resulting selection can lead to the rapid evolution of resistance traits that prevent the driver from spreading. As a result, many natural drivers have been completely suppressed, only showing drive when crossed into distant relatives that do not carry suppressor alleles (Courret, Chang, Wei, Montchamp-Moreau, & Larracuente, 2019; McDermott & Noor, 2010). This research suggests that we should expect synthetic gene drives, especially those with large fitness effects, to select for resistance, which will potentially undermine their ability to spread, and to modify or suppress populations (Barrett et al., 2019; Holman, 2019; Unckless et al., 2017). For synthetic gene drives to be effectively deployed, we urgently need to understand how quickly resistance will arise. Does resistance usually arise through selection on pre-existing genetic variation, or does it more often involve novel mutations that appear once drive has reached a high frequency? What fraction of natural gene drives reach fixation, go extinct, reach a stable polymorphism or are fully suppressed, and how can we address this question given the difficulties of detection once a gene drive has fixed or been lost? Does resistance to drive typically involve the same fundamental mechanism (e.g. loss of the driver's target, or "defusing" of the driver by interfering RNAs) across species and types of drivers, or is the resistance mechanism highly idiosyncratic? In this review, we synthesize what is known about how resistance evolves against both natural and synthetic drives, and point out gaps in our knowledge. We begin by reviewing how resistance has evolved in well-studied natural systems, examining resistance that interferes directly with the molecular mechanisms of drive and then resistance through behaviour and life history. We then turn to the current evidence regarding resistance to synthetic drives. Finally, we discuss the implications for the design of "evolution proof" synthetic gene drives.

| RESISTANCE TO GENE DRIVES IN NATURAL SYSTEMS

In any drive system, selection for resistance will act on the target locus itself, on genes linked to the target and in some cases on the entire genome.

FIGURE 1 The evolutionary impact of a gene drive, as measured by the magnitude and location of the costs imposed (yellow/red gradients). Boxes represent individuals; white rectangles are chromosomes within the organism. Drive creates selection pressure for the three drive resistance mechanisms discussed in this review (blue). The selection pressure for drive resistance is highest at the target locus itself (1a), where rivalling homologous genes suffer both from reduced transmission due to drive (yellow) and from (potential) fitness costs to the organism (red). Selection pressure on unlinked loci throughout the genome to disrupt drive will be a function of organismal drive costs (1b). Finally, gene drive may create selection for mechanisms that suppress the drive at the population level (2).
Generally speaking, selection for resistance at the target and linked loci becomes stronger with more biased transmission, whereas the strength of selection for resistance on the rest of the genome increases with greater fitness loss for the organism (Figure 1). The two are often positively related, leading to strong selection for resistance at both the target locus and genome-wide. We classify resistance as adaptations that reduce the spread of drive elements either by (a) interfering with the molecular mechanism of drive (which we term "suppression" in this review) or by (b) altering some aspect of the behaviour or life history of carriers, which in turn reduces the ability of a driver to spread. We use these categories to structure our review of known drive resistance factors, incorporating natural and synthetic drive systems.
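The intuition that selection at the target scales with the transmission bias can be made concrete with a textbook single-locus recursion. This is a deterministic sketch under simplifying assumptions of our own (random mating, a driver D transmitted to a fraction k of the offspring of heterozygotes, and a viability cost s confined to driver homozygotes), not a model taken from any of the cited studies:

\[ p' = \frac{p^{2}(1-s) + 2k\,p(1-p)}{1 - s\,p^{2}} \]

With no cost (s = 0), the per-generation change reduces to

\[ \Delta p = p(1-p)(2k - 1), \]

which vanishes under Mendelian transmission (k = 1/2) and grows with the transmission bias 2k - 1. A resistance allele that restores k towards 1/2 at the target therefore gains an advantage proportional to that same bias, which is the sense in which stronger drive selects more strongly for target-site resistance.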
| Mutations at the target site and suppression of drive machinery

One way to evolve resistance is to modify the target of drive so that it is no longer susceptible. For example, a gene drive that spreads itself by targeting a specific sequence of nucleotides or peptides might impose selection that favours genotypes carrying an altered sequence. Below, we review the evidence for this mode of resistance in nature. For a brief overview of the biological differences between the natural gene drives, see Table 1.

TABLE 1 A highly simplified view of mechanisms and associated costs for the gene drive systems discussed in this paper. Please note that all systems are considerably more diverse than described here.

Gene drive system | Mechanism | Key effects
Gamete killer | Drives by killing or damaging gametes that do not carry the driving chromosome. | Reduces sperm number. If on a sex chromosome, can bias population sex ratios.
Female meiotic drive | Drive chromosome manipulates meiosis so rival chromosomes are disproportionately discarded in the polar bodies. | Costs relatively unknown, but some well-studied systems are associated with low fitness.
Transposable elements | Drive sequences copy themselves into other locations in the genome. | Largely deleterious due to gene disruption and DNA breakage.
Genetic incompatibility systems | Factors inherited via the cytoplasm, such as organelles and endosymbionts, increase the fitness of females at a cost to males; mechanisms are extremely diverse. | Effects can include loss of male function, feminization and death of offspring. Can be highly costly.
Homing-based systems | Induce targeted double-strand DNA breaks that copy and insert the drive construct during DNA repair. | Effects depend on design; can include sterility, offspring sex ratio bias and disease resistance.
Medea-like systems | Chromosomes bearing a set of lethal loci in which each suppresses the other, killing offspring that do not inherit the system. | Reduced viability if not all loci are inherited. Reduced offspring production.

| Sex chromosome linked gamete killers

Naturally occurring "gamete killer" meiotic drivers have often been found on sex chromosomes, where they cause distortion in the transmission of the heterogametic sex (Hurst & Pomiankowski, 1991). The evolution of sex chromosome drivers is facilitated by the differentiation between X and Y chromosomes (and Z/W). Driver alleles arising on a well-differentiated sex chromosome therefore have potential targets at many sites that are never linked to the driver: for example, an X-linked driver could promote its own transmission by destroying gametes containing a particular Y-linked locus (Jaenike, 2001). Sex-linked drivers generate especially strong selection for resistance because they alter the population sex ratio. A bias in the population sex ratio creates strong selection favouring individuals/genotypes that produce relatively more of the under-represented sex (Fisher, 1930). This "Fisherian sex ratio selection" confers an additional fitness benefit to alleles that confer resistance to drive in populations showing a biased sex ratio due to the presence of a sex-linked driver. We therefore expect to see rapid evolution of resistance against sex-linked drivers (Hurst & Pomiankowski, 1991). We illustrate this using sex chromosome drive systems in Drosophila simulans. In the Paris Sex Ratio (SR) system, two X-linked drivers together prevent the disjunction of the Y sister chromatids in the second meiotic division. One of these encodes HP1D2, a protein that binds Y chromosome heterochromatin in premeiotic cells, suggesting it targets repeated DNA sequences (Helleu et al., 2016). The Y chromosome of D. simulans exhibits substantial variation in resistance to Paris SR drive, with a wide continuum of phenotypes from high susceptibility (95% female progeny) to complete resistance (50% female progeny; Montchamp-Moreau et al., 2001). These more or less resistant Y chromosomes show extensive structural rearrangements affecting satellite sequences, which strongly suggests that resistance occurs through changes in target repeat sequences (Helleu et al., 2019). In addition, Paris SR is suppressed by as yet unidentified autosomal loci (Courret et al., 2018). The Winters SR is another sex ratio-distorting system in D. simulans, with a drive phenotype different to Paris SR, killing sperm after meiosis. An X-linked gene, Dox, and likely its progenitor Mdox, are involved in drive. Winters SR is typically entirely suppressed by high frequencies of the autosomal suppressor locus Nmy. Nmy arose from a retrotransposed inverted repeat of Dox and produces an antisense RNA that represses Dox and Mdox through the RNA interference pathway (Lin et al., 2018). These two SR systems illustrate empirically the dynamic nature of the spread of drivers, followed by the rise of suppressors and then the loss of drivers, evolving in a continuous cycle of "red queen" dynamics. However, although many meiotic drive systems we observe in nature have arrived at such a dynamic equilibrium, others have not. There is some evidence that drive can cause extinction, at least in local populations (Pinzone & Dyer, 2013). Other drive systems seem to occur at stable frequencies in different populations, sometimes in geographical clines, for reasons that are not well understood, and there is some evidence that this stability can last for hundreds of generations (Price et al., 2014).

| Autosomal gamete killers

Autosomal gamete-killing meiotic drivers function by killing gametes that carry alternative alleles (Bravo Núñez, Nuckolls, & Zanders, 2018). Some of the best-studied systems are the spore-killers in various fungal species.
First, in Neurospora, an RNA interference-based genome defence mechanism has been shown to be a suppressor of spore-killing alleles (Svedberg et al., 2020). Secondly, there are multiple copies of drivers in the filamentous fungus Podospora anserina, one of which is a known suppressor (Grognet et al., 2014). Another well-studied system is Segregation Distorter (SD) in Drosophila melanogaster, which contains a driver, enhancers of drive and a target site, found in a region of low recombination (Larracuente & Presgraves, 2012). Males heterozygous for SD and a sensitive wild-type chromosome suffer chromatin condensation defects and dysfunction in wild-type sperm. The target site consists of a large block of tandem repeats. The number of copies of the tandem repeat correlates with sensitivity to drive, and alleles with fewer than ~300 repeats are insensitive to drive (Wu et al., 1988). There is substantial variation in target copy number in D. melanogaster populations across the globe. Frequencies of SD are low in natural populations, suggesting a balanced polymorphism, but evidence for genetic sweeps of SD instead suggests rapid turnover of SD chromosomes, either because of competition between SD variants or arms races with suppressors (Brand, Larracuente, & Presgraves, 2015). Unlinked genetic suppressors are known (Hiraizumi & Thomas, 1984), but they have not been studied at the molecular genetic level.

| Female meiotic drive

Female meiotic drive exploits the asymmetry of female meiosis to influence which homolog of the chromosome pair is distributed to the egg nucleus as opposed to the excluded polar bodies. Thus, the fitness of the nondriving homolog is reduced, but costs to the organism are small in terms of gamete production. If costs are negligible, then female drivers might readily spread and fix, since only a small region of the genome close to the drive locus would be under selection to evolve resistance. However, in Mimulus monkeyflowers, female drivers impose fitness costs when homozygous (Fishman & Kelly, 2015). In maize (Zea mays), the Kindr (Ab10) driving knobs system has heterozygous and homozygous fitness costs in seed set and weight. Resistant alleles block expression of the Kindr complex and are characterized by small interfering RNAs and DNA methylation.

| Transposable elements

Transposable elements (TEs) are DNA sequences that can change their location within a genome, often copying themselves in the process (Feschotte & Pritham, 2007). They have been found in prokaryotes, eukaryotes and even giant viruses (Sun et al., 2015). Transposition is generally deleterious to the individual, resulting in DNA breakage and potentially ectopic recombination, as well as potentially disrupting genes (Feschotte & Pritham, 2007). Mechanisms for suppressing TEs are diverse, and many have ancient origins, such as genome methylation, which silences TE expression. Typically, TE invasions follow a cycle, with a novel TE invading a species, or a TE already in the genome escaping suppression (Bousios & Gaut, 2016). The TE rapidly replicates in the genome of the species, imposing costs, which select for suppression. This invasion and suppression can occur extremely quickly. In Drosophila melanogaster, a DNA-based TE invaded in the early 1950s and had spread worldwide by the 1980s (Anxolabéhère, Kidwell, & Periquet, 1988). In around the year 2000, this TE jumped to the closely related D. simulans and spread even faster worldwide through that species (Hill, Schlötterer, & Betancourt, 2016).
RNAi suppression of the TE evolved extremely rapidly in both species, resulting in the TE being largely suppressed in D. simulans populations within two decades of invasion. This fast evolution of suppression is facilitated by piRNA clusters in animals, which appear to perform a defensive function against TEs (Czech et al., 2018), similar to the CRISPR libraries that provide adaptive immune defence against viruses and plasmid gene drivers in bacteria (Barrangou & Marraffini, 2014). When a TE attacks the organism, sequences from the invading TE are recruited to the piRNA clusters, providing a DNA template that guides RNAi silencing of that TE, preventing it from further replication (Brennecke et al., 2007). The maintenance of these genomic regions as defences against TEs suggests it is possible that other genomic regions may also be maintained over evolutionary time because they defend against TEs or other selfish genetic elements.

| Genetic incompatibility systems

Cytoplasmic incompatibility can occur between nuclear and mitochondrial DNA (mtDNA), as mtDNA is transmitted almost exclusively from mother to offspring. The most widely recorded example of cytoplasmic incompatibility is cytoplasmic male sterility, in which hermaphroditic plants are rendered male-sterile and are functionally female. Cytoplasmic male sterility is very widely distributed among angiosperm plant species, with populations consisting of both hermaphroditic and female plants (Touzet & Budar, 2004). Nuclear suppressors that restore male fertility (called Rf genes) are commonly found within cytoplasmic male sterility systems. Many Rfs are members of the pentatricopeptide repeat protein family, involved in processing and editing RNA (Gaborieau, Brown, & Mireau, 2016). They typically act by binding directly to the mitochondrial transcripts, interfering with the production of male sterility proteins (Chen & Liu, 2014). Rfs show evidence of rapid evolution and diversification (Fujii, Bond, & Small, 2011), suggesting ongoing cycles of conflict with cytoplasmic male sterility genes. Male-killing caused by some Wolbachia bacteria, also inherited via the cytoplasm, provides a demonstration of how quickly suppression can spread. Pacific island populations of the butterfly Hypolimnas bolina are infected with a Wolbachia strain that causes the death of the sons of infected females (Dyson, Kamath, & Hurst, 2002). This benefits infected daughters due to decreased larval competition with siblings, allowing Wolbachia to reach extremely high frequencies, resulting in populations with fewer than one male per hundred females (Dyson & Hurst, 2004). A nuclear gene that rescues the male embryos recently appeared and has spread rapidly; in the Samoan Hypolimnas population, an equal population sex ratio was restored over the course of 8-10 generations (a single year) after resistance reached the island (Charlat et al., 2007; Hornett et al., 2014). In another example, feminizing Wolbachia in the woodlouse Armadillidium vulgare often reach very high frequencies within populations, such that the only males present come from eggs that by chance do not inherit sufficient Wolbachia to convert them into females (Leclercq et al., 2016).

| Systems where suppression has not been found

Although mutations have allowed resistance to evolve in many systems, there are examples of both sex-linked and autosomal drivers for which little or no suppression has been found.
For example, in the well-studied t haplotype of house mice, distorter loci are bound together in inversions and cause dysregulation of development in sperm carrying the wild-type target allele (Herrmann & Bauer, 2012; Lindholm et al., 2019). Suppression of the t haplotype has not been found in wild populations (Ardlie & Silver, 1996), although transmission differences have been reported in crosses between laboratory strains (Bennett, Alton, & Artzt, 1983; Gummere, McCormick, & Bennett, 1986). In one closely monitored study population, the t haplotype declined and went extinct within eight years, which is thought to be due to negative density-dependent effects on fitness (Manser et al., 2011) and positive density-dependent effects on dispersal (Runge & Lindholm, 2018), rather than to suppressors of t. The combination of strong distortion and lack of evidence of suppression has led to plans to develop a synthetic sex chromosome driver from the t haplotype by adding a male sex-determining gene (Sry) to the t, for the purpose of controlling invasive house mouse populations on islands (Backus & Gross, 2016; Campbell et al., 2019). Similarly, the sex ratio-distorting X chromosome drive system in Drosophila pseudoobscura has been studied for almost a century, yet no evidence has been found of target-site variation leading to suppression, or indeed of any factors that reduce drive strength. This is puzzling given that SR reaches 30% frequency in populations in the south-western United States, imposes significant costs on the males that carry it and has apparently existed for hundreds of thousands of years (Kovacevic & Schaeffer, 2000), providing ample time for the evolution of resistance. In the related species D. subobscura, only an extremely weak suppressor of drive has been found, again despite a high frequency of drive in natural populations and substantial costs of drive (Verspoor et al., 2018). The same lack of suppressors occurs in Teleopsis dalmanni stalk-eyed flies, which again have a high-frequency SR drive system that imposes significant viability costs in males and females (Finnegan et al., 2019) and is estimated to be a million years old (Reinhardt et al., 2014). The hybridizing species D. testacea and D. neotestacea each bear driving X chromosomes, but the former shows strong autosomal suppression (Keais, Lu, & Perlman, 2020), whereas the latter shows no evidence of suppression at all (Pinzone & Dyer, 2013). Another possibility is that some gene drives are involved in ongoing coevolutionary arms races with resistance loci, such that the supposedly unresistable gene drives that we observe are those that have temporarily outpaced their suppressors for a short span of evolutionary time. The Hypolimnas case appears to be an example of this: the costs of Wolbachia sex ratio distortion were high and Wolbachia was very common, yet for at least a century there was no sign of resistance to the drive. When a resistance allele appeared, it rapidly spread across the species' range within a few decades (Hornett et al., 2014).

| Behavioural and life-history resistance against drive

One explanation for the lack of direct suppression of the mechanism of drive is the evolution of indirect resistance involving behavioural or life-history changes.
For example, self-medication, in which a Wolbachia-infected individual might reduce its titre by exposing itself to heat that impairs Wolbachia or by feeding on an antibiotic-rich diet, is a possible but untested idea (Abbott, 2014; Shikano, 2017; Snook et al., 2000). There may be many unexplored life-history or behavioural ways to resist drive. One of the best-known ideas is that noncarriers may avoid drive carriers as mates, preventing offspring from inheriting harmful drivers and improving offspring fitness. Theoretical models support this idea (Lande & Wilkinson, 1999; Manser et al., 2017; Randerson et al., 2000; Reinhold et al., 1999). However, this requires a trait that reliably reveals the presence or absence of drive (Lande & Wilkinson, 1999; Manser et al., 2017). Evidence of mate avoidance of drive carriers is weak or absent from the majority of systems studied. Female house mice avoid t-bearing males in some (Lenington & Coopersmith, 1992) but not all studies (Manser et al., 2015; Sutter & Lindholm, 2016), and it has yet to be demonstrated whether mate preference has been strengthened for avoidance of drive carriers. Disentangling general condition-dependent mate preferences from evolved resistance to drive through avoidance of mating with drive carriers can be highly challenging. In the Winters SR system of D. simulans, the strength of drive declines from 93% to 60% daughters when males are reared at high temperatures, and older males also show a decline in drive (Tao, Masly, et al., 2007). This could promote females evolving a preference for males unlikely to have strong drive due to these nongenetic causes (i.e. high-temperature-reared or older males), but to date this has not been examined, although age-based mate choice is common in Drosophila and other organisms (Verspoor et al., 2015). In populations where feminizing endosymbionts are common, males are rare, and males will benefit from mating with uninfected females, who produce more sons. In this case, males have been found to preferentially mate with uninfected ZW females rather than with genetically male ZZ individuals who have been feminized. Whether this has suppressed Wolbachia frequency in populations has not been established. In general, the lack of choice against drive carriers may be due to evolutionary pressure to reduce detectability, with the least detectable gene drive alleles outcompeting rival variants, but this remains to be investigated. Another route for drive-susceptible females to avoid producing offspring with drive carriers is to increase the intensity of sperm competition. In several systems of gamete-killing male meiotic drive, drive-carrying males are inferior sperm competitors because of a reduction in sperm number and quality. For example, in controlled experimental matings, t-carrying males gain only 12% of paternity when a female mates with both a t-carrying and a wild-type male (Sutter & Lindholm, 2015). Females could therefore mate with several males indiscriminately and rely on sperm competition to suppress fertilization by drive sperm (Haig & Bergstrom, 1995). An increase in the propensity to mate with multiple males could evolve as a form of resistance to the presence of a driver within the population. Multiple mating potentially evolves more easily than precopulatory mate choice, as no discrimination between driver-carrying and driver-free individuals is required (Haig & Bergstrom, 1995). The evolution of higher remating rates in response to the presence of a sex ratio distorter was seen within 10 generations in a laboratory experiment using D. pseudoobscura.
So it is possible that in polyandrous species, sperm competition reduces the success of gamete killers enough that selection for direct genetic suppression is reduced. As yet, there is no concrete evidence for this in nature.

| Homing-based drive systems

Many newly engineered systems are based on homing drives that mimic the mode of propagation of homing endonuclease genes (HEGs), a class of naturally occurring selfish genetic elements found in bacteria, fungi and other organisms (Burt & Trivers, 2006). Homing drives cut a specific target sequence and are copied into the broken chromosome during homology-directed repair; however, the break can instead be resolved by error-prone end-joining repair, increasing the rate of mutation at the target site without insertion of the gene drive. These novel alleles will confer resistance, as they have a different sequence, and may preserve gene function. In laboratory experiments with flies and mosquitoes, resistance to CRISPR-Cas9 homing drives emerges rapidly, in particular when the driver targets single sites (Champer et al., 2017; Gantz et al., 2015; Hammond et al., 2017; KaramiNejadRanjbar et al., 2018). Functional target gene mutants can be generated at considerable frequency within one generation by in-frame indels (KaramiNejadRanjbar et al., 2018). One approach to delay the evolution of resistance at the target site is to design targets at highly conserved regions in which sequence variation, including in-frame indels, cannot be tolerated because any change is associated with high fitness costs (Kyrou et al., 2018). Alternatively, a suite of sites can be targeted by the drive construct. When the aim is gene replacement rather than population suppression, gene drives are designed to have low fitness costs and to avoid disruption of normal host gene function. This should constrain selection for resistance alleles. But the "cargo" of replacement genes is unlikely to be cost-free. Examples of cargoes include genes that encode resistance or susceptibility to disease or toxins, and genes that alter sexual phenotype. All of these will carry costs, and in the long term they are expected to be lost due to the spread of loss-of-function mutations. When loss of function is caused by deletion, this may even enhance gene drive spread (i.e. of a null allele); replacement gene drives are only useful as long as the cargo remains intact. The assumption is that the replacement gene will spread and persist sufficiently long to provide its public health benefit (Beaghton et al., 2017). Other types of cargo may be more resilient to loss, for example where the cargo is beneficial to the organism, such as thermal tolerance genes or symbionts (Piaggio et al., 2017). Finally, expression of the endonuclease is unlikely to be without fitness cost and is thus subject to mutational decay. This will mostly come into play at the point when the drive construct has already successfully propagated itself in a population. These constraints have hardly been investigated, but seem likely to place limits on the spread and effectiveness of homing gene drives.
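The race between homing and end-joining-generated resistance can be illustrated with a simple three-allele recursion. The following is a deterministic sketch under our own simplifying assumptions (random mating; germline cutting of wild-type alleles in drive heterozygotes with probability c; a fraction h of cuts resolved by homing and the remainder converted to a resistant allele; a multiplicative fitness cost s per drive copy), not a published model:

# Alleles: W (wild type), D (drive), R (cut-resistant repair product).
def next_allele_freqs(p, c=0.9, h=0.9, s=0.05):
    gametes = {"W": 0.0, "D": 0.0, "R": 0.0}
    mean_fitness = 0.0
    for a in ("W", "D", "R"):
        for b in ("W", "D", "R"):
            geno = p[a] * p[b]                        # Hardy-Weinberg proportions
            w = (1 - s) ** ((a == "D") + (b == "D"))  # multiplicative cost per D copy
            mean_fitness += geno * w
            for allele in (a, b):
                if allele == "W" and {a, b} == {"W", "D"}:
                    # In W/D germlines, W is cut with probability c; cuts are
                    # resolved to D (homing, fraction h) or to R (otherwise).
                    gametes["W"] += geno * w * 0.5 * (1 - c)
                    gametes["D"] += geno * w * 0.5 * c * h
                    gametes["R"] += geno * w * 0.5 * c * (1 - h)
                else:
                    gametes[allele] += geno * w * 0.5
    return {k: v / mean_fitness for k, v in gametes.items()}

p = {"W": 0.95, "D": 0.05, "R": 0.0}  # hypothetical 5% drive allele release
for _ in range(50):
    p = next_allele_freqs(p)
print(p)  # R escapes both cutting and the cost, so it is expected to win out

Even with high homing efficiency, the drive's initial surge is followed by accumulation of R alleles, mirroring the rapid emergence of resistance reported in the cage experiments cited above.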
| Synthetic sex ratio distorters

The X chromosome is the target in engineered systems that aim at distorting the sex ratio towards males. One approach, inspired by the mode of action of natural sex distorters in the mosquitoes Aedes aegypti and Culex pipiens (Wood & Newton, 1991), operates by targeting the X-linked rDNA cluster with an endonuclease active during spermatogenesis. The lack of target-site resistance, at least when observed at the limited scale of population cage experiments, reflects the use of extremely conserved rDNA target sequences that are present in hundreds of copies on the X chromosome, although even this cannot completely remove the possibility of resistance evolving. Gene drive systems targeting the heterogametic sex chromosome have only been investigated theoretically (Holman, 2019; Prowse et al., 2019) and in preliminary experiments in a house mouse system (Prowse et al., 2019).

| Wolbachia

The cytoplasmic incompatibility wMelPop strain of Wolbachia was originally isolated from a laboratory screen of D. melanogaster, where it shortens lifespan (Min & Benzer, 1997). Hosts might therefore be expected to evolve resistance to such costly infections (Bull & Turelli, 2013). Alternatively, as high temperature can eliminate Wolbachia infections, it might be possible for mosquitoes to suppress infections by altering their temperature preferences. However, a trial introduction of Wolbachia has seen maintenance of strong cytoplasmic incompatibility and relatively stable frequencies in Australian field populations for seven years since their release, suggesting this may be unlikely, or at least slow to evolve (Ryan et al., 2020). After nearly a decade of use, there is as yet no evidence of any type of resistance evolving, and the ability to block dengue virus has not been lost (Ross, Turelli, et al., 2019; Ryan et al., 2020). A further question is whether Wolbachia and dengue will enter a coevolutionary arms race against one another in these populations.

| Medea and underdominance-like systems

Medea-like systems encode a maternal toxin and zygotic antidote, killing offspring that do not inherit the Medea gene drive (Beeman, Friesen, & Denell, 1992). Synthetic underdominance systems are conceptually similar, consisting of a set of lethal loci, each associated with a suppressor of the other (Davis, Bax, & Grewe, 2001). Individuals inheriting only one of the loci carry a lethal locus but not its suppressor, resulting in reduced viability or fertility. Resistance to these systems is likely to occur via changes to the toxin's target. For example, an underdominant maternal-effect lethal introduced into the soft-fruit pest D. suzukii depends on a miRNA toxin and a zygotic antidote to function and will be impaired by variation at the miRNA binding site. Indeed, a recent survey shows natural variation in the miRNA toxin target sites (Buchman, Marshall, Ostrovski, Yang, & Akbari, 2018). Population cage experiments found that the Medea drive was unable to persist in populations, likely due to a combination of significant fitness costs of the driver and standing variation in resistance present in the cage populations. In addition to target-site mutation, Medea and similar toxin-antidote systems could also encounter resistance through driver inactivation, either through direct suppression or through the spread of antidote-only alleles arising from mutational inactivation of toxin production. The single study investigating the stability of a D. melanogaster underdominance system found no evidence of resistance evolution over > 200 generations (Reed et al., 2018). Finally, there has been recent theoretical proof of principle of other Medea-like systems that rely on either CRISPR-Cas9 transcriptional overactivation of an endogenous target gene as the "toxin" and an insensitive copy of that target as the "antidote", or CRISPR-Cas9 cleavage as the "toxin" and a resistant target gene as the "antidote" (Champer, Kim, Champer, Clark, & Messer, 2019). These too will face similar types of resistance (e.g. target-site mutation, driver inactivation). They are not in principle different from other synthetic gene drive systems that utilize CRISPR-Cas9, although their development is still at an early stage and not advanced enough for empirical investigation of resistance evolution.
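The Medea logic, under which only offspring that inherit the element survive a carrier mother's toxin, translates into a compact recursion. This is a sketch under our own simplifying assumptions (random mating, a single Medea allele M with no direct fitness cost, and complete killing of M-free offspring of M-carrying mothers), not a model from the cited papers:

def medea_next_freq(p):
    # Frequency of the Medea allele M after one generation of random
    # mating. +/+ offspring of M/+ mothers are killed by the maternal
    # toxin (they inherit no zygotic antidote); no other costs assumed.
    q = 1.0 - p
    dead = p * q * q          # fraction of zygotes that are killed
    return p / (1.0 - dead)   # M-allele share among survivors

p = 0.2  # hypothetical 20% starting allele frequency after a release
for _ in range(40):
    p = medea_next_freq(p)
print(round(p, 3))  # rises towards fixation despite the offspring killing

Because the frequency change is always positive for 0 < p < 1 in this cost-free sketch, any persistent failure of a real Medea element to spread, as in the cage experiments described above, points to fitness costs or standing resistance rather than to the inheritance logic itself.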
| THE STRENGTH OF SELECTION FOR RESISTANCE ACROSS GENE DRIVE SYSTEMS The strength of selection against a driver can vary dramatically between drive mechanisms and targets. At one extreme, a synthetic driver aimed at killing carriers, preventing reproduction, or distorting sex ratios will create extremely strong genome-wide selection for resistance against drive. In contrast, a biased gene converter that carries no cost to the organism will select for resistance at the target locus and linked sites, but have no effect on the rest of the genome. Drivers may themselves have a range of harmful pleiotropic effects, or be in linkage with deleterious alleles (Burt & Trivers, 2006). Fitness loss is often observed in both males and females, especially when drivers are homozygous (Dyer & Hall, 2019; Finnegan et al., 2019; Hamilton, 1967; Larner et al., 2019; Zanders & Unckless, 2019). To understand the strength of selection against novel drivers, we need to know their fitness consequences in the field. There is currently a lack of such information for virtually all considered synthetic gene drives. One of the few systems where such information is readily available is Wolbachia-carrying Aedes mosquitoes. The fitness costs associated with Wolbachia infection have been shown to be exacerbated under field conditions. As an example, the wMelPop Wolbachia strain, which invaded mosquito populations in semi-field cage trials, failed in several field trials because infected females had unexpectedly reduced egg viability in the field (Nguyen et al., 2015). This emphasizes the need for field studies of the fitness of drive carriers before gene drives are used in natural populations. The spatial structure of target populations is likely to be an important factor in deciding the fate of a gene drive system, as well as the way resistance may arise or spread. For example, Noble et al. (2018) showed that moderate amounts of gene flow between neighbouring populations are sufficient for a HEG-based replacement gene drive to spread between populations, even when resistance systematically arises in each individual population. More generally, we expect not only population genetic structure but also landscape and ecological characteristics to significantly impact the fate of a gene drive. Abiotic barriers (highways, open fields) have been shown to impede the spread of Wolbachia infections due to the limited dispersal ability of Aedes mosquitoes (Schmidt et al., 2017).
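A cartoon of the gene-flow point made by Noble et al. (2018) can be captured in a few lines. Unlike their model, this sketch includes no resistance alleles or fitness costs, and the conversion and migration rates are arbitrary assumptions.

```python
# Two demes linked by symmetric migration; a cost-free homing drive is
# released in deme A only. For conversion rate c, heterozygotes transmit
# the drive with probability (1 + c) / 2, giving the one-generation map
# q' = q * (1 + c * (1 - q)).

def drive_step(q, c=0.9):
    return q * (1 + c * (1 - q))

qA, qB, m = 0.05, 0.0, 0.01   # drive seeded in deme A; migration rate m
for gen in range(1, 31):
    qA, qB = drive_step(qA), drive_step(qB)
    # Tuple assignment evaluates the right-hand side first, so both
    # expressions use pre-migration frequencies.
    qA, qB = qA * (1 - m) + qB * m, qB * (1 - m) + qA * m
    if gen % 5 == 0:
        print(f"gen {gen:2d}  deme A: {qA:.3f}  deme B: {qB:.3f}")
```

Even this weak coupling (1% migration per generation) is enough for the drive to invade deme B shortly after it sweeps deme A.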
We can also imagine the evolution of tolerance to drive, meaning that the rest of the genome mitigates the deleterious effects of drive without directly interfering with the drive mechanism. For example, in stalk-eyed flies, males with drive invest more in testes to compensate for the loss of half of their sperm caused by the driver (Meade et al., 2020). Such changes do not interfere with drive and may actually enhance its spread. They lessen the deleterious costs of drive to the rest of the genome even though they do not improve fitness for the target chromosome. This reduction in the costs of the gene drive potentially reduces the strength of selection to suppress the driver. There has been surprisingly little consideration of how all these processes interact when a new driver evolves or enters a population. Does the evolution of an effective defence mechanism against a driver preclude the evolution of other defences? There may be some parallels with the evolution of multiple defences against predators and parasites, which suggests multiple defences commonly evolve (Broom, Higginson, & Ruxton, 2010). | STRATEGIES FOR DESIGNING SYNTHETIC DRIVE SYSTEMS TO REDUCE RESISTANCE A first strategy is careful choice of the target site, so that sequence changes introduced by resistance mechanisms (such as end joining) result in lethal products (Bull & Malik, 2017; Esvelt, Smidler, Catteruccia, & Church, 2014; Kyrou et al., 2018). A second strategy would be to target multiple sites. The same principle applies to Medea or other systems with "toxins" that act on specific sequence regions (Champer et al., 2017, 2018; Marshall et al., 2017; Noble et al., 2017). Combining multiple mechanisms, for example a suppressive gene drive that also distorts the sex ratio, could be another way to delay the emergence of resistance (Simoni et al., 2020).
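The benefit of multiplexing can be seen with back-of-envelope arithmetic: if functional resistance must arise independently at every targeted site, its per-chromosome probability shrinks geometrically with the number of sites. The per-site probability r below is an arbitrary assumption chosen for illustration.

```python
# If each cut is repaired into a functional resistant allele with
# probability r, a chromosome escapes an n-target drive only when all
# n sites convert independently, with probability r ** n.

r = 1e-2
for n in (1, 2, 3, 4):
    print(f"{n} target site(s): per-chromosome resistance probability ~ {r**n:.0e}")
```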
It is also critical to make the driver as stable as possible. For example, reducing the size of a CRISPR-Cas9 HEG transgene increases the likelihood that it will copy itself correctly, and integrating such a drive into endogenous genes may help achieve this goal (Nash et al., 2019; Hoermann et al., 2020). Clearly this may trade off with the benefits of more complicated drivers that reduce resistance evolution by attacking multiple loci. Additionally, repetitive DNA sequence (such as from multiple sgRNA or miRNA backbones) can reduce stability (Bzymek & Lovett, 2001; Marshall et al., 2017; Simoni et al., 2014), and reduction of such repetitiveness can protect against recombination and possible loss of a part of the drive element. It is also important to take into consideration the inherent evolutionary stability of integral gene drive components and mechanisms. For example, using a smaller protein than Cas9 in the drive mechanism could reduce the chance of mutations that inactivate the driver. Additionally, the endogenous homology-directed repair process required for CRISPR-Cas9 HEG function may be error-prone and lead to driver loss of function (Oberhofer et al., 2018). Conversely, miRNA or chromosomal rearrangement-based systems may be more evolutionarily stable because they do not rely on large exogenous proteins and error-prone repair pathways to function. Minimizing any fitness costs of the driver is also likely to reduce selection for resistance. Genomic insertion sites are associated with different costs, so transgenes inserted at a low-cost site may create less selection for resistance. It is also advisable to reduce pleiotropic impacts of gene drive, as this can create resistance alleles in some systems. For example, work on CRISPR-Cas9 HEGs suggests that expression of the nuclease in somatic cells can lead to off-target-site mutation, which reduces the spread of the driver (Beaghton, Hammond, Nolan, Crisanti, & Burt, 2019; Champer et al., 2017; Gantz et al., 2015; Hammond et al., 2017). It is also important to remember the ecology of the target species, as this may offer novel ideas for making a gene drive system durable, or reveal weaknesses only present in the field. For example, extremely high temperatures in Australia in 2019 may have impaired the transmission of the temperature-sensitive wMel Wolbachia strain used to combat dengue in Queensland mosquitoes. Synthetic drives designed in benign laboratory conditions may struggle in the field during extreme environmental conditions. If a gene drive is unable to penetrate some areas of an environment, due to conditions that prevent drive function or increase its costs, this could provide ideal circumstances for resistance to evolve. Finally, it is essential to choose the right gene drive for the job. Certain types of drive (e.g. translocations) are much less likely to face resistance, but may spread more slowly than drives that bias segregation (Buchman, Ivy, et al., 2018; Champer et al., 2016). Additionally, population suppression drives will face considerably stronger evolutionary pressures in terms of resistance than replacement drives (Eckhoff, Wenger, Godfray, & Burt, 2017; KaramiNejadRanjbar et al., 2018; Prowse et al., 2017). However, resistance will not always be an impediment to gene drive deployment. For example, if the goal is short-term transformation of a population, then long-term evolution of resistance against the gene drive may not matter (Unckless et al., 2017). Resistance in nontarget populations may make gene drives less likely to spread accidentally (Esvelt et al., 2014). If the target population carries only susceptible alleles, but surrounding populations have a mix of susceptible and resistant alleles, the driver may also be unable to successfully spread to nontarget populations (Sudweeks et al., 2019). | CONCLUSIONS The evolution of resistance is a key problem in the design and use of gene drives. It is a major challenge faced by natural gene drive systems but remains poorly understood. Resistance based on interference can arise very rapidly, within a single generation, but in some natural systems does not appear to have evolved despite long timeframes. As illustrated by this review, mechanisms of resistance are very diverse. Although we understand some of the mechanisms that can resist drivers, we rarely have a clear understanding of the forces underlying individual resistance pathways, nor the biological and ecological factors that determine which resistance type or mechanism is more likely to be selected in a given situation. In the context of applied control programmes using specific gene drive approaches,
Unusual Localization of Hysterothylacium Incurvum in Xiphias gladius (Linnaeus 1758) Caught in the Atlantic Ocean This study represents the first report of Hysterothylacium incurvum within swordfish (Xiphias gladius) heart chambers. Swordfish is a large pelagic teleost, considered one of the most appreciated fish worldwide. Among swordfish parasites, Anisakis sp. and Hysterothylacium sp. have been used to evaluate biological and ecological aspects of this teleost. Between 2021 and 2022, 364 X. gladius hearts, caught from the Atlantic Ocean (FAO 27.IXa and FAO 34 areas), were collected at the Milan fish market (Lombardy, Italy). Three specimens from FAO 27.IXa were positive for seven adult nematodes (P = 1.55%) within the heart chambers. Of these, three nematodes were found within the bulbus arteriosus and four in the ventricle. All parasites were stored in 70% ethanol and processed for parasitological and molecular analysis using the cox2, ITS region (ITS-I-5.8S-ITS-II), and rrnS markers. The analysis allowed us to identify the retrieved parasite as H. incurvum. According to our evaluation, the final localization is due to the movement of L3 larvae from the coelomic cavity to the bloodstream, with consequent development to the adult stage within the heart. Finally, the parasite localization, within non-marketable fish parts, does not pose a significant risk to consumers, also considering the low zoonotic potential of H. incurvum. Introduction Swordfish (Xiphias gladius, Linnaeus 1758) is a large pelagic teleost characterized by a worldwide distribution, mainly in tropical and temperate areas, including the Mediterranean Sea. Despite its intense migratory aptitude, separate stocks, both in the Ocean and in the Mediterranean Sea, have been reported [1][2][3]. The high commercial value of swordfish caught from the Ocean and the Mediterranean Sea has been reported [4]. Regarding oceanic swordfish populations, the parasitic fauna and associated parasite loads have been described [5], confirming a significant division between the North and South Atlantic Ocean stocks [6]. Some genetic differences between oceanic and Mediterranean X. gladius populations have been reported [7]. Since 1990, genetic differentiation, and some stock movements, between Atlantic and Mediterranean Sea stocks have been reported [8]. Parasites have been used to investigate various biological and ecological aspects of aquatic organisms, such as the integrity of food systems and the condition of marine ecosystems, also providing significant data about global climatic changes [9]. Nematode larvae belonging to the genera Anisakis and Hysterothylacium, heterogeneous parasites characterized by a complex life cycle, have been the most widely used as "biological tags" [10]. Xiphias gladius parasite fauna, such as crustaceans and trematodes, has been reported from the Indian and Pacific Oceans [11,12] and the Baltic Sea [13]. The swordfish metazoan fauna sampled from the Mediterranean Sea [14,15] and the North Atlantic Ocean [3,16] has been described and compared. Among the metazoan parasites, Hysterothylacium corrugatum, H. incurvum, and H. petteri adult specimens were found in swordfish gastrointestinal tracts in Mediterranean and oceanic areas [3,15]. Anisakidae larvae, genetically identified as Anisakis pegreffii and A. physeteris, were reported in the Mediterranean Sea [15]. A. simplex (sensu stricto), A. paggiae, A. brevispiculata, and A. physeteris larvae were found and molecularly identified in X. gladius coelomic organ serosae, caught off Portuguese Atlantic Ocean areas [3].
The copepod Pennella instructa, attached to the skin [16] and reaching as far as the heart chambers [17], has been described worldwide. Contracaecum sp. larvae, generally found in the teleost body cavity [18], pericardial sac [19], and coelomic organ serosae [20], were found and histologically described in the atrium and ventricle heart chambers of the freshwater species fathead minnow (Pimephales promelas) and nine-spined stickleback (Pungitius pungitius) caught from High Rock Lake (North Carolina, USA) [21]. Since the literature to date reports adult nematodes only in the gastrointestinal lumen, the present study aims to document the unusual localization of adult nematodes inside the heart chambers of swordfish caught in the Atlantic Ocean. Sample Collection and Parasitological Assessment From February 2021 to May 2022, 364 hearts of X. gladius were collected during official veterinary checks at the Milan fish market (Milan, Lombardy, Italy). All examined specimens were caught using hook-and-line fishing methods. In total, 193 fish were caught in the Atlantic, Northeast, Portuguese Waters East Area (FAO 27.IXa), while 171 were caught in the Atlantic, Eastern Central Area (FAO 34). After an external examination, all fish hearts were opened for routine official veterinary activity. Biological indices of body weight (BW) and total length (TL) were recorded for each specimen, and the mean weight (MW) and mean length (ML) were calculated. All retrieved parasites were immediately stored in 70% ethanol and transferred to the laboratory of Parasitology and Parasitic Diseases, University of Messina, for subsequent examinations, where all samples were divided into two stocks, identified as Area 1 (FAO 27.IXa, Northeast, Portuguese Waters East) and Area 2 (FAO 34, Atlantic, Eastern Central). Xiphias gladius specimens sampled from Area 1 had an MW of 50.5 kg and an ML of 160.1 cm, while specimens from Area 2 had an MW of 39.1 kg and an ML of 155.3 cm. Morphological evaluation was performed with an optical stereomicroscope (SteREO Discovery.V12, Zeiss, Jena, Germany) following the keys suggested by Bruce and Cannon [22], and all pictures were taken with a digital camera system (Axiocam MRc, Axiovision, Zeiss, Jena, Germany). Epidemiological indices of prevalence (P%), mean abundance (MA), and mean intensity (MI) were estimated following the definitions of Bush et al. [23]. DNA Extraction from Parasites Genomic DNA extraction from parasites was performed using the NucleoSpin Plant II kit (Macherey-Nagel, Düren, North Rhine-Westphalia, Germany), according to the manufacturer's instructions. A NanoDrop 2000 (Thermo Scientific; Wilmington, MA, USA) was used to measure UV absorbance at 260, 280, and 230 nm to verify DNA quantity and purity. The nuclear ribosomal ITS regions (ITS-I-5.8S-ITS-II), the small subunit of the mitochondrial ribosomal RNA gene (rrnS), and cytochrome c oxidase subunit II (cox2) were used as phylogenetic markers in the polymerase chain reaction (PCR). Polymerase Chain Reaction and Sequence Analysis PCR was performed using 500 ng of genomic DNA and the Taq DNA Polymerase Recombinant kit (Invitrogen, Carlsbad, CA, USA) in a 50 µL reaction volume on a Mastercycler ep gradient (Eppendorf, Hamburg, Germany).
For the nuclear ribosomal ITS region amplifications, the following PCR conditions were used: after an initial denaturation at 95 °C for 10 min, DNA was subjected to 35 cycles of 95 °C for 30 s, 52 °C for 40 s, and 72 °C for 75 s, with a final extension at 72 °C for 7 min. For the small subunit of the mitochondrial ribosomal RNA gene, cycling was as follows: initial denaturation at 95 °C for 10 min, followed by 40 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s, with a final extension at 72 °C for 7 min. Cytochrome c oxidase subunit II was amplified with 35 cycles of 95 °C for 30 s, 52 °C for 40 s, and 72 °C for 75 s. PCR products were resolved by 1.5% agarose gel electrophoresis to verify product size; the fragments were then purified using the E.Z.N.A. Gel Extraction Kit (Omega Bio-tek, Norcross, GA, USA), following the manufacturer's protocol. DNA sequencing of the purified fragments was performed in both forward and reverse directions on an Applied Biosystems 3730 DNA Analyzer (Thermo Fisher Scientific, Waltham, MA, USA), using the same primers used for amplification (Table 1). The DNA sequences obtained from the isolates (XG1-2022) were analyzed by BLASTN similarity search against the National Center for Biotechnology Information (NCBI; https://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 12 September 2022) database to calculate the statistical significance of the matches, and alignments were performed using the ClustalW algorithm (https://www.genome.jp/tools-bin/clustalw, accessed on 13 September 2022). Phylogenetic analyses were performed using MEGA X [24], and maximum likelihood (ML) trees were constructed by selecting the GTR + G + I nucleotide substitution model with the bootstrap method (1000 replications). Results Three of the 193 specimens caught from Area 1 were positive for the presence of adult nematodes inside the heart chambers (n = 7; P = 1.55%, MA = 0.04, MI = 2.33); of these, three nematodes were found inside the bulbus arteriosus and four in the ventricle (Figure 1); three of the nematodes were males (2.5 to 3.7 cm) and four were females (6 to 11 cm). The morphological characteristics of the retrieved parasites allowed us to identify them as Hysterothylacium sp. Molecular Identification of Hysterothylacium sp. All specimens showed positive amplification for the ITS regions, rrnS, and cox2 genes. The nucleotide sequences of the amplified products of each gene were identical among biological replicates. The representative DNA sequences for the ITS regions, rrnS, and cox2 were submitted to GenBank (accession numbers ITS: OP675472, rrnS: OP675473, and cox2: OP675471, respectively). The representative ITS sequences showed 98.27% similarity to Hysterothylacium sp. (MT365536.1, E value 0.0 and query cover 90%), with 7 nt of difference. The rrnS sequences showed 90.76% similarity to Hysterothylacium sp. (MF140352.1, E value 2E-154 and query cover 93%), with 39 nt of difference. The obtained cox2 sequences showed 97% similarity to H. incurvum (MW456073.1, E value 0.0, and query cover 92%), with 18 nt of difference. These findings indicated that no ITS or rrnS sequences from H. incurvum were available in GenBank to date.
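The indices reported above follow directly from the raw counts (193 Area 1 hearts examined, 3 infected, 7 nematodes in total); a minimal sketch of the Bush et al. [23] definitions reproduces them.

```python
# Recompute prevalence (P%), mean abundance (MA) and mean intensity (MI)
# from the counts reported in the text.

def indices(n_examined, n_infected, n_parasites):
    prevalence = 100.0 * n_infected / n_examined   # P%: % of hosts infected
    mean_abundance = n_parasites / n_examined      # MA: parasites per examined host
    mean_intensity = n_parasites / n_infected      # MI: parasites per infected host
    return prevalence, mean_abundance, mean_intensity

p, ma, mi = indices(193, 3, 7)
print(f"P = {p:.2f}%  MA = {ma:.2f}  MI = {mi:.2f}")
# -> P = 1.55%  MA = 0.04  MI = 2.33, matching the values in the text.
```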
Phylogenetic analyses of our sequences together with the ITS, rrnS, and cox2 sequences from Ascaridoidea previously deposited in GenBank showed that the cox2 marker was the most effective for species identification, as the sequences from our isolates fell in the same clade as H. incurvum (MW456073.1), supported by a value of 100 at the node, and in a branch separate from all the cox2 sequences of Hysterothylacium sp. retrieved from GenBank (Figures 2-4). Discussion The present study represents the first report of Hysterothylacium incurvum in the heart chambers of X. gladius, considered one of the most appreciated fish species worldwide. Hysterothylacium sp. is one of the parasites most frequently isolated from swordfish [3,[28][29][30]; our molecular evaluation, compared with the Hysterothylacium sp. sequences reported by Garcia et al. [3], allowed us to identify all the specimens as H. incurvum, adding significant information about the species that parasitize swordfish in the studied area. The notable finding of adult Hysterothylacium inside the heart chambers of X. gladius highlights a characteristic adaptation of the parasite to the high blood pressure at the infection site. The only previous finding of Contracaecum sp. larvae inside the heart chambers of freshwater fish [21] did not show any host inflammatory response. In the present study, none of the positive specimens showed a reduction in body weight, suggesting a complete host-parasite adaptation. Furthermore, capture by hook and line suggests that the predatory activity characteristic of X. gladius was not reduced. According to Kabata [31], during the physiological intra vitam movement between the coelomic cavity and muscle tissue, early-stage larvae (in the present study, L3 larvae) can enter the ventral aorta and reach the bulbus arteriosus and ventricle of the heart, causing bloodstream occlusion in the case of massive infection. In large healthy fish, the heart chambers can naturally adapt, modifying their structure in response to occlusive injuries [31,32]. In the case reported here, no macroscopically appreciable structural adaptation was observed, probably due to the different parasite size and body form compared with the aforementioned cases. Usually, Hysterothylacium sp. larvae have been used to discriminate fish stocks between the Ocean and the Mediterranean Sea [15]. Our study confirmed ocean stock heterogeneity, adding information on the body distribution and possible intra vitam migration of Hysterothylacium sp. larvae in X. gladius. According to Kabata [31], only during a massive infection could the presence of parasites create tissue damage, followed by host physiological adaptations, as also reported by Schuurmans Stekhoven [32]. The low parasitic load per specimen described in the present study, also considering the large caliber of the ventral aorta and the size of the heart chambers, suggests a partial larval migration from the coelomic cavity to the bloodstream. Furthermore, the mixed infections described by Kabata [31] as an additional cause of occlusive damage in the heart chambers cannot be considered in the present study; indeed, morphological evaluation, associated with molecular analysis, allowed us to identify the retrieved parasite as H. incurvum.
The swordfish heart involvement during parasite infection previously reported [17] was significantly different from our case, as Pennella instructa reaches heart tissues after skin and muscle penetration; in our case, we can speculate that adult H. incurvum developed in the bloodstream and heart chambers after L3 larvae penetrated other body districts. Among the three phylogenetic markers analyzed in this study, the cox2 gene was the most suitable marker for identifying H. incurvum in X. gladius, thus contributing to the characterization of these parasites in fish. A multi-locus approach would not have been effective, as no ITS region or rrnS sequences from H. incurvum had been deposited in GenBank to date; therefore, the ITS and rrnS sequences obtained in this study provide new molecular markers for the identification of H. incurvum in future studies. Conclusions The present study improves parasitological knowledge of the host-parasite relationship between H. incurvum and X. gladius. In particular, this paper provides an update on the localization and developmental stage of this parasite in swordfish. The observed localization of H. incurvum, involving non-edible and non-marketable parts, may represent a negligible risk for consumers, also considering the low zoonotic potential of this parasite [33]. Author Contributions: G.D.B., I.C. and G.G. conceived and designed the study. I.C. and R.M. performed the veterinary examinations and sampling. G.D.B. and G.G. carried out the parasitological analysis. A.G. and K.R. performed the molecular analysis. G.G. and A.G. critically reviewed the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: Our study was planned on internal organs sampled from fish markets. For this reason, according to national decree-law 26/2014 (2010-63-EU directive), no institutional review board statement was required. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
Pre-mRNA splicing alters mRNP composition: evidence for stable association of proteins at exon-exon junctions. We provide direct evidence that pre-mRNA splicing alters mRNP protein composition. Using a novel in vitro cross-linking approach, we detected several proteins that associate with mRNA exon-exon junctions only as a consequence of splicing. Immunoprecipitation experiments suggested that these proteins are part of a tight complex around the junction. Two were identified as SRm160, a nuclear matrix-associated splicing coactivator, and hPrp8p, a core component of U5 snRNP and spliceosomes. Glycerol gradient fractionation showed that a subset of these proteins remains associated with mRNA after its release from the spliceosome. These results demonstrate that the spliceosome can leave behind signature proteins at exon-exon junctions. Such proteins could influence downstream metabolic events in vivo such as mRNA transport, translation, and nonsense-mediated decay. Eukaryotic mRNAs are generated by a series of metabolic events including transcription, capping, splicing, and polyadenylation. Once made, mRNAs are transported to the cytoplasm where they undergo translation and, ultimately, decay. It is becoming increasingly clear that all of these metabolic steps are mechanistically linked in intact cells. For example, recent evidence suggests that the carboxy-terminal domain (CTD) of RNA polymerase II delivers proteins required for capping, splicing, and polyadenylation to the nascent transcript in vivo (Cho et al. 1997; McCracken et al. 1997a,b; Misteli and Spector 1999) and enhances splicing and polyadenylation in vitro (Hirose and Manley 1998; Hirose et al. 1999). Moreover, the nature of the promoter can affect alternative splicing patterns of transcripts (Cramer et al. 1999). Also, proteins acquired in the nucleus are essential for proper localization of certain mRNAs in the cytoplasm (Lall et al. 1999). The action of spliceosomes also can influence downstream mRNA metabolism. The presence and position of pre-mRNA introns can affect the efficiency of mRNA transport (Chang and Sharp 1989; Legrain and Rosbash 1989; Pasquinelli et al. 1997; Saavedra et al. 1997; Luo and Reed 1999), the efficiency of mRNA translation (Matsumoto et al. 1998), and the rate of mRNA decay (Maquat 1995, 1996; Li et al. 1997; Hentze and Kulozik 1999). Studies of nonsense-mediated mRNA decay indicate that mammalian cells can distinguish authentic from premature stop codons by their positions relative to the 3′-most exon-exon junction position in mRNA (Cheng et al. 1994; Carter et al. 1996; Nagy and Maquat 1998; Thermann et al. 1998; Zhang et al. 1998a,b).
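The positional rule just described can be stated in a few lines of code. This is a deliberately simplified sketch, not an implementation of the cellular mechanism: it ignores the finer boundary effects quantified by Nagy and Maquat (1998), and all coordinates in the example are hypothetical.

```python
# Sketch of the positional rule for nonsense-mediated decay: a stop
# codon is treated as premature when it lies upstream (5') of the
# 3'-most exon-exon junction of the mRNA.

def is_premature(stop_codon_pos, junction_positions):
    """True if the stop codon sits 5' of the last exon-exon junction."""
    if not junction_positions:      # intronless mRNA: no junction mark
        return False
    return stop_codon_pos < max(junction_positions)

# A transcript with exon-exon junctions at nt 300 and 700 (hypothetical):
print(is_premature(550, [300, 700]))   # True  -> predicted NMD substrate
print(is_premature(950, [300, 700]))   # False -> authentic terminator
```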
The most probable means by which pre-mRNA splicing influences subsequent mRNA metabolism is by altering the structure of the messenger ribonucleoprotein particle (mRNP) (Nakielny and Dreyfuss 1997; Izaurralde and Adam 1998; Luo and Reed 1999). Such alterations could consist of covalent nucleotide modifications (Rottman et al. 1994) or noncovalent associations of specific proteins, either of which could stay with an mRNP throughout all or a portion of its lifetime. In fact, Luo and Reed (1999) demonstrated recently that an mRNP generated by splicing in vitro has altered mobility in native gels and is exported more rapidly and efficiently from Xenopus laevis nuclei than an mRNP not generated by splicing. A number of proteins known to be involved in pre-mRNA splicing, such as hnRNP A1 (Dreyfuss et al. 1993; Nakielny and Dreyfuss 1997) and a subset of the SR splicing factors (Cáceres et al. 1998), shuttle between the nucleus and cytoplasm, making them potential mRNP components. HnRNP A1 clearly functions in mRNA export, and electron microscope tomography has shown that an hnRNP A1-related protein accompanies Balbiani ring granules, giant mRNPs of the dipteran Chironomus tentans, through nuclear pores and into cytoplasmic polysomes (Visa et al. 1996). However, despite its demonstrated importance, the extent to which mRNP composition is altered by splicing is currently unknown. We report direct evidence that pre-mRNA splicing alters mRNP protein composition. We employed a new in vitro cross-linking strategy designed to identify proteins left behind by the splicing machinery specifically at mRNA exon-exon junctions. Depending on the nature of the cross-linker, the position of the cross-linker relative to the exon-exon junction, and the pre-mRNA sequence, we observed at least four different proteins that cross-link only to mRNAs generated by splicing in HeLa-cell nuclear extract. Immunoprecipitations suggested that these four proteins are part of a tight complex around the exon-exon junction. Two of the proteins were identified as SRm160, a nuclear matrix-associated splicing coactivator subunit, and hPrp8p, a highly conserved U5 snRNP protein. Glycerol gradient fractionation indicated that most of the proteins that cross-linked in a splicing-dependent manner remain associated with mRNA after its release from the spliceosome. These data provide a new view of the dynamic nature of protein-RNA interactions at exon-exon junctions after splicing and identify proteins that might influence subsequent mRNA fate. Cross-linking strategy To identify proteins associated with mRNA specifically as a consequence of splicing, we constructed single intron-containing pre-mRNAs having two site-specific modifications: a photoreactive group near the intron-proximal end of one exon, and a single 32P at or near the opposite intron-exon boundary. In vitro splicing of such pre-mRNAs in HeLa-cell nuclear extract juxtaposes both groups at the exon-exon junction in the mRNA product (Fig. 1, right). After irradiation at a wavelength appropriate for the photoreactive group, followed by ribonuclease treatment and then electrophoresis through a denaturing gel, only proteins attached to the cross-linkable moiety at the exon-exon junction, and therefore associated with the 32P, are detectable by autoradiography. The first substrate we prepared was PIP:E1(B) pre-mRNA, a derivative of PIP85.B pre-mRNA that contained a benzophenone moiety (B) (MacMillan et al. 1994; Moore and Query 1998) at the penultimate nucleotide of exon 1 (E1) and a single 32P at the beginning of E2 (Fig. 1, right; see Materials and Methods for sequence and details). We also synthesized two controls: the corresponding mRNA, PIP:E1(B) mRNA (Fig. 1, left), and a control PIP:E1(B) pre-mRNA (Fig. 1, center), identical to experimental PIP:E1(B) pre-mRNA except that the 32P was positioned at the 5′ splice site. The control mRNA allowed for identification of proteins that interact with the benzophenone independent of splicing. The control pre-mRNA permitted identification of proteins that interact with the benzophenone only before lariat formation, because in this construct the benzophenone and 32P became separated during the first step of splicing and, therefore, were not juxtaposed in spliced mRNA.
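The design logic of the three constructs can be summarized as a small truth table. This sketch simply restates the scheme of Figure 1; the position labels are informal and the verdicts are the expected outcomes, not data.

```python
# Which constructs place the benzophenone (B) and the single 32P label
# next to each other at an exon-exon junction? Only then does a protein
# cross-linked via B carry the 32P after RNase digestion.

constructs = {
    # name: (position of B, position of 32P, is the RNA spliced?)
    "experimental pre-mRNA": ("end of exon 1", "start of exon 2", True),
    "control pre-mRNA":      ("end of exon 1", "5' splice site",  True),
    "control mRNA":          ("end of exon 1", "start of exon 2", False),
}

for name, (b_pos, p_pos, spliced) in constructs.items():
    if spliced and p_pos == "start of exon 2":
        verdict = "B and 32P juxtaposed at the junction only after splicing"
    elif not spliced and p_pos == "start of exon 2":
        verdict = "B and 32P already adjacent without splicing"
    else:
        verdict = "32P separated from B at the first step of splicing"
    print(f"{name:24s}: {verdict}")
```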
Splicing and cross-linking of modified RNAs To determine splicing kinetics and spliced product stability, each benzophenone-containing RNA was incubated under splicing conditions in HeLa-cell nuclear extract (Fig. 2A). Both pre-mRNAs spliced to similar levels, with splicing intermediates evident after 30 min and little additional product accumulation after 90 min (Fig. 2A). Control mRNA was degraded significantly over the course of the incubation (Fig. 2A, lanes 1-5). This facilitated differentiation of proteins that cross-link independent of splicing from those that cross-link dependent on splicing, because the amount of mRNA that was or was not generated by splicing was comparable at both 90 and 120 min (Fig. 2A, cf. lanes 4 and 5 with 14 and 15). Neither splicing efficiency nor RNA stability was affected by the presence of the photoreactive group (data not shown; MacMillan et al. 1994). To induce cross-links, each splicing reaction was incubated under splicing conditions (30°C) for 0, 45, or 90 min and then irradiated with a 302-nm lamp for 20 min on ice (4°C). No additional spliced product accumulated during the 4°C incubations, indicating that splicing was not ongoing during the period of UV irradiation (data not shown). Samples were subsequently treated with RNase A and analyzed by SDS-PAGE (Fig. 2B). Numerous labeled bands were observed with all three RNAs. Because no cross-linking was observed when the reactions were treated with proteinase K (Fig. 2B, lanes 4, 9, 14) or when RNAs lacking the benzophenone were used [although UV irradiation alone was sufficient to cross-link a small amount of a 220-kD protein to the 5′ splice site (see below)], bands detected in other reactions represented proteins specifically cross-linked to the photoreactive group. Multiple proteins cross-linked to the experimental PIP:E1(B) RNA after 45 and 90 min of splicing, times by which both modifications were juxtaposed at the mRNA exon-exon junction (Fig. 2B, lanes 12, 13). The pattern of cross-linked proteins was the same regardless of whether RNase treatment was performed before protein denaturation or after boiling samples in the presence of 0.5% SDS (data not shown). However, the majority of these proteins were nonspecific because they also cross-linked to control mRNA, control pre-mRNA, or both. It should be noted that similar proportions of nonspecific protein cross-links have also been observed by others when highly reactive photo-cross-linking moieties, such as benzophenone, were employed with no affinity purification step after cross-linking (Wyatt et al. 1992; MacMillan et al. 1994; Moore and Query 1998). Nonetheless, careful comparison of the pattern obtained with or without splicing revealed four proteins, with apparent molecular masses of ∼220, 160, 50, and 20 kD, that clearly cross-linked in a splicing-dependent manner [Fig. 2B, bands marked with an asterisk in lanes 11-13 and 16-18 (darker exposure)]. These proteins did not cross-link to control mRNA at any point (Fig. 2B, lanes 1-3) or to experimental RNA prior to spliceosome assembly (Fig. 2B, lanes 11, 16). Polyacrylamide gels of 7.5%, 10%, and 16%, which allowed for greater separation in the >150-, 50-, and 20-kD ranges, respectively, corroborated these conclusions (Fig. 2C).
Moreover, 32P labeling of the specific proteins increased with longer splicing incubations, as would be expected for proteins that interact with the spliced mRNA product (Fig. 2B, lanes 12, 13; data not shown). When splicing of experimental pre-mRNA was inhibited by ATP omission or by a 2′-O-methyl oligonucleotide complementary to U2 snRNA, none of the four splicing-dependent cross-links was observed (Fig. 2D). A 220-kD protein also interacted with the 5′ splice site prior to lariat formation but dependent on spliceosome assembly (Fig. 2B, cf. lane 6 with lanes 7 and 8; data not shown), as did a protein of approximately 70 kD (band marked with a solid circle in Fig. 2B, lanes 7, 8). As would be expected of proteins interacting with the photoreactive group before the first step of splicing, the extent of p220 and p70 cross-linking to control pre-mRNA decreased over time as the 32P label became separated from the benzophenone by 5′ splice site cleavage (Fig. 2B, lanes 7, 8; data not shown). Taken together, the above results demonstrate clearly that the protein cross-linking pattern around the exon-exon junction is different for an mRNA generated by splicing and one not undergoing this process. For PIP:E1(B) mRNA, four proteins with mobilities of ∼220, 160, 50, and 20 kD cross-linked to the exon-exon junction specifically as a consequence of splicing. Immunoprecipitation of cross-linked proteins With the aims of separating the proteins that cross-linked dependent on splicing from those that cross-linked nonspecifically, as well as identifying the specific proteins, we performed a series of immunoprecipitations (IPs). Initially, we employed two monoclonal antibodies (mAbs), NM4 and B1C8, which had been shown previously to preferentially coimmunoprecipitate exon-containing RNA species, including the mRNA product, from in vitro splicing reactions (Blencowe et al. 1995). NM4 recognizes the SR-related, nuclear matrix splicing coactivator subunits SRm300 and SRm160 (Blencowe et al. 1998; Eldridge et al. 1999) and most of the prototypic SR protein splicing factors (Zahler et al. 1992; Blencowe et al. 1995), whereas B1C8 specifically recognizes SRm160 (Blencowe et al. 1998). Splicing reactions containing either experimental or control PIP:E1(B) RNAs were immunoprecipitated after UV irradiation. To examine the extent to which the IP pattern was dependent on intact RNA, RNase treatment was performed either before or after IP. RNase treatment prior to IP would be expected to eliminate any proteins complexed with the mAb epitope only by virtue of being bound to the same RNA. When mAb NM4 was employed and RNase treatment followed IP, numerous cross-linked proteins were precipitated from all three reactions (Fig. 3A, lanes 2, 6, 10). However, when RNase treatment preceded the IP, all four proteins that cross-linked to experimental pre-mRNA in a splicing-dependent manner were efficiently precipitated, whereas other cross-linked proteins were not (Fig. 3A, cf. lane 11 with lanes 9 and 10). Moreover, the relative intensities of the four splicing-dependent cross-links were not affected by IP. No proteins that cross-linked to control mRNA were precipitated when RNase treatment preceded the IP (Fig. 3A, lane 3), and only the spliceosome-dependent 220- and 70-kD proteins that cross-linked to control pre-mRNA were precipitated under these conditions (Fig. 3A, lane 7). The simplest interpretation of the above results is that the four proteins that cross-link to the PIP:E1(B) exon-exon junction dependent on splicing form a tight complex that is stable to RNase digestion, and the complex contains at least one protein bearing an NM4 epitope.
However, the possibility that the nature of the complex varies between mRNA molecules, so that individual molecules are bound by at least one protein bearing an NM4 epitope and any or all of the remaining three proteins, cannot be discounted. Remarkably, IPs performed on experimental pre-mRNA using mAb B1C8 after RNase treatment, although less efficient overall, yielded a very similar pattern of precipitated proteins to that obtained with mAb NM4 (Fig. 3B, cf. lanes 4 and 7). Because B1C8 is specific for SRm160, SRm160 must be a component of the protein complex at the exon-exon junction. To test whether the cross-linked 160-kD protein was SRm160, IPs were performed next with the two mAbs after both RNase digestion and protein denaturation. This significantly reduced the number of cross-links observed (Fig. 3B, lanes 8 and 9). Under the most stringent denaturing conditions, the only splicing-dependent cross-linked protein immunoprecipitated with either mAb was the 160-kD protein (Fig. 3B, lanes 6, 9), identifying it as SRm160. Proteins with apparent molecular masses of ∼55, 40, and 25 kD that cross-linked independent of splicing were also immunoprecipitated by mAb NM4 after stringent protein denaturation (Fig. 3B, lane 6). This raised the possibility that these proteins were SRp55, SRp40, and either SRp30 or SRp20. We have confirmed that the 25-kD protein is SRp20, since SRp20 antiserum (Neugebauer and Roth 1997) immunoprecipitated the ca. 25-kD protein, and this protein comigrated with SRp20 when assayed by Western blot hybridization (data not shown). Because the 220-kD protein was not immunoprecipitated by mAb NM4 after protein denaturation, it was unlikely to be SRm300 (Fig. 3B, lane 6). Another possibility was hPrp8p, a 279-kD component of U5 snRNP, because a similar-sized protein also cross-linked to the 5′ splice site after spliceosome assembly (Fig. 2B, lanes 7, 8), and other studies have shown that Prp8p cross-links in the vicinity of both the 5′ splice site prior to lariat formation (Wyatt et al. 1992; Reyes et al. 1996) and the 3′ splice site prior to exon ligation (Teigelkamp et al. 1995; Umen and Guthrie 1995; Chiara et al. 1996, 1997). Of all the proteins cross-linked to experimental PIP:E1(B) RNA, only the 220-kD band was immunoprecipitated with hPrp8p antiserum after stringent protein denaturation (Fig. 3C, lane 8). This protein was not immunoprecipitated by preimmune serum (Fig. 3C, lane 9). Therefore, the 220-kD protein is hPrp8p. As expected, the 220-kD protein that cross-linked to the 5′ splice site prior to lariat formation (Fig. 2B, lanes 7, 8) is also hPrp8p (Fig. 3C, lane 5). No proteins were immunoprecipitated with hPrp8p antiserum from cross-linking reactions containing control mRNA (Fig. 3C, lane 2). In summary, the IP experiments indicate that proteins that cross-link to the PIP:E1(B) mRNA exon-exon junction dependent on splicing are part of a tight RNase-resistant complex. Additionally, two of these proteins are SRm160 and hPrp8p. The splicing-dependent ∼50 and 20 kD proteins remain unidentified. To test the generality of these observations, we also prepared two additional pre-mRNAs, PIP:E1(S) and PIP:E2(S), in which a 4-thio-dU photoreactive group was positioned on the exon 1 or exon 2 side of the junction, respectively (Fig. 4A). Both new pre-mRNAs spliced with similar kinetics to PIP:E1(B) pre-mRNA (data not shown). Like PIP:E1(B), the pattern of proteins that cross-linked to the exon-exon junctions of both PIP:E1(S) and PIP:E2(S) mRNAs was dependent on whether or not the mRNA was generated by splicing (Fig. 4B). Overall, the cross-linking patterns observed with experimental and control PIP:E1(S) RNAs were quite similar to those obtained for their respective PIP:E1(B) constructs (cf. Fig. 4B, lanes 1-3, with Figs. 2 and 3).
Proteins migrating with apparent molecular weights of ∼220, 160, and 20 kD whose cross-linking depended on splicing were clearly observable (Fig. 4B, cf. lane 3 with lanes 1 and 2). Because the 220-kD protein also cross-linked to control PIP:E1(S) pre-mRNA (Fig. 4B, lane 2), this protein is most likely hPrp8p. However, there was no discernible protein of 50 kD that specifically cross-linked to experimental PIP:E1(S) RNA. It is possible that the 50-kD protein that cross-linked to experimental PIP:E1(B) RNA is relatively distant from the RNA or is sequence-specific. When the 4-thio-dU was moved to exon 2 [PIP:E2(S)], the overall cross-linking pattern was noticeably different. Although a splicing-specific p220 was still observed, another splicing-dependent protein migrated at ca. 55 kD. These results indicate that splicing alters the complement of proteins associated with exon-exon junctions regardless of the nature of the photoreactive group, the position of this group relative to the exon-exon junction, or the exact sequence of the 5′ exon, even though changing the position or the nature of the photoreactive group can affect exactly which proteins are detected by this method. Glycerol gradient fractionation Although the experiments presented above identified proteins that cross-link to mRNA specifically as a consequence of splicing, they did not reveal which, if any, remain associated with mRNA after its release from the spliceosome. Previous studies have shown that mRNPs can be separated from spliceosomes by glycerol gradient fractionation (Cheng and Abelson 1987; Konarska and Sharp 1987). To determine the sedimentation profile of PIP RNAs and cross-linked proteins, experimental PIP:E1(B) pre-mRNA was incubated under splicing conditions for 2 hr, UV irradiated, and fractionated in a 10%-30% glycerol gradient. Fractions were then divided and the RNAs and cross-linked proteins analyzed in appropriate denaturing gels (Fig. 5A). Spliceosomes sedimented toward the bottom of the gradient (fractions 8-15) and were defined by the presence of both pre-mRNA and lariat intermediate, a product of the first step of splicing (Fig. 5A, bottom). The mRNA migrated as a broad peak toward the top of the gradient (fractions 3-8), indicating that nearly all of it had been released from the spliceosome. Notably, two of the mRNA-containing fractions were nearly free of both pre-mRNA and splicing intermediates (fractions 3 and 4). All proteins that cross-linked independent of splicing cosedimented with mRNA. Cross-linked SRm160, p50, and p20 also cosedimented with mRNA. In contrast, cross-linked hPrp8p seemed to sediment in two peaks of approximately equal intensity, one comigrating with released mRNA and another with intact spliceosomes. This suggests that only a portion of mRNA that was cross-linked to hPrp8p was released from the spliceosome. Furthermore, because only a small fraction of total mRNA cosedimented with spliceosomes, cross-linking of hPrp8p may be significantly more efficient prior to mRNA release. Interestingly, however, Northern blot analysis demonstrated that the peak of cross-linked hPrp8p that did cofractionate with released mRNA sedimented at a slightly lower velocity than the peak of U5 snRNA (Fig. 5A, cf. U5 snRNA, mRNA, and cross-linked hPrp8p). This raised the intriguing possibility that hPrp8p cross-linked to released PIP mRNA was not associated with fully intact U5 snRNP.
In theory, cross-linked proteins cosedimenting with released mRNA could be true components of PIP mRNP or could be present only because their association with the mRNA had been artificially stabilized by cross-linking. This issue was particularly pertinent to hPrp8p because U5 snRNA has been shown to remain associated with the spliced intron product in a spliceosome-sized complex after mRNA release (Cheng and Abelson 1987; Konarska and Sharp 1987). To differentiate between these possibilities, we repeated the sedimentation analysis except that cross-linking was performed after sedimentation (Fig. 5B). Under these conditions, cross-linked hPrp8p exhibited a bimodal distribution similar to that observed in Figure 5A. Similarly, cross-linked SRm160 and p50 cosedimented with mRNA as above. However, the distribution of cross-linked p20 was somewhat altered by the fractionation procedure, with its peak intensity moving to slightly higher velocity fractions. This suggests that p20 is not associated as stably with spliceosome-released mRNA as the other splicing-specific proteins. That protein-RNA associations could change within the gradient was confirmed by the presence of a strong, but previously unobserved, cross-link evident at ca. 75 kD (Fig. 5B, dot). Thus, association of PIP mRNA with a previously undetected protein became apparent during sedimentation. Taken together, results obtained by glycerol gradient fractionation indicate that hPrp8p, SRm160, and p50 are all genuine components of PIP mRNP produced in vitro. Discussion We provide evidence that some of the proteins that associate with mRNA exon-exon junctions in vitro do so only as a consequence of splicing. Several of these proteins form a stable complex that survives RNase treatment. Below we discuss how the identified proteins relate to the process of splicing and the current picture of mRNP structure and function. Properties of proteins associated with exon-exon junctions specifically as a consequence of splicing We observed four proteins that cross-linked to the exon-exon junction of PIP:E1(B) mRNA in a splicing-dependent manner. So far, we have identified two of these as SRm160 and hPrp8p. SRm160 is a nuclear matrix antigen (Blencowe et al. 1995, 1998) that contains multiple RS repeats but, unlike prototypic SR proteins (Zahler et al. 1992), lacks an RNA recognition motif (RRM). However, SRm160 had not been shown previously to cross-link to RNA (Blencowe et al. 1998). Instead, previous studies showed that SRm160 forms a complex with another nuclear matrix antigen, SRm300, and this complex serves to promote splicing through interactions with SR protein family members. Notably, association of SRm160/300 with specific pre-mRNAs is dependent on SR proteins and U1 snRNP and is stabilized by U2 snRNP (Blencowe et al. 1998; Eldridge et al. 1999). This is consistent with our results showing that SRm160 only cross-links to an exon-exon junction that has been formed by the spliceosome. Moreover, our results suggest that SRm160 becomes closely associated with exonic RNA only after splicing, because SRm160 did not cross-link to the control pre-mRNA that allowed detection of protein-RNA interactions prior to lariat formation (Fig. 2B). If the 160-kD cross-linked protein observed with experimental PIP:E1(S) RNA (Fig. 4B) is also SRm160, then it is closely associated with the mRNA and its ability to cross-link to the 5′ side of the exon-exon junction is not highly sequence-dependent.
hPrp8p cross-linked to the 5′ exon both before lariat formation and after exon ligation (Fig. 2B; data not shown). It also cross-linked to the 3′ exon after exon ligation in the PIP:E2(S) construct (Fig. 4B). This remarkably conserved U5 snRNP protein (Hodges et al. 1995) is a core component of the spliceosome that enters splicing complexes as part of the U4/U6:U5 tri-snRNP. Therefore, its association with splicing substrates, intermediates, and products requires spliceosome assembly. The many documented interactions between Prp8p and sites in both pre-mRNA and snRNAs important for the transesterification reactions implicate Prp8p as a key active site component (Reyes et al. 1996; Collins and Guthrie 1999; Siatecka et al. 1999, and references therein). Previous cross-linking studies have placed Prp8p at the 5′ splice site before lariat formation, consistent with our observations with the control pre-mRNA (Fig. 2B), as well as in the vicinity of the 3′ splice site before exon ligation (Wyatt et al. 1992; Teigelkamp et al. 1995; Umen and Guthrie 1995; Chiara et al. 1996, 1997; Reyes et al. 1996). Our results now indicate that some of these interactions are maintained after exon ligation and can persist in the mRNP complex after spliceosome release (see below). In our cross-linking reactions, we also observed numerous proteins that associated with the mRNA independent of splicing. So far, we have only identified one as SRp20 (Zahler et al. 1992; Neugebauer and Roth 1997). This is consistent with previous studies showing that recombinant SRp20 can bind RNA independent of other splicing components (Cavaloc et al. 1999; Schaal and Maniatis 1999). Some splicing-dependent proteins remain associated with PIP mRNA after spliceosome release To determine whether any of the cross-linked proteins observed here were candidates for bona fide mRNP components, we also examined their association with PIP mRNA that had been released from the spliceosome. It was established previously that mRNA release is an active process requiring ATP hydrolysis by the Prp22/HRH1 RNA helicase (Company et al. 1991; Ohno and Shimura 1996; Schwer and Gross 1998; Wagner et al. 1998). Release occurs efficiently in HeLa-cell splicing reactions, and the resultant mRNP can be separated from spliceosomes by glycerol gradient fractionation (Konarska and Sharp 1987). Because most of the PIP:E1(B) mRNA present in splicing reactions after 2 hr had been released from spliceosomes (Fig. 5), it seemed likely that at least some of the proteins that cross-linked to the exon-exon junction at this time were mRNP components. This was clearly true for SRm160 and p50, which cosedimented exclusively with free mRNP regardless of whether cross-linking was performed before or after sedimentation (Fig. 5, cf. A and B). In contrast, cross-linked hPrp8p was detected both in fractions containing spliceosomes and in fractions containing mRNP (Fig. 5A,B). The presence of hPrp8p in mRNP was unexpected because it is known to interact directly with U5 snRNA (Dix et al. 1998), which remains bound to the spliced intron product after mRNA release (Cheng and Abelson 1987; Konarska and Sharp 1987). However, our observation that hPrp8p cross-linked to PIP mRNA even after glycerol gradient fractionation indicates that it is a component of the PIP mRNP. This mRNP is of somewhat lower density than free U5 snRNP, opening the possibility that hPrp8p in mRNP is no longer associated with fully intact U5 snRNP.
Interestingly, upon U5 snRNP dissociation in vitro, hPrp8p can be found in RNA-free complexes with other U5 snRNP proteins, including a 200-kD RNA helicase, a 116-kD EF-2 homolog, and a 40-kD WD-repeat-containing protein (Achsel et al. 1998). Thus, it is possible that these other proteins are also mRNP components. Evidence for a stable protein complex at the mRNA exon-exon junction All four proteins that cross-linked to PIP:E1(B) mRNA in a splicing-dependent manner were immunoprecipitable with mAbs NM4 and B1C8, both before and after RNase treatment (Fig. 3A,B). Among these proteins, hPrp8p does not contain an epitope recognized by these mAbs, and p50 and p20 also likely lack an appropriate epitope because they failed to be precipitated when both RNase digestion and protein denaturation preceded the IP. This suggests that all are part of a complex that remains bound to the exon-exon junction as a signature of splicing. The glycerol gradient fractionation results indicate that at least hPrp8p, SRm160, and p50 remain stably associated with PIP:E1(B) mRNA after its release from the spliceosome (Fig. 5A,B). While it remains to be determined whether or not the proteins we identified here typify all mRNPs, the type of complex observed could serve as a general mark of exon-exon junctions. It will now be of interest to determine how long these proteins remain associated with mRNA. Interestingly, SRm160 can be found in both nuclear and cytoplasmic fractions and can be visualized in nuclear tracks that branch and often terminate near nuclear pore complexes (J. Nickerson, pers. comm.). This suggests that SRm160 may maintain its mRNA association even after mRNA export to the cytoplasm. Identification and functional characterization of other signature proteins will help to extend our understanding of how pre-mRNA splicing influences downstream mRNA metabolic events in vivo. Materials and methods Synthesis of doubly modified RNAs PIP:E1(B) experimental pre-mRNA was synthesized by splinted ligation of four separate RNA pieces (Moore and Query 1998): (1) a GpppG-capped transcript corresponding to the 5′ end of E1; (2) a synthetic oligomer containing a convertible adenosine near its 3′ end (MacMillan et al. 1994); (3) a transcript comprising the entire intron; and (4) a 5′-32P-labeled transcript comprising E2. Each fragment is indicated within the pre-mRNA sequence below (where bold nucleotides represent exons, the asterisk specifies the convertible adenosine, underlining indicates intronic splice and branch site consensus sequences, and slashes separate the four fragments), which derived from PIP85.B: 5′-GpppGGCGAAUUCGAGCUCACUCUCUUCCGCAUCGCUGUCUGCGAGGUACCCUACCAG/GGUGUCGC(A*)G/GUGAGUAUGGAUCCCUCUAAAAGCGGGCAUGACUUCUAGAGUAGUCCAGGGUUUCCGAGGGUUUCGUCGACGAUGUCAGCUCGUCUCGAGGGUGCUGACUGGCCUCCUUUUUCCUCCCUCCACAG/32P-GUCCUACACAACAUACUGCAGGACAAACUCUUCGCGGUCUCUGCAUGCAAGCU-3′. Following purification of full-length pre-mRNA, benzophenone derivatization of the convertible nucleotide was performed as described previously (MacMillan et al. 1994; Moore and Query 1998). Control PIP:E1(B) pre-mRNA was generated by three-way ligation of fragments 1, 2, and a single 5′-32P-labeled transcript comprising fragments 3 and 4. Control PIP:E1(B) mRNA was synthesized by three-way ligation of fragments 1, 2, and 5′-32P-labeled fragment 4. All three RNAs were also synthesized with an unmodified version of fragment 2 for the benzophenone-less reactions in Figure 2B. Other experimental pre-mRNAs and corresponding controls were generated similarly.
The relevant sequences around the 5′ and 3′ splice sites, as well as the position of the photoreactive group (4-thio-dU; Glen Research) and 32P, are indicated in Figure 4A for each experimental pre-mRNA.

Nuclear extracts and splicing reactions

HeLa-cell nuclear extracts (Dignam et al. 1983) were prepared with modifications described (Abmayr et al. 1988) from cells grown by Cellex Biosciences. Splicing reactions (20 µl) containing ∼10 fmoles of labeled RNA were carried out in 40% nuclear extract, 2 mM Mg(OAc)2, 20 mM potassium glutamate, 1 mM ATP, 5 mM creatine phosphate, and 0.05 mg/ml of E. coli tRNA. Following incubation at 30°C for the times indicated, each reaction was supplemented with heparin (0.5 mg/ml final concentration) and incubated for an additional 5 min at 30°C. Splicing inhibition was accomplished either by incubating reactions for 10 min at 30°C without ATP and creatine phosphate prior to pre-mRNA addition, or by adding 5 µM of a 2′-O-methyl oligonucleotide (5′-AGAUACUACACUUGAUC-3′) complementary to nucleotides 27-41 of human U2 snRNA and incubating for 10 min at 30°C prior to pre-mRNA addition. For RNA analysis, reactions were quenched with 10 volumes of splicing stop buffer (100 mM Tris at pH 7.5, 10 mM EDTA, 1% SDS, 150 mM NaCl, 300 mM NaOAc), phenol/chloroform extracted and ethanol precipitated. RNAs were separated in 15% denaturing polyacrylamide gels and visualized by autoradiography or with a Molecular Dynamics PhosphorImager.

Cross-linking

Photoreactive substrates of high specific activity (8000 cpm/fmole) were incubated (∼10^4 cpm/reaction) under splicing conditions. Cross-linking was performed on ice by irradiation for 20 min with a 302 or 365 nm hand-held lamp (UltraViolet Products) for benzophenone- and 4-thio-dU-derivatized RNAs, respectively. Reactions were digested with either 0.1 mg/ml RNase A [Sigma; for PIP:E1(B) and PIP:E2(S) RNAs] or 1 U/ml RNase T1 [Sigma; for PIP:E1(S) RNAs] for 30 min at 37°C. For RNase treatment after protein denaturation, samples were boiled for 2 min in the presence of 0.5% SDS prior to RNase addition. Proteinase K treatment was performed in 0.5% SDS after both cross-linking and RNase digestion by adding proteinase K to 1 mg/ml (Boehringer) and incubating for 5 min at 65°C and then 30 min at 30°C. Cross-linked proteins were separated in SDS-16% polyacrylamide (200:1 acrylamide:bis) gels or in SDS-7.5% and SDS-10% polyacrylamide (29:1 acrylamide:bis) gels and detected by autoradiography or PhosphorImaging. 14C-labeled protein molecular weight standards (GIBCO BRL) were run as controls.

Immunoprecipitations

Antibodies were bound to protein A-Sepharose (PAS) beads (Pharmacia) via rabbit anti-mouse IgG + IgM (Pierce) for NM4, B1C8, and SRp20 antiserum, and directly to the beads for hPrp8 antiserum and preimmune serum. For IPs, 20 µl of cross-linked samples were diluted with 200 µl of IP100 and combined with 40 µl of a 50% slurry of PAS-bound antibodies. Samples were denatured by boiling for 2 min in 0.05%, 0.15%, or 0.5% (wt/vol) SDS prior to dilution and IP, where indicated. Cross-linked proteins were bound over 3 hr at 4°C with gentle mixing, after which beads were washed three times with IP100, or twice with IP150-1 M urea and once with IP100 for hPrp8p IPs.

Glycerol gradient analysis

Splicing reactions (30 µl) were incubated for 2 hr at 30°C, supplemented to 0.5 mg/ml heparin, and further incubated for 5 min at 30°C.
Reactions were cross-linked either before or after sedimentation through a 10%-30% glycerol gradient (600-µl gradient containing 50 mM Tris-glycine at pH 8.8) at 38,000 rpm for 2.5 hr at 4°C in a SW55 rotor (Beckman). Gradients were manually fractionated into 16 × 25 µl aliquots. Typically, 5 µl of each was extracted with phenol/chloroform, ethanol precipitated, and used for RNA analyses. For Northern analysis, RNAs were separated in an 8% denaturing polyacrylamide gel, transferred to an ICN nylon membrane, and hybridized with a probe complementary to U5 snRNA (Konarska and Sharp 1987). The remainder of each fraction was treated with RNase A, and cross-linked proteins were separated and detected as described above.
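As a quick plausibility check on the cross-linking conditions above, the stated specific activity and per-reaction counts determine roughly how much photoreactive substrate each 20-µl reaction received. The short R calculation below is our own back-of-the-envelope restatement, not a figure reported by the authors.

# Back-of-the-envelope check (ours, not from the paper): substrate input per
# cross-linking reaction, from the specific activity quoted above.
specific_activity <- 8000    # cpm per fmole
cpm_per_reaction  <- 1e4     # ~10^4 cpm incubated per reaction
reaction_volume_l <- 20e-6   # 20 µl splicing reactions, in litres

fmol_per_reaction <- cpm_per_reaction / specific_activity        # ~1.25 fmol
conc_pM <- fmol_per_reaction * 1e-15 / reaction_volume_l * 1e12  # fmol -> mol -> pM
cat(sprintf("~%.2f fmol of substrate per reaction, i.e. ~%.0f pM\n",
            fmol_per_reaction, conc_pM))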
Prediction of mortality and major cardiovascular complications in type 2 diabetes: External validation of UK Prospective Diabetes Study outcomes model version 2 in two European observational cohorts

Aims: To externally validate the UK Prospective Diabetes Study Outcomes Model version 2 (UKPDS-OM2) by comparing the predicted and observed outcomes in two European population-based cohorts of people with type 2 diabetes.

Conclusions: The UKPDS-OM2 consistently overpredicted the risk of mortality and MI in both cohorts during follow-up. Period effects may partially explain the differences. Results indicate that transferability is not satisfactory for all outcomes, and new or adjusted risk equations may be needed before applying the model to the Italian or Dutch settings.

The UKPDS-OM2 has been used extensively 3,4 and has been validated internally as well as externally. 3,5 The UKPDS-OM2 has significant advantages over version 1, as it is based on longer follow-up data (almost double the follow-up time), simulates more outcomes, and captures more comprehensively the progression of diabetes. 2,6 In view of a potential wide utilization of the UKPDS-OM2 in cost-effectiveness analysis and in the evaluation of strategies for the management of T2D at the European level in the future, external validation in data across European countries is of great interest. In particular, with the aim of using the UKPDS-OM2 to support cost-effectiveness studies of new biomarkers in Western and Southern European countries with relatively extensive diabetes care programmes in place, an external validation using real-world data at the European level was necessary. As observed in a validation of the first version of the UKPDS-OM, 7 differences in health and healthcare among countries, reflected in differences in variables such as life expectancy of the general population and mortality risk for T2D, are likely to produce biased estimates of the outcomes. An assessment of model behaviour in different contexts can provide information on the factors affecting validity in different settings according to population characteristics.

The previous release of the model, the UKPDS-OM1, has been validated in several settings. 7-10 Previous patient-level validation work on the UKPDS-OM2 equations has been done in the United States 11,12 and Germany. 13 These studies consistently report overprediction of all-cause mortality, but varied in terms of performance for the other outcomes. Since these results are contradictory and the sample sizes were small (fewer than 500 participants), evidence on model performance in a European setting is currently quite limited.

The aim of the present study was to assess the performance of the UKPDS-OM2 in two large unselected cohorts, with a long follow-up, representative of different epidemiological scenarios within the European context: the Casale Monferrato Survey (CMS) from Italy and the Hoorn Diabetes Care System (DCS) cohort from the Netherlands.

| METHODS

The UKPDS-OM2 was used to simulate the DCS and CMS populations from baseline up to 10 and 15 years, respectively, in order to compare its predicted cumulative incidences of T2D-related health outcomes with the observed cumulative incidences. The outcomes considered were all-cause mortality and the incidence of the following fatal and non-fatal events: myocardial infarction (MI), stroke, congestive heart failure (CHF), and other ischaemic heart disease (IHD).
The list of International Classification of Diseases, 9th revision (ICD-9) and 10th revision (ICD-10) codes used for defining fatal and non-fatal events was derived from the UKPDS and is provided in Table S1. Patients were included in the analysis if data were available at baseline on a predefined core set of risk factors, namely: sex; age; duration of diabetes (years); BMI; smoking status (current smoker or not); total, HDL and LDL cholesterol; systolic blood pressure; glycated haemoglobin (HbA1c); and estimated glomerular filtration rate (eGFR).

| Patient data

Two unselected observational cohorts, the CMS cohort (n = 1931) 14,15 and the DCS cohort (n = 5188), 16,17 were used to inform the UKPDS-OM2 with patient-level data. Details on data selection and handling of missing data are reported in Appendix S1 (see sections "Patient data", "Missing data" and Tables S2-S9).

| UKPDS outcomes model version 2

The UKPDS-OM2 is based on patient-level data from the UKPDS. 2 It was developed to replace the UKPDS-OM1, since additional information collected during the UKPDS 10-year post-trial monitoring period allowed data on new risk factors and outcomes to be incorporated. We provide the characteristics of the UKPDS cohort used to inform the model, at 7 and 11 years of follow-up, in Table 1. Further details on the model are reported in Appendix S1 (see section "The UKPDS Outcomes Model version 2").

| Model validation

The model was run for each cohort using all patients with imputed data from time of entry into the DCS and CMS cohorts up to 10 and 15 years of follow-up, respectively. In predicting the incidence of each T2D-related complication, only the first event after diagnosis was counted. We removed individuals with pre-existing events, resulting in specific sample sizes for each type of event (Table S10 in Appendix S1). Model validation was performed by comparing UKPDS-OM2 predictions with the mean and 95% confidence interval (CI) of the observed cumulative incidences in each cohort at 5, 10 and 15 (CMS only) years of follow-up, that is, "calibration-in-the-large". The UKPDS-OM2 was judged to be well calibrated for a particular outcome if the predicted probability fell within the 95% CI of the probability estimated from the observed data. We also calculated the difference between predicted and observed means in cumulative incidence using measures from bias (difference between observed and predicted means) to mean absolute percentage error (MAPE; average of the error in percentage terms). 18 In general, MAPE is easier to compare across cohorts and outcomes as it is a relative measure; values closer to zero indicate better accuracy. Finally, predicted and observed cumulative incidences for all the outcomes at the different timepoints (5, 10 and, for CMS only, 15 years) were plotted together in one graph per cohort. We then estimated a linear regression for each cohort and report the resulting R2. Model discrimination, that is, the ability to distinguish individuals with different outcomes, 19 was estimated with C-statistics using each patient's observed survival time and predicted event-free survival at 5, 10 and 15 (CMS only) years, for each of the outcomes. Further details are reported in Appendix S1 (see "Missing data", "Validation" and "Subgroup and sensitivity analyses" sections). Analyses were performed with SAS 9.4 for the CMS and R 4.0.0 for the DCS cohorts.
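To make the validation metrics concrete, the R sketch below computes bias, MAPE, the regression R2 and a concordance (C-) statistic of the kind described above. The numbers are invented placeholders rather than study data, the variable names are our own, and survival::concordance is one standard way to obtain a C-statistic from observed survival times and predicted risks.

# Illustrative R sketch (toy numbers, not study data).
library(survival)

observed  <- c(0.082, 0.171, 0.262)   # observed cumulative incidence at 5/10/15 years
predicted <- c(0.095, 0.201, 0.330)   # model-predicted cumulative incidence

bias <- observed - predicted                                # calibration-in-the-large
mape <- mean(abs((observed - predicted) / observed)) * 100  # mean absolute % error
r2   <- summary(lm(predicted ~ observed))$r.squared         # agreement across timepoints

# Discrimination: C-statistic from observed survival and predicted risk.
# reverse = TRUE because higher predicted risk should imply shorter survival.
set.seed(1)
time   <- rexp(200, rate = 0.05)                  # observed survival times
status <- rbinom(200, 1, 0.6)                     # event indicator
risk   <- 1 / (1 + time) + rnorm(200, sd = 0.05)  # toy predicted risk, inversely related to time
cstat  <- concordance(Surv(time, status) ~ risk, reverse = TRUE)$concordance
cat(sprintf("bias = %s; MAPE = %.1f%%; R2 = %.2f; C = %.2f\n",
            paste(round(bias, 3), collapse = ", "), mape, r2, cstat))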
| RESULTS

Baseline characteristics of the cohorts are provided in Table 1. Figure S2 in Appendix S1 plots the predicted versus observed cumulative incidence for all outcomes in one graph and all timepoints (at 5, 10 and, for CMS, 15 years), by cohort. In both cohorts, predictions were strongly associated with observations, with R2 above 0.9. Results of the analysis of all-cause mortality by subgroup are reported graphically in Figure S3 in Appendix S1. In the CMS cohort, the UKPDS-OM2 showed a reasonable performance (within the 95% CI of the observed rate) for the following subgroups: men, age below 65 years (up to 10 years of follow-up), and a median duration of diabetes.

In a sensitivity analysis carrying risk factors during follow-up forward from the last observed values (from baseline in CMS), the effect was negligible for the outcomes in the DCS. However, in the CMS, where risk factors were held constant (carried forward) from baseline, the overestimation of mortality and MI rates was reduced (see Figure S4 in Appendix S1). Table S12 in Appendix S1 reports the C-statistics concerning the UKPDS-OM2's discriminatory capability in the CMS and DCS cohorts. For the CMS cohort, C-statistic values were above 70% for all-cause mortality across the three timepoints considered. For the remaining outcomes, the C-statistic values were approximately 60% to 65% at 5 and 10 years, and lower at 15 years for MI and IHD (59%). In the DCS cohort, the C-statistics indicated a reasonably good model performance (above 70%) for mortality, heart failure, AMI and stroke (at 10 years). The model predictions for IHD performed the worst in terms of discrimination in the DCS (66% at 5 and 10 years).

| DISCUSSION

To allow the wide utilization of the UKPDS-OM2 in T2D we need to assess its performance in cohorts of patients different from those used for model development. We externally validated the UKPDS-OM2 using individual patient-level data from two European cohorts, the Italian CMS cohort (South Europe) and the Dutch DCS cohort (Western Europe). We found the UKPDS-OM2 to overpredict the risk of all-cause mortality and MI in both cohorts, but to perform well for stroke and IHD outcomes. The predicted incidence of CHF was accurate in the Dutch cohort but was considerably underestimated in the Italian cohort. Furthermore, model performance deteriorated the longer the period of analysis. In terms of model discrimination, the UKPDS-OM2 performed better in the DCS cohort (all but one outcome with a C-statistic equal to or above 70%) compared to the CMS cohort (only mortality above 70%). The subgroup analyses on model performance for mortality indicated specifically room for improvement in elderly patients. This is reasonable, since the UKPDS cohort started with a population aged 58.5 years. In the CMS and DCS cohorts, the percentage of patients above 65 years of age was substantial (>50%), and 23% and 19% of patients, respectively, were aged above 75 years.

TABLE 2: Comparison of observed and UKPDS-OM2-predicted cumulative incidence at 5, 10 and 15 years, and relative bias, by outcome and cohort.

Our results add to previous findings suggesting that the UKPDS-OM2 overpredicts mortality and certain cardiovascular outcomes such as MI. [11][12][13] Other models incorporating equations from the UKPDS-OM2 have also reported validation exercises. 18,22,23
However, these validation studies were based on aggregate data from published studies, and the choice of using individual data allowed us to better match outcomes, to capture variation in patient characteristics and outcomes, to address missing risk factor values, and to compare several subgroups of interest. To perform the present validation a very significant effort was undertaken to harmonize data to inform the UKPDS-OM2 and to extract the incidence of events in both cohorts. Particular commitment was necessary to identify the core set of risk factors to be used as inclusion criteria, in order to maximize the number of patients to be included in both cohorts. Key variables in several cases needed to be recoded to harmonize cohort data with UKPDS-OM2 requirements. The potential misalignment of model outcomes with the outcomes recorded in the two cohorts is worth noting. We sought to align UKPDS-OM2 outcomes to diagnostic codes (ICD-9 and ICD-10) in the administrative records of both cohorts, but could not be certain of an exact correspondence.

Our work has some limitations. First, some risk factor data needed to inform the UKPDS-OM2 were not available. Of these, some risk factors were completely missing in the cohorts (eg, atrial fibrillation, peripheral vascular disease, heart rate) and were imputed based on their association with available risk factors derived using UKPDS data. Second, risk factor data were completely missing at follow-up in the CMS cohort and censored in the DCS cohort. These missing risk factor time paths were imputed using risk factor time-path equations developed by the UKPDS modelling team, and based on the UKPDS cohort. However, in sensitivity analysis, when we carried forward the last observed values, the findings were similar to those obtained using the imputed risk factor time paths. The imputation process allowed us to maximize the number of patients available for analysis, but could result in a possible overestimation of the differences between predicted and observed events if there is a significant mismatch between imputed and actual (but unobserved) risk factor time paths. However, using risk equations from the UKPDS-OM to impute missing values allowed us to test the model as it is likely to be used by its end users.

Third, our study populations were those with complete observations at baseline on core risk factors. Thus, we were not able to assess model performance for patients with missing data at baseline. This could have affected the generalizability of our cohorts; however, we expect this effect to be relatively limited. Differences between included and excluded subjects in available variables were low for the CMS cohort (data not shown), while for the DCS cohort only the percentage of smokers and the level of eGFR were higher in the excluded group (data not shown). Another limitation was the exclusion of microvascular endpoints.

Besides all the mentioned limitations, this study represents an attempt to validate a decision model using unselected per-patient data, as they are usually available in the real world, that is, with missing values, self-reported data and few observable outcomes. In the present study, we showed that the UKPDS-OM2 overpredicted the risk of mortality and MI in both cohorts, while performing well for stroke and IHD.
1,000,000 cases of COVID-19 outside of China: The date predicted by a simple heuristic

We forecast 1,000,000 COVID-19 cases outside of China by March 31st, 2020 based on a heuristic and WHO situation reports. We do not model the COVID-19 pandemic; we model only the number of cases. The proposed heuristic is based on the simple observation that the plot of the given data is well approximated by an exponential curve. The exponential curve is used for forecasting the growth of new cases. It has been tested against the most recent data point: when fitted to the 57 previous WHO situation reports, its prediction for the last day added (18 March 2020) was accurate to within 1.29%.

Keywords: prediction, forecast, pandemic, COVID-19, coronavirus, exponential growth curve parameter, heuristic, epidemiology, extrapolation, abductive reasoning, WHO situation report.

Introduction

Using WHO situation reports for Coronavirus disease 2019 (COVID-19), this study forecasts 1,000,000 confirmed cases outside of China in approximately two weeks. So far, 59 situation reports have been posted by WHO (as of 20 March 2020). In this study we refer to reports #31-#57. Due to potentially overwhelming numbers of severe COVID-19 patients, medical resources need to be allocated wisely. With hospital beds and life-saving machinery, such as ventilators, in limited supply, preparations should be made ahead of time on how to allocate these finite resources. More information about COVID-19 can be found in [2,3,7]. The best course of action to "flatten the curve" is to follow WHO guidelines. The best way to keep hospitals under capacity is social distancing: limiting or cancelling large gatherings, only travelling when necessary, and keeping a distance from others all help to prevent the spread.

Heuristic prediction

The presented heuristic is based on the exponential growth of the data collected from WHO situation reports for days 31 to 57. As pointed out in [4], the predictability could be improved by pairwise comparisons based on abductive reasoning [5]. Abduction is frequently used in diagnostic expert systems, and the abductive reasoning (or inference) process was used for this study. It is a type of logical inference which starts with a set of observations and then searches for the simplest and most likely explanation for them. In our case, the most likely explanation is exponential growth. This process yields a plausible conclusion but may not always positively verify it. Abductive conclusions are heuristics (see [1]) and hence involve uncertainty, which bounded rationality expresses as satisficing. Satisficing is a decision-making process which takes the costs of optimization into account, thereby producing an efficient but suboptimal result. This can be compared with maximizing, which produces an optimal result at the expense of suboptimal costs. Extrapolation is a mathematical estimation, predicting unknown future values based on existing values. Compared to interpolation, which determines unknown values between existing values, extrapolation is less accurate. The best method for extrapolation depends on the method used to initially acquire the data. The WHO situation report #31 (see [7]) has been assumed as the starting data point since it shows, for the first time, over 1000 cases outside China (see Fig. 1).
Due to the risk that data from any individual country may be biased or politically misreported, we decided to use data from many countries; as such, any doctored data becomes statistically insignificant. In China, where COVID-19 originated, the situation seems to be under control, as Fig. 2 indicates. For this reason, including data about China would skew the results or at least make them difficult to obtain. Visual inspection suggested exponential growth, but this could not simply be assumed; we therefore verified it in R with its nls function. According to [6]: Nonlinear Least Squares (nls) determines the nonlinear (weighted) least-squares estimates of the parameters of a nonlinear model. An nls object is a type of fitted model object. It has methods for the generic functions anova, coef, confint, deviance, df.residual, fitted, formula, logLik, predict, print, profile, residuals, summary, vcov and weights. Variables in formula (and weights if not missing) are looked for first in data, then the environment of formula and finally along the search path. Functions in formula are searched for first in the environment of formula and then along the search path. For more details see [8].

We consider a non-linear model of the form

y_i = f(x_i) + ε_i,

with an exponential function f(·) of the form

f(x) = a · exp(b · x).

In order to estimate the parameters a, b, we apply the non-linear least squares method, in which the residual sum of squares is minimized, see [8]:

RSS(a, b) = Σ_i (y_i − a · exp(b · x_i))²,

where y_i is the total number of people infected by COVID-19 outside China and x is the day number. For the estimation of the parameters a, b we use the well-known nls function from the R program; the residual standard error of the fit is S_n = 1827. According to these results, we predict 1,000,000 COVID-19 cases outside of China by WHO situation report day 70/71, which is 31 March/01 April (see Fig. 3). The lines of the plot, up to the last day of WHO situation reports, are: (1) the blue line connecting the WHO data up to 18 March, (2) the red line standing for 1,000,000 cases, (3) the exponential curve computed by R to be as close as possible to the real data up to 18 March. The vertical blue bar (Fig. 3) shows where the WHO data end and where the predicted results start. For this reason, on the right-hand side of the vertical bar there is only one line, which is the computed exponential curve. Evidently, we do not know for how many days such an exponential curve will remain an acceptable extrapolation; a million cases in 16 days, however, seems highly likely. Such a finding has considerable importance and should not be ignored.

Conclusions

To the best of our knowledge, this may be the first study proposing a heuristic for computing the parameters a and b of the approximating exponential curve a * exp(b * x), with x the day number, for the COVID-19 situation. The more people know about our finding, the better the chance that they will regard self-care as a major contribution to preventing the spread of COVID-19. Our assumptions do not consider the complexity of a pandemic. In particular, we do not consider flattening of the approximating exponential curve. It is simply a short-term prediction model, but we believe it is very accurate. As for prediction standards, a 1.29% error is more than acceptable for short-term predictions. We regard the WHO situation report #31 as the starting data point since it shows over 1000 cases outside China for the first time.
The presented approach is based on a heuristic solution and makes the realistic assumption that the current trend can continue for the next 17 days. Obviously, it is an abstract, mathematical model; the reality may be different and the COVID-19 situation may change in just a few days.
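A minimal R sketch of the fitting procedure described above follows. The case counts are synthetic placeholders rather than the WHO figures, and the starting values passed to nls are our own rough guesses; only the model form y ~ a * exp(b * x) is taken from the text.

# Sketch of the heuristic (synthetic placeholder data, not the WHO counts).
# x: WHO situation report day number; y: cumulative confirmed cases outside China.
set.seed(42)
day   <- 31:57
cases <- 1000 * exp(0.19 * (day - 31)) * runif(length(day), 0.95, 1.05)
df    <- data.frame(x = day, y = cases)

# Non-linear least squares fit of y = a * exp(b * x); start values are rough guesses.
fit <- nls(y ~ a * exp(b * x), data = df, start = list(a = 5, b = 0.2))
a <- coef(fit)[["a"]]
b <- coef(fit)[["b"]]

# Day on which the fitted curve reaches 1,000,000 cases: solve a * exp(b * x) = 1e6.
day_1e6 <- log(1e6 / a) / b
cat(sprintf("fitted a = %.3g, b = %.3g; 1e6 cases reached around day %.1f\n",
            a, b, day_1e6))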
Dual role of myosin II during Drosophila imaginal disc metamorphosis

The motor protein non-muscle myosin II is a major driver of the movements that sculpt three-dimensional organs from two-dimensional epithelia. The machinery of morphogenesis is well established but the logic of its control remains unclear in complex organs. Here we use live imaging and ex vivo culture to report a dual role of myosin II in regulating the development of the Drosophila wing. First, myosin II drives the contraction of a ring of cells that surround the squamous peripodial epithelium, providing the force to fold the whole disc through about 90°. Second, myosin II is needed to allow the squamous cells to expand and then retract at the end of eversion. The combination of genetics and live imaging allows us to describe and understand the tissue dynamics, and the logic of force generation needed to transform a relatively simple imaginal disc into a more complex and three-dimensional adult wing.

INTRODUCTION

Animal organs develop primarily from simple epithelial sheets: tightly joined cells with apico-basal polarity but otherwise lacking three-dimensional structure. Patterning in the plane of the epithelium occurs when cells remodel contacts with their neighbours, either individually or in groups. These changes can lead, for example, to the differentiation of an organised epithelial mosaic of specialised cell types, as in the case of the Drosophila retina, or to changes in the size and shape of the epithelium, as occurs in Drosophila germ band elongation [1][2][3]. Patterning in the third dimension, however, is less well characterised: how are two-dimensional epithelia transformed into the complex architecture of functional three-dimensional organs? It is nevertheless clear that some of the same processes of remodelling single cells can produce substantial tissue shape change. Overall, however, there are few cases where there is a clear understanding of three-dimensional organ morphogenesis.

Non-muscle myosin II (myosin II) is a contractile protein that in many contexts is responsible for sculpting the actin cytoskeleton of cells, leading to developmental shape changes. Myosin II is a hexamer of three pairs of subunits. In Drosophila, the myosin II heavy chain is encoded by zipper (zip) 4, the regulatory light chain by spaghetti squash (sqh) 5,6 and the essential light chain by Mlc-c. The regulation of myosin II activity generates cell shape changes. Phosphorylation of the regulatory light chain induces myosin II activation (reviewed in 7) by increasing its recruitment to the cell cortex. This activation in the cortical region initiates apical contraction of the cell, which leads to a change from a columnar shape to a "bottle" shape, narrower at the apical region than basally. This deformation can occur in groups of cells, generating a change in overall epithelial shape. For example, during formation of the ventral furrow in the Drosophila embryo or the morphogenetic furrow of the eye imaginal disc, a line of cells co-ordinately constricts apically to form an indentation 1,[8][9][10][11][12][13][14]. A similar process happens during morphogenesis of the vertebrate neural tube in mammals 13,15,16. In the case of the formation of tubular organs, like the tracheal system of Drosophila, the process starts with the myosin II-induced apical constriction of a circular group of cells 17,18.
Asymmetric accumulation of myosin on the apical surface of cells (only at some cell vertices and not at others) is related to other kinds of morphogenetic events, including intercalation of cells during the convergent extension of the Drosophila embryonic germ band 2, and rotation of the ommatidial precursors in the eye imaginal disc 1. The models above describe a deformation in a broadly homogeneous epithelium, where all the cells are equivalent in shape and size, and have similar physical attributes like rigidity or elasticity. More complex systems include a variety of cell characteristics. For example, in Drosophila embryonic dorsal closure two kinds of cell are involved: squamous and columnar. Improvements in imaging techniques have been used to show that closure of the epidermis over the amnioserosa is a complex movement driven both by the pulsating force generated by the amnioserosa and by the contraction of an acto-myosin ring in the cells of the leading edge 19,20,21.

We have investigated the morphogenetic mechanisms of a yet more complex structure, a three-dimensional tissue. The transformation during metamorphosis of the Drosophila wing imaginal disc into the adult wing involves extensive remodelling. Imaginal discs are more complex structures than often described: a continuous epithelial sac formed by two layers of different morphology, the peripodial epithelium (PE), mainly composed of squamous cells, and the disc proper (DP), mainly composed of columnar cells. A third class, cuboidal cells, situated towards the edge of the wing disc, forms a transition zone between the columnar and squamous epithelia [22][23][24]. Interplay between the layers underlies the organ-sculpting movements of metamorphosis 23,[25][26][27][28]. Metamorphosis involves two major steps: folding of the disc and the retraction of the peripodial membrane 29. We developed a method of ex vivo culture of imaginal wing discs 29 and have focused our analysis on the folding of the wing disc. We have described the main processes involved in this example of morphogenesis: where the forces are generated, the response of the surrounding cells and the genetic mechanisms involved. Strikingly, we have discovered that morphogenesis depends on a dual myosin II role. First, it accumulates throughout the cytoplasm of a ring of cuboidal cells, transforming them into a multicellular 'drawstring' that induces folding of the peripodial epithelium; second, it provides the squamous cells with the capacity and strength necessary to expand in response to the induced tension.

Elevated myosin II levels in a stripe of cells

When imaginal discs are fixed for immuno-histochemical analysis, the morphology of the squamous peripodial epithelium (PE) is disrupted. We therefore analysed gene expression by live imaging of unfixed discs (Fig. 1A, C). For clarity, we will divide the PE cells into two subtypes, the stripe cells (Fig. 1B, D, E, green) and central cells (Fig. 1B, D, blue). The central cells are typical squamous cells, polygonal, very flat, with a large surface area (Fig. 1F, I); the stripe cells surround the central cells. They are typically referred to as cuboidal, or border cells, although their actual shape is fusiform, with a cubic section and the longer axis of the cells parallel to the compartment boundary (Fig. 1H, green; Fig. 1G). We found that the whole PE expressed high levels of myosin II, but that there was a pronounced increase in a stripe of cells that forms a ring of high expression.
This stripe runs from the proximal tip of the disc, the stalk region, following a straight line along the A/P boundary, and then surrounds the cells of the wing pouch. The ring is finally closed by a more diffuse line of expression along the posterior region over the notum (Fig. 1A-D). Interestingly, both stripe and central cells expressed high levels of Sqh-GFP (and Zip-GFP, not shown) at the interfaces between cells and throughout their cytoplasm. This contrasts with the columnar epithelium of the subjacent disc proper, where myosin was mainly accumulated in the apical region (Fig. 1G). Since myosin II contractility works through its motor properties on actin fibres, we analyzed the distribution of F-actin in the PE cells using two different GFP constructs, lifeact-GFP and the actin-binding domain of moesin (30, UAS-GMA-GFP, not shown). Both showed a similar accumulation pattern. Actin accumulated apically at the borders between central cells. This pattern is similar to the one found in the columnar cells of the disc proper. Interestingly, we also observed actin in prominent filamentous structures at the basal level of the cells (Supplementary Fig. S1). The high levels of myosin II in the cytoplasm partially mask the co-localisation with actin, but analysis of single confocal sections showed an enrichment of myosin II in the apical actin structures (arrowhead on Fig. 1I, middle). The stripe cells also showed apical actin accumulation (Fig. 1H) that co-localized with slightly raised levels of myosin II (arrowheads on Fig. 1H, right). Like the central PE cells, stripe cells showed basal accumulation of actin, forming fibre-like structures that run parallel to the long axis of the cells (Supplementary Fig. S1).

Behaviour of the central and stripe cells during eversion

Comparing a mid-third instar and a late third instar wing disc (Fig. 1A, C), it was apparent that the position of the myosin II stripe moved during development. The dorso/ventral boundary of the disc proper moves until it reaches the margin of the disc, allowing the apposition of the dorsal and ventral surfaces of the wing (compare Fig. 1B and D). This movement is accompanied by an expansion of the overlying PE, resulting in the ring of stripe cells being no longer visible in a frontal view of the pupal disc. In this view we could see just the lateral stripes, which themselves moved laterally as eversion proceeded.

To follow these changes in detail, we used real-time imaging to examine the movement of the PE during the first step of eversion, when the whole disc folds (about three hours of ex vivo culture). As expected, we observed the displacement of the two stripes to the lateral region, eventually reaching the edges of the disc (Fig. 2A, orange arrows; Supplementary movie 1). A lateral view allowed us to follow the myosin II stripe parallel to the A/P boundary in its movement to the opposite face of the disc (Fig. 2B, Supplementary movie 2). Three hours later, as the disc folded, the stripe cells (Fig. 2B) progressively moved and concentrated in a small region (Fig. 2B, yellow arrow). This accumulation of cells with a high level of myosin II was also seen in unfixed discs dissected from the pupa at a stage equivalent to the cultured discs (Fig. 2C). The transition from a long stripe to a small cluster of cells suggested that the stripes were not only moving laterally, but also moved toward the stalk during this first phase of eversion, acting like a drawstring across the epithelium.
To confirm our interpretation of this dynamic process we tracked individual cells belonging to the A/P compartment boundary stripe, following how the nuclei were displaced along the anterior-posterior border (Fig. 2D). In contrast to the movement of the stripe cells, central cells remained broadly static, showing that there is a net displacement of stripe cells with respect to the central cells (Fig. 2D, Supplementary movie 3). The same was clear when comparing nuclei in the stripe with nuclei in the adjacent row of central cells (white nuclei and blue nuclei, respectively, in Fig. 2E, Supplementary movie 4). The above results imply that, to accommodate stripe movement, there is a growth of the central territory of the PE. We have never observed any cell division in the PE at the stages of relevance to this work, so this expansion is most easily explained by enlargement of the surface area of the central squamous cells. We quantified this expansion by measuring the area stained by Grunge-Gal4, a marker of the central territory, in different discs at two stages, 0 h and 3.5 h of culture (n=10 for each time point). We quantified what proportion of the whole disc surface is covered by Grunge-Gal4 expression: it increased from 64.4±6% to 87.0±8% (Student's t-test, p<0.00001, n=10 at each time point).

The columnar epithelium of the disc proper also moved during these morphogenetic events. Specifically, the hinge region comprised three folds that can accommodate the bending of the wing disc. During the transition from the late third instar, when the wing disc is flat, to the more three-dimensional and folded stage, the middle fold (Fig. 2F, yellow arrow) disappeared, while the proximal fold became deeper (Fig. 2F, green arrow). At the same time, the notum adopted a more complex three-dimensional shape (Fig. 2F, red arrow), and the wing continued protruding, to appose fully the dorsal and ventral compartments. Together, these observations present an overall morphogenetic scenario of the folding of the disc during the first phase of eversion. The dynamic distribution of myosin II-enriched cells in the peripodial epithelium suggests that orchestrated, myosin II-driven movement of the stripe cells could drive these large-scale tissue movements (Fig. 2G).

Myosin II in the stripe is necessary to induce folding

The cell dynamics and myosin II expression described above suggest that the ring of stripe cells could be a force generator that stretches the PE, thereby driving disc folding. On the other hand, the unusual distribution of myosin II throughout the cells, rather than just apically, makes them distinct from previously described cases where apical myosin II contractility controls morphogenesis. We directly tested the role of myosin II by examining the consequences of its reduction in the stripe cells.

Reduction of myosin II specifically in the stripe cells affected the whole process of eversion (Fig. 3A, B). Initially the disc was relatively normal, but there was no lateral displacement of the stripes, and the first step of eversion was impaired: the columnar epithelium failed to make the 90° fold observed in wild type discs (Fig. 3A-D). We also observed a later phenotype in which the stalk that attaches the disc to the pupal wall did not open, and there was no retraction of the PE at the end of eversion.
In some cases, after more than 10 hours the PE ruptured, usually over the wing pouch, and the developing wing emerged through the tear (not shown). These observations are based on six different movies that show the same phenotype. They suggest that lack of myosin II in the stripe cells eliminated the force necessary to generate disc folding. If this interpretation is correct, it implies that the folding force is generated by the movement of the stripes, not by the expansion of the central cells. At later stages the PE failed to retract, leading to a loss of final eversion.

Force from the stripe generates tension over central cells

As described above, the squamous cell layer expands during the early stages of eversion to accommodate the movement of the dorso-ventral border, and then expands further during eversion. When the stretching force from the stripe cells is abolished, the expansion of the central region is reduced (Fig. 3B), suggesting that central expansion is a direct consequence of movement of the adjacent stripe cells. However, cellular membranes are not elastic, so the expansion must be accommodated by cell growth and/or shape changes. In theory, the central cells will reach a point of maximum expansion beyond which continued application of force is likely to cause the whole disc to fold. To test the hypothesis that the central cells are under tension, we used laser surgery to make small holes in the central region of the PE (n=12). We observed that as soon as the epithelium was cut, the size of the hole increased dramatically (Fig. 4A-C), indicating that it was indeed under tension. In many cases the tear in the PE continued as eversion progressed, and this prevented normal disc folding (Fig. 4D and Supplementary movie 6). The increase in the size of holes cut in late third instar wing discs (mean = 33.2-fold, n=4) was greater than when cut in mid-third instar (mean = 5.3-fold, n=4, p=0.049), further supporting the role of the stripe cells in generating tension as development proceeds. Between late third instar (mean = 33.2-fold, n=4) and prepupal discs (mean = 57.8-fold, n=4), the extent of increase also grew significantly (p=0.047) (Fig. 4C), again indicating increasing tension in the PE over this period. Note that in some cases of very small cuts, wound healing mechanisms were able to re-close the hole, leading to normal eversion (not shown). Overall, these laser surgical manipulations strongly support the hypothesis that the central cells are put under progressive tension by the movement of the stripe cells.

Myosin II allows the expansion of the central cells

Despite the central role for the stripe cells as force generators during epithelial folding, the central cells themselves also accumulated high levels of cytoplasmic myosin II. We investigated whether this was functionally important by knocking down myosin II with two different Gal4 drivers that are expressed in the central squamous cells: Grunge-Gal4 (Gug-Gal4) and UbxLDN-Gal4. As described above, Gug-Gal4 is expressed in the central squamous cells, but also in a few stripe cells and in the stalk (plus additional expression in two patches of cells in the hinge region of the columnar epithelium) (Fig. 5A, B and ref. 29). UbxLDN-Gal4 is patchily expressed only in cells of the squamous epithelium (Supplementary Fig. S3). When zip or sqh were knocked down by Gug-Gal4, elevated myosin II was still observed in the stripes, but the area occupied by the squamous cells at the late third instar/early prepupal stage was reduced.
The cell number did not change, but the nuclei of these cells were more densely packed (compare Fig. 5E), indicating that the area of individual cells was reduced. Unlike in wild type discs, there was no increase of the central region over time when myosin II function was reduced by UAS-zip-RNAi overexpression (mean 56.1±8% of apical area at time zero to 56.0±7% at 3.5 hours, n=10). This lack of expansion was most clear when comparing wild type and myosin II knockdown discs after 3.5 hours (central region 87.0±8% of apical area in the WT background compared with 56.0% when myosin II was reduced; t-test, p<0.0001). These data suggest that myosin II is needed in the squamous cells to allow their expansion. In addition, a lateral view of the discs with reduced myosin II showed abnormal apposition between the wing pouch and notum region (Fig. 5C, D). Later defects included abnormal folds in the hinge area and, in some cases, unusual folding caused by the wing margin not reaching the border of the disc, and defective lateral displacement of the stripe cells (Fig. 5F, white arrow; Supplementary movie 7). Cells of the stripes not expressing the Gug-Gal4 driver, and therefore unaffected by the myosin II knockdown, still moved, but because the stripes had not displaced laterally first, this caused the folding of the wing disc in the opposite direction to wild type (Fig. 5F). This latter result supports the use of the Gug-Gal4 driver for the study of the behaviour of central cells, because, although some stripe cells are affected (and therefore have reduced myosin II activity), the remaining stripe cells can evidently still generate force, as they are able to drive disc folding. At later stages, when myosin II was reduced, the retraction of the PE was impaired: the stalk did not open and PE retraction did not occur. Despite this disruption to the PE, the columnar disc proper epithelium developed relatively normally, with the apposition of the dorsal and ventral compartments of the wing pouch leading to rupturing of the squamous epithelium and eversion through the hole (Fig. 5F, red arrowhead; results based on five movies, all with the same phenotype). Using a single copy of the UAS-sqh-RNAi construct (weaker than UAS-zip-RNAi), the stripes were able to move to the lateral sides of the wing imaginal disc and the folding process occurred as in wild type.

Overexpression of UAS-sqhDD (a Sqh form that mimics constitutive phosphorylation and activation) in the central cells using the Gug-G4 driver also induced abnormal development (Fig. 5H and Supplementary movie 10). Prior to eversion, the folds in the hinge and wing columnar epithelia were more pronounced, and the squamous epithelial surface appeared reduced. Upon eversion, the stripes tended to move to the sides and contract, but the central region did not expand correspondingly, so the stripes did not reach the other side of the disc. As with Gug-G4>UAS-zip-RNAi, stripe contraction therefore caused folding in the opposite direction to normal. But our data suggest that the similar phenotypes have different causes: while the reduction of myosin II in the central territory prevented cells from expanding, cells with over-activated myosin II are actively contracting (based on four movies with the same phenotype). Overall, these data imply that, as in the stripe cells, the activity and levels of myosin II in the central cells must be tightly regulated, but in these cells its role is to allow, first, expansion of the squamous cells, and then to mediate PE retraction.
DISCUSSION

Our results show how myosin II acts on distinct cell populations of the peripodial epithelium to drive these complex movements (Fig. 6). Two important distinctions between this model and previously described morphogenetic processes in which myosin II participates are, firstly, the heterogeneity of the tissue, with three quite distinct cell populations (squamous, cuboidal and columnar) and, secondly, the sculpting in three dimensions that occurs upon eversion. Unlike in many of the more two-dimensional systems, wing disc morphogenesis is not explained by myosin II-driven apical constriction of cells. Instead, myosin II throughout the cytoplasm of both the fusiform stripe cells and the squamous central cells of the peripodial epithelium is needed in different contexts for movement (of stripe cells) and expansion (of squamous cells). We can now describe the epithelial events that cause eversion. First, the expansion of the central cells is necessary to accommodate the rotation of the DV border of the disc proper 29. This rotation situates the top of the stripe cells on the rear face of the wing disc. At the same time, the stripe cells flanking the central peripodial epithelium start moving. We show that this movement acts as a cellular drawstring that surrounds the presumptive wing. The tension thus induced leads to a further expansion of the squamous central cells, allowing the lateral displacement of the stripes around the edge of the disc until they reach the opposite face, the point of maximum expansion. At this stage, further stripe movement causes folding. It is notable that our model requires contributions from all three cell types: initial pressure is generated by the columnar epithelium, which appears more rigid than the PE and acts as a scaffold; the pulling force derives from the migration of the cuboidal stripe cells; and correct orientation of folding depends on the expansion of the squamous central cells.

Although myosin II is well established as the primary motor of contraction that controls epithelial morphogenesis in many contexts, the role we have described in the stripe cells is distinct. In particular, although a purse-string mechanism has also been suggested during dorsal closure of the Drosophila embryo 19,31, the cellular mechanisms underlying the generation of such contractile cables seem to be different in these two situations. During dorsal closure, myosin II contraction is limited to the apical part of the leading edge cells, while in wing disc stripe cells, myosin II accumulates throughout the whole cytoplasm. Instead of a large-scale reorganisation of actin into long apical cables, as occurs in dorsal closure, migration of the cellular stripes in the wing disc is presumably mediated by the pre-existing network of F-actin in the cell. Interestingly, the myosin II-driven movements in the PE are coordinated with other movements in the disc proper, including the rotation or bending of the wing pouch, which involves integrins and the recently described gene Dorsocross 32,33. At the same time, we show that movements in the disc proper and peripodial epithelium are at least partially independent, since the apposition of dorsal and ventral parts of the wing pouch occurred even when PE retraction was impaired. In the central squamous cells of the PE, too, myosin II function appears unconventional.
Strong reduction of myosin II prevents their expansion, while moderate reduction allows their expansion but weakens the squamous layer, so that it is ruptured by the pressure generated by the pulling force from the stripes and the pushing from the underlying disc proper. In these cells, therefore, the role of myosin II is to confer flexibility and rigidity, allowing the squamous cells to expand in response to the force generated by neighbouring regions. This more passive function requires only moderate activation of myosin II, since hyperactivation (by overexpressing UAS-sqhDD) makes the squamous cells contract, preventing their expansion. It has become clear over the last few years that the forces generated by non-muscle myosin and actin are major components of the morphogenetic machinery in a large number of contexts. However, within this broad spectrum of mechanisms, it is also clear that the logic of how myosin contractility is harnessed to shape tissues is quite diverse. As well as the previously established processes of cellular apical contraction, and the remodelling of neighbouring cell contacts within the plane of an epithelium, we can now add force generation by accumulation and activation of myosin in patterned groups of cells as a mode of controlling the architecture of tissue development.

Ex vivo imaging

Dissection and mounting of the wing imaginal disc were performed as in Aldaz et al. 29. The whole procedure was performed at room temperature. We used three different confocal systems: a Zeiss LSM510 on an upright microscope; a Zeiss LSM 710 on an inverted microscope; and a Perkin Elmer spinning disk UltraVIEW ERS with an Orca ER CCD camera (Hamamatsu). Typically we took 80-120 Z-sections at 1 µm intervals every 20-50 min. The images were analyzed using Adobe Photoshop CS2, ImageJ, and Volocity (Improvision) software.

Surface measurements

UAS-RFP, Grunge-Gal4/sqh-GFP and UAS-RFP, Grunge-Gal4/UAS-zip-RNAi;sqh-GFP late third instar larval discs were dissected and mounted in culture medium and immediately imaged, or cultured for 3.5 hours before confocal imaging. The total apical area and the area occupied by central cells were measured in each image using ImageJ software. The proportion of central cells over the total surface was calculated by dividing the "area occupied by central cells" by the "total area". At least ten images were measured for each of the four different conditions. Mean and standard deviation were calculated and the statistical significance of the comparisons was analysed using a two-tailed Student's t-test.

Laser microdissection

The discs were dissected using a Zeiss PALM laser microdissection system. Discs were mounted in eversion media in an observation chamber 29. Prior to dissection, a Z-stack of each disc was taken using a Perkin Elmer spinning disk microscope. Discs were dissected using the following settings: speed 30 mm/sec, cut energy 60, cut focus 75. The parameters were experimentally determined to cut the peripodial epithelium without scarring or photobleaching the disc proper. To measure the hole in the PE, a similar area in approximately the same region (over the hinge of the disc) was selected in every disc, and a photograph of the dissection area was taken prior to dissection. Approximately 20 minutes after dissection, a second Z-stack was taken. Images were analysed using ImageJ to measure the size of the dissection area and the area of the hole generated.
Statistical analysis of the data was performed using the statistical tools of Microsoft Excel.

Figure 2 (legend, partial): ...imaginal disc (green, Sqh-GFP; red, Histone2A-RFP). Dots track three individual cells, and white circles label three central cells. The dots move towards the tip of the disc, while the circles stay relatively stationary. E) Frames from Supplementary movie 4: the movement of stripe cells along the hinge lateral region. Stripe cells are labelled in green with odd-Gal4>UAS-lifeactin; myosin II is labelled in red with Sqh-mCherry. White dots mark three stripe cell nuclei, which move relative to three cells outside the stripe marked with blue dots. F) Section of a lateral view of a folding disc showing the movements of the disc proper. The middle fold (yellow arrow) disappears, the most proximal fold (green arrow) becomes deeper, and the notum (red arrow) becomes more three-dimensional. G) Stripe movements during the first steps of eversion. The PE stripe cells move proximally and laterally (black and green arrows, respectively). In the prepupa the A/P stripe is not visible in a frontal view of the disc; in a lateral view, the highest accumulation of myosin II is concentrated in a shorter stripe at the lateral face of the now more three-dimensional disc. Note that in all figures time is approximate, and time 0 corresponds to the starting point of image capture, not to a specific developmental stage. Scale bars: A, B, C, D, F, 50 µm; E, 10 µm.

Figure 3 (legend, partial): ...Arm-GFP (green) disc where myosin II function has been reduced in the stripes (odd-Gal4>UAS-zip-RNAi, red). The stripes do not move laterally as they do in the control. The whole disc is not able to fold properly, although the disc proper continues the normal morphological changes, such as the apposition of the wing dorsal and ventral compartments and the deepening of some folds. Scale bars: 50 µm.

Figure 4 (legend, partial): ...a hole cut near to the A/P stripe (yellow arrow). After two hours the wound healing process has reduced the area of the hole, but later the hole expands again and the folding of the whole disc is impaired. At ten hours the disc proper can be seen emerging from the hole; the opening of the stalk is also visible at this stage (white arrow). Scale bars: 50 µm.
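As a worked illustration of the surface-measurement statistics described in the Methods, the following R sketch computes the proportion of the apical surface occupied by central cells and compares the two time points with a two-tailed Student's t-test. The area values are invented placeholders, not the measured data, and the variable names are our own.

# Illustrative R sketch (invented areas, not the measured data).
set.seed(7)
n <- 10  # discs per time point
total_0h    <- rnorm(n, 1.00, 0.05)               # total apical area, arbitrary units
central_0h  <- total_0h * rnorm(n, 0.644, 0.06)   # central region ~64.4% of total at 0 h
total_35h   <- rnorm(n, 1.00, 0.05)
central_35h <- total_35h * rnorm(n, 0.870, 0.08)  # ~87.0% after 3.5 h of culture

prop_0h  <- central_0h / total_0h     # proportion of surface covered by central cells
prop_35h <- central_35h / total_35h

# Two-tailed Student's t-test comparing the two time points
res <- t.test(prop_35h, prop_0h, var.equal = TRUE, alternative = "two.sided")
cat(sprintf("0 h: %.1f%% +/- %.1f; 3.5 h: %.1f%% +/- %.1f; p = %.2g\n",
            100 * mean(prop_0h), 100 * sd(prop_0h),
            100 * mean(prop_35h), 100 * sd(prop_35h), res$p.value))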
Hidden freedom in the mode expansion on static spacetimes

We review the construction of ground states focusing on a real scalar field whose dynamics is ruled by the Klein-Gordon equation on a large class of static spacetimes. As in the analysis of the classical equations of motion, when enough isometries are present, via a mode expansion the construction of two-point correlation functions boils down to solving a second order, ordinary differential equation on an interval of the real line. Using the language of Sturm-Liouville theory, most compelling is the scenario when one endpoint of such interval is classified as a limit circle, as often happens when one is working on globally hyperbolic spacetimes with a timelike boundary. In this case, beyond initial data, one needs to specify a boundary condition both to have a well-defined classical dynamics and to select a corresponding ground state. Here, we take into account boundary conditions of Robin type by using well-known results from Sturm-Liouville theory, but we go beyond the existing literature by exploring an unnoticed freedom that emerges from the intrinsic arbitrariness of secondary solutions at a limit circle endpoint. Accordingly, we show that infinitely many one-parameter families of sensible dynamics are admissible. In other words, we emphasize that physical constraints guaranteeing the construction of full-fledged ground states do not, in general, fix one such state unambiguously. In addition, we provide, in full detail, an example on $(1 + 1)$-half Minkowski spacetime to spell out the rationale in a specific scenario where analytic formulae can be obtained.

Introduction

Quantum field theory on curved spacetimes has led to significant improvements in our understanding of different physical phenomena, ranging from particle production in cosmology to Hawking radiation in black hole physics. In the analysis of the vast majority of the available models, the first step consists in constructing full-fledged quantum states for free fields. Under the mild assumption that the correlation functions are Gaussian, this reduces to the identification of an on-shell two-point correlation function that has to abide by physically motivated constraints. The prime example in this direction is the Hadamard condition, which ensures not only that the quantum fluctuations of all observables are finite, but also that Wick ordered fields can be constructed following a locally covariant scheme. In turn, it entails control of the underlying renormalization group and of interactions that are studied at a perturbative level. Yet, in many concrete scenarios one is limited to arguing abstractly for the existence of such distinguished two-point functions, and an explicit construction is, at best, elusive.
Major improvements occur when one concentrates on static spacetimes M ≅ R × Σ, regardless of whether they are globally hyperbolic or not. Let us consider, for simplicity, a free, scalar field Ψ : M → R that abides by the Klein-Gordon equation. By calling t the time coordinate along R, the latter simplifies to

∂_t² Ψ + KΨ = 0,   (1)

where K is an elliptic, second order partial differential operator. The key rationale consists of reading K as a symmetric operator on the Hilbert space H := L²(Σ, dµ_Σ) of square-integrable functions with respect to the measure dµ_Σ induced by the Lorentzian metric tensor of M on Σ. This leads to two notable advantages, one at a classical and one at a quantum level, as described next. At a classical level, solutions of Equation (1) can be constructed as follows. Since K is a real and symmetric operator, it admits a not necessarily unique self-adjoint extension K̄. Assuming, for convenience, that K̄ has positive spectrum, and given initial data (Ψ₀, Ψ̇₀) ∈ [C∞₀(Σ) × C∞₀(Σ)] ∩ [D(K̄) × D(K̄)], for each t ∈ R we have

Ψ_t = cos(t√K̄) Ψ₀ + (√K̄)⁻¹ sin(t√K̄) Ψ̇₀,   (2)

where each term is well-defined using spectral calculus. Moreover, there exists a unique Ψ ∈ C∞(M) such that Ψ|_{Σ_t} = Ψ_t and ∇_n Ψ|_{Σ_t} = Ψ̇_t, where Σ_t ≡ {t} × Σ, t ∈ R, while n is the unit vector field normal to Σ_t. If (M, g) is a globally hyperbolic spacetime without boundary, then there exists a unique choice for K̄ and the dynamics is therefore unambiguously determined [1]. On the contrary, if K has more than one self-adjoint extension, then multiple, physically inequivalent scenarios do exist. Markedly, the latter is not a remote possibility: it occurs for example when (M, g) is a globally hyperbolic spacetime with a timelike boundary, see [2]. This class of backgrounds encompasses several physically interesting scenarios, such as AdS spacetime, which was first analyzed in the language above by Ishibashi and Wald in [3]. It must be stressed that, using the language of boundary triples [4], the infinite set of different choices of self-adjoint extensions for K can be put in correspondence with the choice of a boundary condition for Equation (1).
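As an aside of ours rather than the authors': the spectral-calculus formula in Equation (2) can be mimicked numerically. In the minimal sketch below, K is replaced by a finite-difference Dirichlet Laplacian on an interval (an illustrative assumption, as are the grid size and the Gaussian initial datum), and the discrete energy is conserved along the evolution, as the formula predicts.

```python
import numpy as np

# Minimal sketch: propagate initial data with the spectral-calculus formula
# Psi_t = cos(t sqrt(K)) Psi_0 + sqrt(K)^{-1} sin(t sqrt(K)) dPsi_0.
# K is a toy stand-in: the Dirichlet Laplacian -d^2/dx^2 on (0, 1),
# discretized by second-order finite differences.

n, L = 400, 1.0
x = np.linspace(0, L, n + 2)[1:-1]          # interior grid points
dx = x[1] - x[0]
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2  # positive, symmetric

evals, evecs = np.linalg.eigh(K)            # spectral resolution of K
sqrtK = np.sqrt(evals)

psi0  = np.exp(-200 * (x - 0.5) ** 2)       # initial profile
dpsi0 = np.zeros(n)                         # initial velocity

def propagate(t):
    """Apply cos(t sqrt(K)) and sqrt(K)^{-1} sin(t sqrt(K)) in the eigenbasis."""
    c0 = evecs.T @ psi0
    c1 = evecs.T @ dpsi0
    a = np.cos(t * sqrtK) * c0 + np.sin(t * sqrtK) / sqrtK * c1
    b = -sqrtK * np.sin(t * sqrtK) * c0 + np.cos(t * sqrtK) * c1
    return evecs @ a, evecs @ b

# The energy dPsi.dPsi + Psi.(K Psi) is conserved up to roundoff.
for t in (0.0, 0.1, 0.2):
    psi_t, dpsi_t = propagate(t)
    print(f"t={t:.1f}  energy={dpsi_t @ dpsi_t + psi_t @ (K @ psi_t):.6f}")
```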
At a quantum level, the assumption that (M, g) is static guarantees a considerable advantage: the existence of a ground state. Under the same premises of the previous paragraphs, the associated two-point function ψ₂(t, x; t′, x′) can be constructed directly as the integral kernel of the operator [5]

(2√K̄)⁻¹ e^{−i(t−t′)√K̄}.   (3)

Observe that there exists a different ground state for each self-adjoint extension K̄, in accordance with the fact that each K̄ characterizes a different physical system. Furthermore, if the underlying background is a globally hyperbolic spacetime with or without timelike boundary, then ground states are of Hadamard form as a consequence of the results of [6,7]. While at this stage the analysis of a scalar quantum field theory on a static spacetime seems a rather well-understood problem, the drawback lies in two crucial details. On the one hand, when non-unique, an explicit construction and characterization of all self-adjoint extensions of K in Equation (1) is a daunting task. On the other hand, the quantitative evaluation of physical observables in concrete scenarios, such as on black hole spacetimes, requires a deeper and more hands-on knowledge of the two-point function, far beyond the spectral level as per Equation (3). To bypass this conceptual hurdle, it is customary to consider static backgrounds with a high degree of symmetry. Beyond reasons of mathematical simplicity, this class includes many physically relevant backgrounds, such as cosmic strings, black holes and asymptotically AdS spacetimes.

In this paper, we consider the class of n-dimensional static spacetimes M that are isometric either to R × I × Σ^{n−2}_j, for n > 2, or to R × I, for n = 2, where I ⊆ R, and Σ^{n−2}_j are Cauchy-complete, connected, (n−2)-dimensional Riemannian manifolds of constant sectional curvature j. The line element associated to the metric tensor on M reads

ds² = −f(r) dt² + h(r) dr² + r² dΣ^{n−2}_j,   (4)

where f and h are suitable positive functions. Barring some technical aspects that will be specified in the next sections, we emphasize that a large class of spacetimes is characterized by the line element above, including black hole backgrounds ranging from the three-dimensional static BTZ spacetime to the n-dimensional Schwarzschild or Schwarzschild-AdS spacetime.

On top of these manifolds, we consider a real, scalar field Ψ whose dynamics is ruled by the Klein-Gordon equation, which, as before, can be written as per Equation (1). With the construction of a quantum field theoretical framework in mind, we are interested in obtaining distinguished two-point functions that correspond to full-fledged ground states. Although the procedure outlined above is applicable, especially when considering scenarios where boundary conditions need to be imposed, it is common to follow a more computationally oriented approach that exploits the underlying symmetries. In the following, we sketch the steps usually followed in the literature in these scenarios. More details will be given in the next sections of this work.

Consider a solution of the Klein-Gordon equation on (M, g), assuming that it admits a mode expansion

Ψ(t, r, ϕ₁, ..., ϕ_{n−2}) = ∫_R dω ∫ dη_j e^{−iωt} R_{ωη_j}(r) Y_j(ϕ₁, ..., ϕ_{n−2}),

where Y_j(ϕ₁, ..., ϕ_{n−2}) are the eigenfunctions of the Laplace operator on Σ^{n−2}_j whose corresponding eigenvalue is denoted by η_j, while ω ∈ R plays the standard rôle of frequency.
The only unknown function R_{ωη_j} can be shown to satisfy an eigenvalue equation A R_{ωη_j} = λ R_{ωη_j}, where A is a second order differential operator in the radial coordinate r whose domain is the interval I, here taken for definiteness as (a, b). Most notably, A can be written in the form of a, possibly singular, Sturm-Liouville operator, see [8].

Similarly to the rôle played by K in Equation (1), one reads A as a symmetric operator on a space of square-integrable functions over the interval I = (a, b). Following von Neumann's theory of deficiency indices [9], three options are possible: A can admit just one self-adjoint extension, a one-parameter family, or a two-parameter family of self-adjoint extensions. In most applications, the last option does not occur.

At this stage it is necessary to pause the description of the procedure to construct a two-point function and draw the attention to self-adjoint extensions. More precisely, from the viewpoint of the differential equation A R_{ωη_j} = λ R_{ωη_j}, the existence of such extensions can be inferred by looking at the behavior of solutions close to a and b, the endpoints of the interval I. Henceforth, for definiteness, we focus on a. The general theory of Sturm-Liouville operators guarantees that, for any λ ∈ C, there is always a distinguished function u, called the principal solution. In Section 3 we dwell on the technical details of this concept. For now it suffices to say that u tends to zero as r → a⁺ faster than any other solution that is linearly independent from it, and it is square-integrable in any neighborhood of the endpoint a. Notwithstanding, the existence of another solution, called a secondary solution, that is linearly independent from u and square-integrable in any neighborhood of a depends on the differential problem at hand. If such a solution exists at neither endpoint, then there is a unique self-adjoint extension for A, see [8]. More interesting is the scenario for which that is not the case for some value of λ ∈ C, on account of the fact that, if a secondary solution exists, then it is highly non-unique.

The intrinsic arbitrariness of the secondary solution lies at the core of this work. Suppose there exists a secondary solution at a, but only the principal one at b. From the viewpoint of the Sturm-Liouville operator, the choice of a specific secondary solution at a is irrelevant, as its rôle lies only in establishing a one-to-one correspondence between self-adjoint extensions of A and boundary conditions of Robin type, assigned at the endpoint a. These exhaust all possibilities at the level of the ordinary differential equation, and therefore one can read the choice of two different secondary solutions as two different, albeit equivalent, ways to span the same space of boundary conditions and, consequently, of solutions of the underlying ordinary differential equation. In most of the recent literature in quantum field theory on curved spacetimes, a consequential dogma has always been to consider the secondary solution as an innocuous and physically insignificant abstraction. Our main goal is to argue that this is by far not the case since, tracking the problem back to a fully covariant level, making different choices of the secondary solution, while keeping Robin boundary conditions, impacts significantly the analysis of the scalar field Ψ. More precisely, one is able to codify at a covariant level a much larger class of admissible boundary conditions than just a one-parameter family, as the standard analysis might suggest.
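To make the arbitrariness of the secondary solution tangible, here is a minimal worked example of ours, anticipating the half-line problem of Section 6; the specific radial equation −R″ = λR on (0, ∞) is our illustrative assumption.

```latex
% Minimal illustration (ours): -R'' = \lambda R on (0,\infty), \lambda = \omega^2.
% The endpoint x = 0 is limit circle and the principal solution there is
\[
  u(x) = \frac{\sin(\omega x)}{\omega},
\]
% since u(x)/z(x) \to 0 as x \to 0^+ for every linearly independent z.
% Every member of the one-parameter family
\[
  v_c(x) = \cos(\omega x) + c\,\frac{\sin(\omega x)}{\omega}, \qquad c \in \mathbb{R},
\]
% is a legitimate secondary solution, and all share the same Wronskian,
\[
  \{u, v_c\} = u\,v_c' - u'\,v_c = -1 \quad \text{for every } c,
\]
% so normalizing the Wronskian does not single out a secondary solution:
% different values of c span the same set of boundary conditions in different ways.
```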
The physical relevance of choosing a secondary solution is limpid when one constructs the two-point correlation function of the underlying ground state. Let us thus focus once more on the procedure we started sketching above and let us state the subsequent steps, as follows. Since the two-point correlation function ψ₂ must obey the Klein-Gordon equation, it is suitable to use the same mode expansion as for the construction of the solutions Ψ. This, together with the ansatz that only positive frequency contributions are of relevance, identifies a full-fledged ground state. On account of the large isometry group of the background, ψ₂ is completely determined up to a kernel along the radial direction. As explained in Section 5, such a kernel can be constructed using an algorithmic scheme once a self-adjoint extension for the operator A has been chosen in terms of a Robin boundary condition imposed at the level of principal and secondary solutions.

We emphasize that the procedure has been extensively applied on static spacetimes with a timelike boundary in the last years. To mention a few, exhaustive works based on von Neumann deficiency index theory are [11,12], on AdS spacetimes, and [13], on a static BTZ black hole. Beyond these, analyses based on a mode expansion and on Robin boundary conditions, used in the construction of physically sensible two-point functions within quantum field theory on asymptotically AdS spacetimes, can be found in [14,15,16,17,18] and, more recently, also in [19,20,21,22,23].

The scope of this work is to highlight the fundamental rôle played, in this whole procedure, by the secondary solutions of the underlying Sturm-Liouville problem. To strengthen this statement, we consider a simple, yet illustrative example, namely the two-dimensional half-Minkowski spacetime R × R⁺. The advantage is that, in this scenario, we can address the problem using explicit, analytic formulae that make it clear that even minor adjustments to the choice of secondary solution yield, at the fully covariant level, boundary conditions that have a completely different physical interpretation. In a nutshell, we aim to convey that the choice of secondary solution is of physical consequence.

This paper is organized as follows. In Section 2 we show that the Klein-Gordon equation untangles into a Sturm-Liouville problem for the radial part of the Klein-Gordon operator on static spacetimes with maximally symmetric sections. Subsequently, in Section 3 we provide straightforward generalizations of the main results from singular Sturm-Liouville theory that allow us to obtain all self-adjoint representations of the latter. Markedly, singular endpoints give rise to an ambiguity in the very definition of generalized Robin boundary conditions. We clear up this ambiguity in Section 4 by defining generalized (γ, v)-Robin boundary conditions and explaining their connection with the regular case. In Section 4.2 we show how a boundary condition on the radial part translates to a boundary condition on the full solution Ψ.
Insofar as solutions of the Klein-Gordon equation characterize a classical dynamics, it is meaningful to include a discussion on the canonical quantization procedure. Hence, in Section 5 we explain the connection between the imposition of the canonical commutation relations and the spectral resolution of the identity given by the aforementioned Sturm-Liouville problem. We illustrate the main points of this work in a detailed example given in Section 6. Most importantly, this example clarifies in which sense the generalized (γ, v)-Robin boundary conditions imposed on the radial part may render time-dependent boundary conditions on Ψ. Final remarks are given in Section 7.

The Klein-Gordon equation

In this initial section we introduce both the geometric data of the spacetimes we are interested in and the Klein-Gordon equation. In addition, we show that, under the specific assumptions on the background metric, the Klein-Gordon equation can be reduced to a Sturm-Liouville problem.

In this work, for n > 2, Σ^{n−2}_j denotes a Cauchy-complete, connected, (n−2)-dimensional Riemannian manifold of constant sectional curvature j, parametrized by (ϕ₁, ..., ϕ_{n−2}), and whose standard metric has an associated line element dΣ^{n−2}_j(ϕ₁, ..., ϕ_{n−2}). Unless stated otherwise, we shall assume j has been normalized so that j ∈ {−1, 0, +1}. The symbol M refers to an n-dimensional, static spacetime isometric to the warped geometry R × I × Σ^{n−2}_j, where I ⊆ R, while the line element of the underlying metric reads, in global Schwarzschild-like coordinates (t, r, ϕ₁, ..., ϕ_{n−2}),

ds² = −f(r) dt² + h(r) dr² + r² dΣ^{n−2}_j(ϕ₁, ..., ϕ_{n−2}).   (4)

For convenience, we shall call t ∈ R, r ∈ I ⊆ R, and (ϕ₁, ..., ϕ_{n−2}), respectively, the time, the radial and the angular coordinates. Equation (4) is completely specified aside from the two functions f, h. For simplicity, we assume them to be elements of C∞(I; (0, ∞)), although in many instances throughout this work less regularity would suffice. Note that we also allow for the case n = 2, in which M is isometric to R × I endowed with the line element

ds² = −f(r) dt² + h(r) dr².

Remark 1. We consider I to be an open interval, say I = (a, b), which might suggest that we are discarding scenarios of notable interest such as globally hyperbolic manifolds with timelike boundary, e.g., the universal cover of AdS_n. In these cases, the counterpart of I would include one or both endpoints in the domain of the coordinate r. Yet, if one is interested in the analysis of boundary conditions and their effects, it suffices to focus the attention on the interior of the underlying manifold. Therefore, our analyses can be straightforwardly applied to such cases as well.

On M, we consider a free, scalar field with mass m₀ ≥ 0, Ψ : M → R, whose dynamics is ruled by the Klein-Gordon equation

PΨ := (□ − m₀² − ξR)Ψ = 0,   (5)

where ξ ∈ R, while R is the scalar curvature built out of the metric as per Equation (4). In the case in hand, the D'Alembert wave operator, denoted by □, reads

□ = −(1/f) ∂_t² + (1/(√(fh) r^{n−2})) ∂_r ( √(f/h) r^{n−2} ∂_r ) + (1/r²) Δ_{Σ^{n−2}_j},

where Δ_{Σ^{n−2}_j} is the Laplace operator on Σ^{n−2}_j. In the following, we construct the solutions of the Klein-Gordon equation. Although Equation (5) can be recast in the form of Equation (1), we take a route aligned with the procedure described in the Introduction. Assuming that the regularity of Ψ is such that we can work at the level of modes, we consider the ansatz

Ψ(t, r, ϕ₁, ..., ϕ_{n−2}) = ∫_R dω ∫ dη_j e^{−iωt} R_{ωη_j}(r) Y_j(ϕ₁, ..., ϕ_{n−2}),   (6)

where Y_j(ϕ₁, ..., ϕ_{n−2}) are the eigenfunctions of Δ_{Σ^{n−2}_j} with corresponding eigenvalues denoted by η_j. Observe that Δ_{Σ^{n−2}_j} has a continuous spectrum if j ∈ {−1, 0}, while a discrete one if j = 1.
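As a toy check of ours (the circle as compact section and the finite-difference discretization are illustrative assumptions, not the paper's general Σ), the discreteness of the spectrum in the compact case can be seen numerically:

```python
import numpy as np

# Illustrative check: for a compact section such as the circle, the Laplacian
# has discrete spectrum, with eigenfunctions e^{i m phi} and eigenvalues m^2.
# We diagonalize the periodic finite-difference Laplacian and compare the
# lowest eigenvalues of -Laplacian with m^2.

n = 512
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = phi[1] - phi[0]

lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2
lap[0, -1] = lap[-1, 0] = 1 / h**2           # periodic boundary

evals = np.sort(np.linalg.eigvalsh(-lap))    # -Laplacian is positive
print(evals[:7])   # approx [0, 1, 1, 4, 4, 9, 9]: m^2 with multiplicity two
```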
Equation (6) in combination with Equation (5) yields that R_{ωη_j} obeys a second-order, ordinary differential equation, dubbed the radial equation:

(ω²/f) R_{ωη_j} + (1/(√(fh) r^{n−2})) d/dr ( √(f/h) r^{n−2} dR_{ωη_j}/dr ) − ( η_j/r² + m₀² + ξR ) R_{ωη_j} = 0.   (7)

We can rewrite Equation (7) as

A R_{ωη_j} = λ R_{ωη_j},   (8)

where

A := (1/µ) ( −(d/dr)( p (d/dr) ) + q ).   (9)

If λ = ω², the functions p, q and µ are given by

p(r) = √(f/h) r^{n−2},   µ(r) = √(h/f) r^{n−2},   (10a)
q(r) = √(fh) r^{n−2} ( m₀² + ξR + η_j/r² ).   (10b)

In the remaining cases the corresponding expressions can be derived directly from Equation (7), but we omit listing them as they are a rather straightforward modification of Equations (10a) and (10b). To conclude this section, we observe that Equation (7) identifies a, possibly singular, Sturm-Liouville problem, following the standard nomenclature of ordinary differential equations, see, e.g., [8].

Self-adjoint extensions

Envisioning the construction of ground states for the Klein-Gordon field Ψ, it is essential to obtain the advanced and retarded fundamental solutions associated to the operator P as in Equation (5). To this end we bear in mind a procedure that has been considered in several examples in the literature [16,17,18,19,20,21], mainly when the underlying spacetime possesses a conformal, timelike boundary. The starting point is A, as per Equation (8), which shall be read as an operator on the Hilbert space L²(I, µ(r)dr).

The operator A, and consequently also L, is manifestly symmetric when taken with the dense domain C∞₀(I). Herein, we scrutinize whether L admits self-adjoint extensions and, if so, how many of them. This is a mathematical question that can be answered by combining tools of Sturm-Liouville theory with the theory of unbounded operators on Hilbert spaces, see, e.g., [9]. Accordingly, in the following we recall the main results known in the literature that are of relevance to our investigation, as well as necessary to make this work self-contained. All definitions and lemmas introduced here culminate in Theorem 3.5, which constitutes the resolution to the question in hand.

First, let us pose the question more precisely. Consider the Sturm-Liouville problem, as in Equation (8), where as of now we omit the subscripts ω, η_j from the radial function for decluttering:

L y := (1/µ) ( −(p y′)′ + q y ) = λ y.   (11)

In addition, letting L¹_loc refer to locally integrable functions, we assume that 1/p, q, µ ∈ L¹_loc(I), µ > 0, and p > 0 almost everywhere on I. As alluded to in the previous paragraphs, L as per Equation (11) identifies either a minimal or a maximal operator, respectively indicated by L_min and L_max, with corresponding domains

D_max(L_max) := { y ∈ L²(I, µ(r)dr) : y, p y′ ∈ AC_loc(I), L y ∈ L²(I, µ(r)dr) },
D_min(L_min) := closure of { y ∈ D_max(L_max) : supp(y) compact in I },

where the closure is taken with respect to the graph topology, and AC_loc(I) denotes the set of functions that are absolutely continuous on all compact intervals of I. Specifically, our quest is to find self-adjoint extensions L_S.A. of L_min, whose domain shall be denoted by D_S.A.(L_S.A.). If their spectrum satisfies σ(L_S.A.) ⊆ [0, ∞), then we say L_S.A. is a positive, self-adjoint extension of L_min.

In order to address the quest stated above, we introduce some additional tools, tailoring the analysis of [8] to the case of interest, i.e., Equation (11) together with the assumptions of the previous sections.

For y, z ∈ AC_loc(I) we denote the Lagrange sesquilinear form and the Wronskian, respectively, by

[y, z] := y p z̄′ − p y′ z̄,   {y, z} := p ( y z′ − y′ z ).

Definition 3.1. An endpoint e of I is called: i) regular if 1/p, q and µ are integrable in a neighborhood of e; ii) singular if it is not regular; iii) limit circle if all solutions of Equation (11) lie in L²(I_c, µ(r)dr), ∀c ∈ I; iv) limit point if it is not limit circle.

The following definition is especially relevant for our analysis since it differentiates among the solutions of Equation (11) depending on their behaviour close to an endpoint.

Definition 3.2. Let y be a non-vanishing solution of Equation (11) in I_c, ∀c ∈ I.
Then we say y is a: i) principal solution at e if, for any other solution z of Equation (11) that is linearly independent from y,

lim_{x→e} y(x)/z(x) = 0;

ii) secondary (or non-principal) solution at e if it is not a principal solution.

Definitions 3.1 and 3.2 relate by the fact that, at a limit point, only the principal solution belongs to L²(I_c, µ(r)dr), ∀c ∈ I. Manifestly, the classification given by Definition 3.2 is of relevance only when at least one of the endpoints is a limit circle. Thus, for the remainder of this section, we assume that on I = (a, b), a is a limit circle while b is a limit point. In addition, we denote by u and v, respectively, the principal and secondary solutions at the limit circle endpoint and we set [u, v](c) = Λ ∈ C for all c ∈ I. We observe that, although a and b might be singular endpoints, for any y, z ∈ D_max(L_max) the following limits exist, so that it makes sense to define

[y, z](a) := lim_{r→a⁺} [y, z](r),   [y, z](b) := lim_{r→b⁻} [y, z](r).

Next, we report three results concerning the interplay between the Lagrange sesquilinear form and Equation (11). Their proofs, omitted here, are a direct adaptation to the case in hand of the corresponding lemmas of [8, Ch. 10].

Lemma 3.3. Let L be as per Equation (11). Then for any λ ∈ R and α, β ∈ C, there exists f ∈ D_max(L_max) such that

[f, u](a) = α  and  [f, v](a) = β.   (15)

Moreover, if a is a regular endpoint, then there exists g ∈ D_max(L_max) such that

g(a) = α  and  (p g′)(a) = β.   (16)

Remark 2. It is interesting to notice that, if Λ = 1, Equation (16) follows directly from Equation (15) by setting g = f. This is not the case if Λ ≠ 1, and this plays a significant part in the discussion of generalized versus regular boundary conditions in the next sections.

The following result concerns properties of the self-adjoint extensions of L_min, whereas their existence is a direct consequence of the Von Neumann lemma [9, Thm. 5.43], since the differential operator L in Equation (10a) has real coefficients.

Lemma 3.4. Let L_S.A. be a self-adjoint extension of L_min. Then: (1) there exists g ∈ D_max(L_max), with g ∉ D_min(L_min), such that [g, g](a) = 0; (2) the domain of L_S.A. reads D_S.A.(L_S.A.) = { f ∈ D_max(L_max) : [f, g](a) = 0 }.

For its proof we refer to [8, Th. 10.4.1], and references therein. Conversely, for any g ∈ D_max(L_max) abiding by the conditions in item (1), there exists a self-adjoint extension of L_min whose domain D_S.A.(L_S.A.) is defined as per item (2).

Our quest reaches a finale with the following paramount result, which is specially tailored to befit singular Sturm-Liouville problems. In particular, it is instrumental in relating the existence of multiple self-adjoint extensions to the choice of specific boundary conditions. We include a detailed proof due to its relevance and in light of the fact that it is not exactly the well-known result as per [8, Thm. 10.4.5], but rather a slight generalization of it. Namely, the principal and secondary solutions are not necessarily normalized to [u, v] = 1.

Theorem 3.5. Let L be as in Equation (11). As per Definition 3.2, let u and v be principal and secondary real-valued solutions at r = a such that [u, v] = Λ. Then, for any (B₁, B₂) ∈ R² \ {(0, 0)},

D_S.A.(L_S.A.) = { y ∈ D_max(L_max) : B₁ [y, u](a) + B₂ [y, v](a) = 0 }   (17)

identifies the domain of a self-adjoint extension of L_min. Moreover, all self-adjoint extensions of L_min are of this form.

Proof. We divide the analysis in two separate parts: proof of the first statement, and proof of the "moreover" statement.

Let (B₁, B₂) ∈ R² \ {(0, 0)}. To prove that Equation (17) identifies the domain of a self-adjoint extension of L_min, we shall use Lemma 3.4. In other words, we set g = (B₁u + B₂v)/Λ and, using Lemma 3.1, we can make sure that g ∈ D_max(L_max) \ D_min(L_min) and [g, g](a) = 0. Observe that item (2) of Lemma 3.4 is automatically fulfilled by Equation (17).

Consider now L_S.A.,
a self-adjoint extension of L_min. By Lemma 3.2, for y, g ∈ D_max(L_max) it holds

[y, g](a) = ( C₂ [y, u](a) − C₁ [y, v](a) ) / Λ,

where C₁ := [g, u](a) and C₂ := [g, v](a). On account of Lemma 3.4 there exists g ∉ D_min(L_min) such that [g, g](a) = 0. By setting y = g in the equation displayed above and assuming both u and v to be real-valued, it descends

[g, g](a) = ( C₂ C̄₁ − C₁ C̄₂ ) / Λ = 0,

so that the pair (C₁, C₂) can be chosen, up to an overall phase, to be real. In addition, still Lemma 3.4 guarantees that f ∈ D_S.A.(L_S.A.) if and only if [f, g](a) = 0. This reduces to Equation (17), setting therein B₁ = C₂ and B₂ = −C₁.

Remark 3. Note that the proof of Theorem 3.5 assuming [u, v] = Λ is analogous to that of [8, Thm. 10.4.5], which assumes [u, v] = 1. It is worth mentioning that this normalization does not select a secondary solution. It is easy to see that this is the case if we take into account that [u, u] = 0 for real-valued u. In turn, the reality of both u and v is an essential aspect of the validity of the proof. In addition, a consequence of this restriction is that Equation (17) can be equivalently written in terms of the Wronskians instead of the Lagrange sesquilinear form, i.e.,

D_S.A.(L_S.A.) = { y ∈ D_max(L_max) : B₁ {y, u}(a) + B₂ {y, v}(a) = 0 }.

Generalized (γ, v)-Robin boundary conditions

In this section we take a closer look at the boundary condition stated in Equation (17) and we reiterate two important facts:
• although Equation (17) depends on the choice of the pair (B₁, B₂) ≠ (0, 0), it is always possible to rescale so as to fix one of the parameters to 1, i.e., we can consider only pairs of the form (1, B₂/B₁);
• the characterization of D_S.A.(L_S.A.) also depends on the chosen secondary solution.

At the mere level of the Sturm-Liouville problem under consideration, the freedom in the choice of secondary solution is inconsequential if one is interested in characterizing all self-adjoint extensions of the corresponding operator. Notwithstanding, it plays a distinguished, physically relevant rôle when we turn back to analyzing the dynamics of the Klein-Gordon field ruled by Equation (5). The following definitions aim at highlighting this freedom in the overall process and the difference that occurs when considering a singular rather than a regular Sturm-Liouville problem.

Definition 4.1. (Regular γ-Robin boundary condition) Let L be as per Equation (11). Given any self-adjoint realization L_S.A., we say that y ∈ D_S.A.(L_S.A.) satisfies a regular γ-Robin boundary condition at a if

lim_{r→a} { cos(γ) y + sin(γ) y′ } = 0,  for γ ∈ [0, π),

where the prime indicates the derivative along the r-direction. In particular, we say that y abides by a (i) regular Dirichlet boundary condition at a if it satisfies a regular 0-Robin boundary condition: lim_{r→a} y = 0 and lim_{r→a} y′ = c ∈ R; (ii) regular Neumann boundary condition at a if it satisfies a regular (π/2)-Robin boundary condition: lim_{r→a} y′ = 0 and lim_{r→a} y = c ∈ R.

Definition 4.2. (Generalized (γ, v)-Robin boundary condition) Let L be as per Equation (11), and let u be the principal solution at a and v any secondary solution at a, real-valued and such that {u, v} = Λ. Given any self-adjoint realization L_S.A., we say that y ∈ D_S.A.(L_S.A.) satisfies a generalized (γ, v)-Robin boundary condition at a if

lim_{r→a} { cos(γ) {y, u} + sin(γ) {y, v} } = 0,  for γ ∈ [0, π).   (18)

In particular, we say that y abides by a (i) generalized Dirichlet boundary condition at a if it satisfies a generalized (0, v)-Robin boundary condition: lim_{r→a} {y, u} = 0; (ii) generalized Neumann boundary condition at a if it satisfies a generalized (π/2, v)-Robin boundary condition: lim_{r→a} {y, v} = 0.

It is worth stressing that Definition 4.1 is applicable only to regular Sturm-Liouville problems, since it implicitly requires differentiability of the solution at the endpoint a. In addition, consistently with what one could a priori expect, the "Dirichlet boundary condition" is actually independent of the choice of the secondary solution. In the following, we elucidate in more detail the connection between the two definitions above.

Reduction to the regular case

Consider the setting of Theorem 3.5, and assume that r = a is a regular endpoint as per Definition 3.1. For real-valued u and v such that {u, v} = Λ, a solution y = cos(γ)u + sin(γ)v ∈ L²(I, µ(r)dr) satisfies a generalized (γ, v)-Robin boundary condition, as per Equation (18). That is, at a regular endpoint, for a given u there is a choice of secondary solution v for which the generalized (γ, v)-Robin boundary condition, as per Definition 4.2, yields a regular γ-Robin boundary condition, as per Definition 4.1. Conversely, if we do not choose it in such a way and if γ ≠ 0, then a generalized boundary condition does not necessarily reduce to a regular one.
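A small symbolic check from us makes the dependence on the secondary solution explicit; it borrows the half-line radial problem of Section 6 as an assumed example, with u = sin(ωx)/ω and the family v_c = cos(ωx) + c sin(ωx)/ω:

```python
import sympy as sp

# Sketch (ours): on (0, oo) with -y'' = omega^2 y, take the principal solution
# u = sin(omega*x)/omega and the one-parameter family of secondary solutions
# v_c = cos(omega*x) + c*sin(omega*x)/omega. For y = cos(g) u + sin(g) v_c,
# read off the regular Robin data (y(0), y'(0)) and see how the effective
# Robin parameter depends on the choice of c, even at fixed gamma.

x, w, g, c = sp.symbols('x omega gamma c', real=True)
u = sp.sin(w * x) / w
v = sp.cos(w * x) + c * sp.sin(w * x) / w
y = sp.cos(g) * u + sp.sin(g) * v

y0  = sp.limit(y, x, 0, '+')                  # = sin(gamma)
dy0 = sp.limit(sp.diff(y, x), x, 0, '+')      # = cos(gamma) + c*sin(gamma)

# Effective regular Robin ratio y'(0)/y(0): it depends on c, i.e. on the
# chosen secondary solution, although gamma is held fixed.
print(sp.simplify(dy0 / y0))   # -> (cos(gamma) + c*sin(gamma))/sin(gamma)
```

For this family the effective parameter stays frequency-independent; an ω-dependent choice of v would likewise render it frequency-dependent, which appears to be the mechanism behind the κ = 2, 3 cases discussed in Section 6.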
Generalized (γ, v)-Robin boundary conditions and the Klein-Gordon equation

In view of the foregoing discussion, it is natural to wonder what the consequence of choosing a specific generalized (γ, v)-Robin boundary condition is at the level of the fully covariant Klein-Gordon equation. This question becomes especially relevant when we are working on a globally hyperbolic spacetime with timelike boundary [2], such as the Poincaré patch of an n-dimensional anti-de Sitter spacetime (PAdS_n). In this case, it is known that the dynamics is completely specified when, and only when, initial data are supplemented with a boundary condition assigned at conformal infinity. Then, one would slavishly follow the analysis outlined in the previous sections. For definiteness, let us assume we are working under conditions for which Definition 4.2 is meaningful. Explicitly, take Ψ to be a solution of the Klein-Gordon equation (5) written as the mode expansion given in Equation (6). In addition, let u and v be real-valued principal and secondary solutions at an endpoint a for the radial equation, such that the radial mode satisfies a generalized (γ, v)-Robin boundary condition. One can infer that the latter translates to a boundary condition on Ψ, Equation (22), obtained by carrying the limit of Definition 4.2 under the mode integrals; there, ∫ dη_j is the integral over the spectrum of the Laplace operator on Σ^{n−2}_j. Details regarding the latter are left to the reader since they play no rôle in our discussion, yet we observe that if j = 1 this integral reduces to a sum over (hyper-)spherical harmonics, whereas if j ∈ {−1, 0} it is nothing but an ordinary Lebesgue integral. Similarly, ∫_{σ(A)} dλ is formally the integral over the spectrum of the self-adjoint extension A with respect to the associated spectral measure. For all practical purposes, in many instances σ(A) = (0, ∞) and the integral reduces to a standard Lebesgue integration on the half real line.

With the discussion of Section 4 in mind, we see that Equation (22) reduces to a regular Robin boundary condition only under special conditions. In addition, we highlight that generalized (γ, v)-Robin boundary conditions on R(r) translate, at the level of the Klein-Gordon equation, to a wide variety of boundary conditions, including time-dependent ones. Although, at this stage, this statement might be elusive and hidden in the meanders of Equation (22), it is manifest in the concrete example thoroughly discussed in Section 6.2.

Remark 4. Each generalized (γ, v)-Robin boundary condition on the radial mode R(r) yields a different Green function G(r, r′) for the radial equation and, consequently, a different two-point function, that is, a different ground state.

The quantum dynamics

The analysis of the classical solutions to the Klein-Gordon equation is just the starting point to obtain a full-fledged, covariant quantization framework. In this paper we shall not give all the details of the latter, see [16,17,18,19,20,21]; rather, we focus on the construction of ground states admitting generalized (γ, v)-Robin boundary conditions.

Definition 5.1. Let M and P be as defined in Section 2.
A two-point function of a quantum state is a bidistribution ψ₂ ∈ D′(M × M) such that:

(i) it solves the Klein-Gordon equation in both entries:

(P ⊗ I) ψ₂ = (I ⊗ P) ψ₂ = 0;   (23a)

(ii) it satisfies the canonical commutation relations:

ψ₂(x, x′) − ψ₂(x′, x) = i E(x, x′),   (23b)

where E is the advanced minus retarded fundamental solution associated to P;

(iii) it is positive:

ψ₂(f, f) ≥ 0  ∀ f ∈ C∞₀(M).   (23c)

In turn, the bi-distribution E ∈ D′(M × M) is a solution of the initial value problem

(P ⊗ I) E = (I ⊗ P) E = 0,   E|_{Σ_t × Σ_t} = 0,   (∇_n ⊗ I) E|_{Σ_t × Σ_t} = δ_Σ,

where Σ_t is any constant-time hypersurface, while δ_Σ is the Dirac delta thereon. It is important to stress that E is a priori not unique, depending both on the underlying geometry and on the parameters ξ and m₀ of the Klein-Gordon equation, see Equation (5). The details for its construction using tools of Sturm-Liouville and spectral theories can be found in [10] and references therein.

As detailed in [10, Ch. 2] and hinted at in the Introduction, among the plethora of two-point functions on a spacetime admitting Schwarzschild-like coordinates, as per Equation (4), one can always distinguish the ones that characterize ground states. They are of the form

ψ₂(t, r, θ; t′, r′, θ′) = ∫_R dω Θ(ω) e^{−iω(t−t′)} ∫ dη_j Y_j(θ) Ȳ_j(θ′) ψ₂(r, r′),   (24)

where Θ denotes the Heaviside step function and, for compactness, we have introduced the notation θ = (ϕ₁, ..., ϕ_{n−2}). Using Equation (23a) in combination with the canonical commutation relations in Definition 5.1 and with the completeness of the eigenfunctions of the Laplace operator Δ_{Σ^{n−2}_j}, it turns out that the unknown ψ₂(r, r′) can be obtained by the spectral resolution of the Green function G(r, r′) associated to Equation (7), see [24, Ch. 7]. Namely, promoting λ to a complex variable, one makes use of the chain of identities

δ(r − r′)/µ(r′) = −(1/2πi) ∮_{C∞} dλ G(r, r′; λ),   (25)

where C∞ is an infinitely large circle in the λ-plane with a counter-clockwise orientation. In the next section we give a neat and tangible example that unveils how the choice of different secondary solutions, even with the same value of γ, yields well-defined but inequivalent ground states. It corroborates our statement that, in a system where states can be constructed following a mode decomposition, the choice of a secondary solution for the radial equation remains free even after imposing all the physical constraints necessary to guarantee a sensible framework.

An illustrative example: the wave equation on R × R⁺

In this section, we outline a simple yet most illustrative example aimed at highlighting the relevance of the generalized (γ, v)-Robin boundary conditions: a massless, real, scalar field on the 2-dimensional half-Minkowski spacetime R × R⁺. Although the endpoints to be considered in the corresponding Sturm-Liouville problem are regular, since the underlying manifold is globally hyperbolic with a timelike boundary, it is conceivable to impose thereon generalized (γ, v)-Robin boundary conditions. In this case the Klein-Gordon equation reduces to the wave equation, and the associated radial equation, −R″(x) = ω² R(x) on (0, ∞), admits the two linearly independent solutions y₁(x) = e^{iωx} and y₂(x) = e^{−iωx}.

As functions of x, and considering ω as a possibly complex parameter rather than a Fourier variable conjugated to the time t, it follows that y₁(x) and y₂(x) are not square-integrable at x = ∞ unless Im(ω) > 0 and Im(ω) < 0, respectively. Therefore, as per Definition 3.1, this endpoint is a limit point and the most general square-integrable solution therein can be written concisely as

ψ_∞(x) := e^{isωx},  s := sign(Im(ω)).   (26)

Both y₁(x) and y₂(x) are square-integrable in a neighborhood of x = 0. Still according to Definition 3.1, x = 0 is a limit circle.
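The limit-point claim at infinity can be checked with a one-line computation; the sketch below is ours, with a standing for Im(ω):

```python
import sympy as sp

# Sketch (ours): at the endpoint x = oo the problem is limit point, since
# y1(x) = exp(i*omega*x) has |y1(x)|^2 = exp(-2*Im(omega)*x), integrable on
# (c, oo) only when Im(omega) > 0, while y2 = exp(-i*omega*x) is integrable
# only when Im(omega) < 0: no omega makes both square-integrable at infinity.

x = sp.symbols('x', positive=True)
a = sp.symbols('a', real=True)                      # a stands for Im(omega)
print(sp.integrate(sp.exp(-2 * a * x), (x, 1, sp.oo)))
# Piecewise result: exp(-2*a)/(2*a) when a > 0, divergent otherwise
```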
Generalized versus regular Robin boundary conditions

Since the limits of y₁, y₂ and of their derivatives exist as x → 0, we can cast the generalized (γ, v_κ)-Robin boundary condition above as a regular Robin boundary condition with an effective parameter β_κ, which highlights the difference between a generalized and a regular Robin boundary condition. Markedly, in the regular scenario we can choose a secondary solution based on the property of obtaining a frequency-independent parameter β_κ; in this case, β₁. Yet, frequency-dependent boundary conditions are also of physical relevance and hence there is no a priori reason to discard them.

Time-dependence of the boundary conditions

Analogously to the discussion in Section 4.2, given a radial solution of the wave equation that satisfies a generalized (γ, v_κ)-Robin boundary condition, it is legitimate to wonder which boundary condition is satisfied by the corresponding solution of the wave equation on R × R⁺. For ψ_κ given by Equation (27), let us consider a general solution Ψ_κ(t, x) built as a superposition of modes. It can be written in terms of its temporal Fourier transform where, with a slight abuse of notation, we have denoted the Fourier transform with a hat. We can then read off which boundary condition is satisfied by Ψ_κ(t, x) at x = 0 for each κ. Formally, working at the level of distributions, we look for operators c_κ and ζ_κ under which the boundary condition closes on Ψ_κ at x = 0. Accordingly, the solution Ψ_κ(t, x) satisfies, for κ ∈ {2, 3}, a boundary condition involving such operators. That is, the solutions Ψ₂(t, x) and Ψ₃(t, x) do not satisfy a regular, time-independent, γ-Robin boundary condition at the boundary, as Ψ₁(t, x) does.

The Green functions

To construct the ground state for a Klein-Gordon field admitting generalized (γ, v_κ)-Robin boundary conditions on the two-dimensional half-Minkowski spacetime, we follow the rationale outlined in Section 5. On account of Equation (25), the only unknown is ψ₂(x, x′), which in turn can be constructed from the Green function G(x, x′) of the radial equation. Note that, in the case in hand, the rôle of the radial coordinate r is played by the Cartesian coordinate x.

Let the Wronskian between the principal and the secondary solutions be given by {u, v_κ} =: Λ_κ, and let ψ_κ = cos(γ)u + sin(γ)v_κ be as per Equation (27). Then the Wronskian between the two general square-integrable solutions given by Equation (26) and Equation (27), W_κ := {ψ_κ, ψ_∞}, determines the Green functions G_κ(x, x′) for the different choices of secondary solutions. The latter satisfy the radial equation with a Dirac δ-source at x = x′ and take the form

G_κ(x, x′) = ψ_κ(x_<) ψ_∞(x_>)/W_κ,

where (x_<, x_>) = (x, x′) if x < x′ and (x_<, x_>) = (x′, x) otherwise. Observe that the dependence on ω is implicit in the solutions ψ_κ and ψ_∞. Moreover, for γ ∈ R, it holds true that G_κ(x, x′) = G_κ(x′, x). For notational convenience, but with no loss of generality, let us consider x < x′ and γ ∈ [0, π/2] in the remainder of this section. Once more we fix s := sign(Im(ω)), with Im(ω) ≠ 0, so that the Green functions can be written out explicitly for each κ.
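The structure G_κ(x, x′) = ψ_κ(x_<)ψ_∞(x_>)/W_κ can be verified symbolically; the following sketch is ours and assumes the regular Robin combination ψ = A sin(ωx)/ω + B cos(ωx) with A = cos(γ), B = sin(γ), matching the κ = 1 flavour rather than the paper's full list of secondary solutions:

```python
import sympy as sp

# Sketch (ours): assemble G(x, x') = psi(x_<) psi_inf(x_>) / W on (0, oo) and
# check two structural facts: the Wronskian W is constant in x, and dG/dx
# jumps by exactly 1 across the diagonal, which is the delta-source
# normalization up to the sign convention of the radial operator.

x, xp, A, B = sp.symbols('x x_p A B', positive=True)
w = sp.symbols('omega')   # complex; psi_inf is L^2 at infinity for Im(omega) > 0

psi     = A * sp.sin(w * x) / w + B * sp.cos(w * x)   # Robin combination at 0
psi_inf = sp.exp(sp.I * w * x)                        # decays at infinity

W = sp.simplify((psi * sp.diff(psi_inf, x)
                 - sp.diff(psi, x) * psi_inf).rewrite(sp.exp))
print(W)   # I*B*omega - A : independent of x

G_upper = psi.subs(x, xp) * psi_inf / W               # branch x > x'
G_lower = psi * psi_inf.subs(x, xp) / W               # branch x < x'

jump = sp.simplify((sp.diff(G_upper, x).subs(x, xp)
                    - sp.diff(G_lower, x).subs(x, xp)).rewrite(sp.exp))
print(jump)   # -> 1
```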
The resolution of the identity

For all three values of κ, the suitable contour to be considered for the integration of G_κ is the "pac-man" in the ω²-complex plane, which is tantamount to integrating 2ωG_κ in one semi-disk in the upper or lower ω-complex plane, as illustrated in Figure 1. Note that, although G₁ and G₃ diverge in the limit ω → 0, ωG_κ has no poles in the ω-complex plane. In addition, since Jordan's lemma holds true, it follows that the arcs at infinity give no contribution and the contour integral reduces to the spectral integral in Equation (31). Next, let us outline the computation of the integral given in Equation (31), to highlight the fact that all three possible choices of a secondary solution do yield the resolution of the identity, as per Equation (25). For G₁, this computation is standard, see [24, Ex. 7.3.2, p. 454]. Yet, we do provide a step-by-step solution for all κ in a supplementary notebook available online [25].

The two-point functions

Directly from the spectral resolution, as stated in Section 5, we obtain the spatial part of the two-point function in each case. With the three choices of secondary solutions we obtain three one-parameter families of two-point functions for three different ground states in the 2-dimensional half-Minkowski spacetime, Ψ_κ(t, x, t′, x′), as per Equation (33), for different values of κ. One may ponder on the significance of such a discrepancy, since it could be the case that different integrands yield equal integrals (as happens for the resolution of the identity, for example). Yet, as it turns out, the term ψ_κ(x, x) has a physical interpretation: it characterizes the probability of de-excitation of a two-level system with energy gap Ω, at a fixed spatial position x and interacting for an infinite time with the quantum field in the ground state specified by Ψ_κ(t, x, t′, x′). Such a physical observable has been extensively used in recent years to probe a wide range of characteristics of the underlying quantum field theoretical framework, see [10,26] and the references therein.

Conclusion

In this work we have highlighted the existence of a hidden freedom in the standard procedure of constructing ground states for a real scalar field in a large class of static backgrounds. In particular, we have observed that, when working at the level of the so-called radial equation, boundary conditions of Robin type can be imposed by using an arbitrary secondary solution. While this choice appears to be moot at the level of the underlying ordinary differential equation, it bears notable consequences at a fully covariant level. Interestingly, we have argued and shown, via the concrete example of the two-dimensional half-Minkowski spacetime, that, by exploring such freedom, one can account for a large class of boundary conditions that are structurally quite different from regular Robin boundary conditions, possibly including time-dependent ones.

From a structural viewpoint, the choice of secondary solution does not alter the effectiveness of the methods used until now for the construction of ground states on static spacetimes. Nevertheless, it does open the possibility of studying a much larger class of boundary conditions and of investigating the physical consequences of the various different choices. To conclude, we emphasize that a rationale similar to the one considered in this paper can be adopted also in the investigation of boundary conditions of Wentzell type, see, e.g., [19,27]. Yet, a full-fledged analysis of this scenario would require a lengthy discussion that is worth leaving to a future work.
2022-07-19T06:42:31.770Z
2022-07-18T00:00:00.000
{ "year": 2022, "sha1": "465eec7ec2b8feb5734ce58fecd0724da7c670a9", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10714-023-03099-3.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "465eec7ec2b8feb5734ce58fecd0724da7c670a9", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
236535344
pes2o/s2orc
v3-fos-license
The Use of Rock Shelters During the Early Neolithic in the North of Alicante (Spain). The Site of Penya Roja de Catamarruc (Alicante, Spain) as a Case Study

The first Neolithic communities settled in the East of the Iberian Peninsula developed a complex strategy of land occupation. These strategies evolved as their social, demographic, and economic bases were transformed. In this paper, we focus on the analysis of archaeological sites located under rock shelters, which were recurrently occupied throughout the Early Neolithic. To deepen this analysis, we reviewed the archaeological record of Penya Roja de Catamarruc (Planes, Alicante), as well as other sites of similar characteristics. This information, combined with different spatial analyses (prominence, visibility, and capacity of use of the soils), allowed us to define a series of patterns of occupation and exploitation of the territory by the first Neolithic communities. This study highlights the importance of the forest as a resource related not only to hunting and gathering, as traditionally seen, but also to shepherding.

Introduction

Human mobility across territories has been constant throughout the species' history, allowing humans to occupy all inhabitable regions of the planet. Often, the occupation of new spaces involved adapting to new ecological conditions. This was enabled by the enormous capacity for adaptation of humans, who can transform and adapt their ways of life to continuously changing environments. One of the best-analysed transformation processes is that associated with the Neolithic expansion beyond the early adoption foci. Such expansion, and the subsequent consolidation of the Neolithic way of life on the European continent, shows processes of adjustment and transformation of the economic bases and, consequently, of territorial behaviour (Guilaine, 2000), social relations, and even the ideological universe.

Neolithic communities of the Mediterranean area, which include the groups of pioneers who settled in the Valencian area around 5600 BC from different regions of the central and western Mediterranean (Bernabeu, Molina, Esquembre, Ramón, & Boronat, 2009; García Atiénzar, 2010), were characterised by an economic system based on cereal agriculture and shepherding. Excavations carried out in the region, as well as the wide range of radiocarbon dates, have shown the existence of a large network of settlements from the second half of the 6th millennium BC (García Puchol et al., 2018). Although this was a phenomenon of coastal expansion, the archaeological record shows that there was a rapid advance towards the interior. Therefore, within a few generations, evidence of the farming way of life can be identified both on the coast and in inland valleys, some of which are over 50 km from the sea (Figure 1).

The first villages were located in plain areas, near riverbeds, exploiting well-irrigated and fertile soils. Evidence found at sites like Mas d'Is or Benàmer shows the existence of villages made up of scattered domestic units, sometimes linked to huts built with ephemeral materials, combustion structures, and areas of activity (Jover, Pastor, & Torregrosa, 2019). One example is Mas d'Is, which exhibited large ditches that served a range of potential functions (Bernabeu, Orozco, Díez, Gómez, & Molina, 2003). At the same time, mountainous cavities surrounding these valleys were also occupied.
Their morphological features, as well as their archaeological record, have raised the possibility that caves may have been permanently inhabited, as seen in Cova de l'Or, although the possibility of other types of use remains plausible (Martí, 2008). Other caves, located both on the coast and in the Serpis valley, as well as in the natural corridors that connect both areas, might have acted as temporary refuges associated with different activities such as shepherding, mollusc gathering, hunting, etc. (Bernabeu & Molina, 2009a; García Atiénzar, 2009; García Puchol & Aura, 2006). In some instances, caves were used as burial sites (Bernabeu, Molina, & García, 2001; García Borja, Salazar-García, Aura, Cortell, & Velasco, 2016). Such use lasted over time, generating a palimpsest that makes it difficult to assess their ritual importance during the Early Neolithic.

Sites associated with the farming way of life are located near the river Serpis and in the valleys adjacent to the seacoast. Nevertheless, between these two areas there is a rugged landscape dominated by mountain ranges and narrow valleys, forming territories unsuitable for agriculture. It is precisely in this region that the greatest amount of evidence related to the Early Neolithic has been documented (García Atiénzar, 2009), mostly related to the use of caves or rock shelters. The few excavations carried out in these contexts show that these are specialised occupations, generally related to a combustion structure around which different activities were performed. One of the best-known cases is Cova d'en Pardo, characterised by an area of activity associated with hunting and dated to around 5600 BC (Soler, Gómez, García, & Roca de Togores, 2011; Soler et al., 2013). More recently, similar examples have been reported, such as Cova del Randero (Soler, Gómez, & Roca de Togores, 2014), dated towards the end of the 6th millennium BC.

In this paper, we will focus on sites located in rock shelters. Despite the considerable amount of evidence, the available information is limited by the poor preservation of the stratigraphic series, due to the sites' morphological characteristics and their exposure to atmospheric agents. Furthermore, these shelters have been occupied from the Early Holocene to recent times for livestock and hunting purposes, thus altering the archaeological deposits. These settlements are located in corridors (Vall de Gallinera, Vall d'Alcalà, Vall de Seta, etc.) that connect the upper Serpis valley with the coastal plain and the inland valleys. One of the best-known examples is Abric de la Falguera, located at the source of the river Serpis (García Puchol & Aura, 2006). Evidence from phase VI shows that it was sporadically but repeatedly used, in association with the exploitation of the surrounding landscape, especially for sheep grazing. Microsedimentological analyses confirmed its pastoral use since the Early Neolithic, although this was unlikely to be the only activity developed. The observed structures (pits and fireplaces), the abundant and varied archaeological material, as well as the carpological remains, relate it to the storage of food (García Puchol & Aura, 2006). Other shelters have also been excavated, such as Abric del Barranc de les Calderes and Coves d'Esteve (Doménech, 1990), Abric del Tossal de la Roca (Cacho et al., 1995), or Coves de Santa Maira (Aura et al., 2000; Verdasco, 2001), but these studies focused on Palaeolithic or Mesolithic occupations.
In the next lines, we will consider Penya Roja de Catamarruc, a site whose materials refer exclusively to Neolithic occupations (Asquerino, 1972), as a case study.

2 Penya Roja de Catamarruc: Location, Archaeological Actions, and Material Culture

The Penya Roja de Catamarruc shelter is situated on the northern slope of the Les Calderes ravine, in the northern foothills of the Cantacuc mountain range. This is a strategic location with easy access to the Encantà ravine, which connects with the middle valley of the river Serpis, and also with the valleys of Gallinera and Alcalà, natural corridors from the interior towards the coast. The site is located under a rocky wall about 50 m high and 150 m long, forming a natural shelter facing north. Today, the shelter is delimited by an artificial wall built by contemporary shepherds.

Several excavations were carried out at the site, the first one in 1970 by E. Faus. However, the only systematic intervention was conducted under the direction of Mª D. Asquerino during the month of June 1971 (Asquerino, 1972). The task consisted of two 1 sq m soundings and a third one of 1.5 m × 1 m (Figure 2), excavated in artificial layers of 10 cm. Three stratigraphic levels were identified. The first, about 30 cm thick, offered the largest number of objects; the second, about 15 cm thick, was characterised by an abundant presence of ashes and small charcoal fragments; and the last, also about 15 cm thick, was sterile. Later, members of the Centre d'Estudis Contestans recovered archaeological materials, some of which match those documented by Asquerino.

The ceramic set, which includes the materials conserved in the Archaeological Museum of Alcoy, from Asquerino's excavations, and those conserved in the Centre d'Estudis Contestans, consists of 141 fragments, of which 31 (22%) provide morphological and/or decorative information, while 110 (78%) are uninformative. The method of analysis we have used is based on the initial proposal of Bernabeu (1989), which has been applied in contemporary contexts such as Cova de les Cendres (Bernabeu & Molina, 2009b). The study of ceramic fragments and the reconstruction of vessels identified a total of 14 vessels from 45 fragments, by means of the minimum number of individuals method and considering decorative, morphological, and technological characteristics (Figure 3). The profile analysis shows a large-sized globular container (vessel 1), an ellipsoidal pot (vessel 2), and four medium and small-sized semi-spherical vessels that resemble a plate (vessel 11) and bowls (vessels 7, 12 and 14).

A total of 26 ceramic fragments are decorated, mostly showing cardial impressions (11 fragments of vessels 1, 4, 5, 6, and 10). The decorations are characterised by the presence of horizontal bands alternating with undecorated stripes, a composition defined as "zoned Cardial" (Manen, 2002). In some cases, these compositions are bordered by bands filled with short strokes made with a shell edge (vessels 4 and 5). Vessel 1 displays a possible figurative motif framed between vertical and horizontal bands of cardial impressions (Figure 3). The motif depicts an anthropomorphic T-shaped figure, showing part of the body and a complete left arm in one fragment and part of the right arm in another. Vessels with this type of figurative scene have been argued to be found in sanctuaries or spaces of social aggregation (Martí & Hernández, 1988).
However, their presence in other contexts used as seasonal shelters or as pens, such as the Abric de la Falguera, leads to a reconsideration of the symbolic value of this type of vessel. In fact, Bradley (2005) already stressed the impossibility of separating the symbolic sphere from the domestic sphere in many prehistoric societies. Vessel 9 has a ribbon decorated with instrument impressions, and vessel 2 has incised and impressed decoration formed by two bands filled with oblique incisions alternating with rounded-point instrument impressions. Finally, applied decorations (4 fragments; vessel 3) are defined by smooth ribbons.

Regarding technology, all vessels' surfaces are smooth. Irregular firings are the most prevalent (57%), followed by reduction firings (43%). Most fragments show erosion and concretions on their surfaces and, in some cases, they have lost part of the surface, especially the inner one. Medium and large-sized particles of quartzite and limestone are the main inclusions recorded. In terms of thickness, thin-walled fragments are the least frequent (21%), while fragments with medium walls account for 36% and thick-walled fragments (between 9 and 16 mm) account for 43%. The higher percentage of thick-walled vessels confirms the presence of large containers.

The lithic set is made up of knapped (n = 52) and macro-lithic pieces (n = 2). The knapped lithic industry (Figure 4) could be defined as a predominantly flake production (25%), although several laminar products were also recovered (17%). Flakes present different typologies, being larger than the blades, with some of them showing cortex. Laminar production is made up of both blades (n = 6) and bladelets (n = 3). All blades have proximal fractures, except one with a mesial fracture. They have a triangular section, although two exhibit a trapezoidal section. Of the three bladelets, only one is complete, while the other two show distal fractures. Like the larger pieces, they have triangular sections, except for one. The presence of retouched tools is limited to several flakes with a denticulate (Figure 4(4)) and a geometric microlith, a circle segment with abrupt retouching (Figure 4(1)). The set also includes a large quantity of debris, 54%, and three cores of flakes and blades at different stages of exploitation (Figure 4(6)). These characteristics are comparable to those documented in Neolithic contexts in the area (García Puchol, 2005; Juan-Cabanilles, 2009).

Both the flakes and the knapping remains show different degrees of heat alteration on the surface. Some extractions preserve cortex from the primary nodules, which is evidence of first-generation and later debitage. In the same way, we can see numerous pieces derived from the preparation of the surfaces of the cores that conform the necessary supports for the creation of different tools. As an example of this process, we can see a piece that has cortex in its distal area and three laminar negatives on its obverse, with a possible failed blade extraction. The flint recovered is always in shades of brown, from greyish to yellowish, associated with the Serreta type, and greyish white, which can be related to the Catamarruc type (Molina, 2016), both lithologies being present in the immediate surroundings. The presence of cores and the large amount of debris, as well as the different alterations, in addition to the scarcity of tools, could be indicative of the functionality of the site, with short stays during which knapping activities were carried out.
The recovered macro-lithic tools consist of a possible roughed-out tool on limestone, without clear signs of use, and a hammer fragment, also on limestone, with clear signs of use. Although scarce, these pieces reveal the use of elements of greater size and weight to carry out other activities, such as the processing of plant-based foods, related to the occasional use of the shelter. Three ornaments were also recovered: an oval pendant on a polished shell with a slight central depression, a Columbella rustica, and a freshwater mollusc (Melanopsis sp.), the last two perforated. Oval shell pendants with a bulging base have been found in Early Neolithic caves such as Or, Cendres, Sarsa, or En Pardo. Perforated Columbella rustica are also common in ancient Neolithic sites, being widely present for this chronology in the entire western Mediterranean (Pascual, 1998). Finally, the faunal remains, although scarce and very altered by different post-burial processes, show the presence of ungulates and bovines, as well as rabbits.

The material evidence, especially the pottery, allows us to propose at least two occupations associated with different Neolithic moments, despite the unclear distinction derived from the stratigraphy defined by Asquerino. The first period is characterised by cardial decorations dated to the second half of the 6th millennium BC, ca. 5500-5200 cal BC (Cardial Neolithic); the later occupation, dated between the end of the 6th and the beginning of the 5th millennium cal BC, ca. 5200-4800 cal BC (Epicardial Neolithic), is characterised by incised-impressed decorations. However, the lack of a chronological definition of the remaining evidence and the lack of a clear stratigraphy do not allow us to confidently define the duration or characteristics of these occupations. The same occupation pattern can be seen in other sites in the area, transitioning from cardial pottery to fragments with sgraffito decoration, characteristic of the second half of the 5th millennium BC. These rock shelters, in addition to a chronology like that of Penya Roja, have comparable features in terms of their location in the natural corridors which connect the Serpis valley with the Mediterranean coast, and in terms of their archaeological register (García Atiénzar, 2004, pp. 144-148).

Geostatistical Analysis

Spatial and geostatistical analyses¹ in Archaeology aim to provide a better understanding of the use that past societies made of their immediate surroundings and of the underlying decisions related to the selection of certain sites. In this paper, these analyses complement the available archaeological information, which is limited by the context of the site, enabling us to delve into the patterns of occupation and exploitation of the landscape more confidently.

Prominence analysis (Llobera, 2001) compares the absolute height of a point with the average and median height of the immediate surroundings, within 1,000 m; a sketch of the computation is given below. This revealed that all shelters are always below the average of their surroundings, being in the lower part of relatively narrow ravines (Table 1 and Figure 5). This position would ensure direct access to water and pasture, making these enclaves suitable for shepherding, as supported by the faunistic record of Abric de la Falguera, dominated by milk teeth and newborn remains of sheep (Pérez Ripoll, 2006, p. 130).

¹ Spatial analyses have been carried out using different algorithms integrated in the QGIS 3.10 software.
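To make the prominence computation concrete, a minimal sketch follows; it is ours, not the authors' QGIS workflow, and the toy DEM, the 25 m cell size, and the site coordinates are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch (not the authors' QGIS workflow): Llobera-style
# prominence for one location, comparing the absolute height of a point with
# the mean and median height of its surroundings within a 1,000 m radius.

def prominence(dem, row, col, cellsize=25.0, radius=1000.0):
    r = int(radius / cellsize)                  # radius in cells
    rows, cols = np.ogrid[:dem.shape[0], :dem.shape[1]]
    mask = (rows - row) ** 2 + (cols - col) ** 2 <= r ** 2
    neigh = dem[mask]
    return dem[row, col] - neigh.mean(), dem[row, col] - np.median(neigh)

# Toy DEM: a ravine cut through a tilted surface.
yy, xx = np.mgrid[0:200, 0:200]
dem = 600 + 0.5 * yy - 80 * np.exp(-((xx - 100) ** 2) / 400)

d_mean, d_median = prominence(dem, row=100, col=100)
print(d_mean, d_median)   # negative: the point lies below its surroundings
```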
Viewshed analyses of the environment at medium distance (5,000 m) (Kvamme, 1999) show variable orientations, always towards the nearest water courses. The visual field of the shelters is limited to the most immediate surroundings, with the main exception of Coves d'Esteve, which has an extended visual range. The visual fields obtained can be sectorial, with a range of less than 45°, or partial, when visibility is restricted to certain areas (Table 1 and Figure 6). These characteristics show an interest in controlling the most immediate resources, fundamentally pastures and water, and not so much in movement throughout the territory. The land-use capacity of the soils within the catchment areas (Higgs et al., 1970; Hunt, 1992) corresponds to forest soils located on abrupt slopes or at the bottom of small ravines (Figure 7). Their edaphological characteristics, as well as their remoteness from well-irrigated and fertile soils and permanent water courses, make these environments unsuitable for the development of agriculture. Discussion The faunistic, anthracological, and micromorphological evidence recovered in some of these shelters (Molina, Carrión, & Pérez-Ripoll, 2006; Verdasco, 2001) suggests occupation between mid-spring and early summer. In this sense, Falguera's highly fragmented faunistic record has been related to the intensive use of the shelter as a sheepfold (Pérez Ripoll, 2006, p. 134). These places could also be associated with the hunting of wild animals. However, the limited visual control of the surroundings from the shelters makes it unlikely that they served as specialised posts for hunting and wild herd control. With these data, we believe that this type of site could have been part of a complex strategy of occupation and exploitation of the biotic possibilities of the territory. These occupations would have been sporadic and seasonal, although not necessarily repeated in a cyclical manner. Although the exact duration of these occupations is difficult to determine, they likely lasted several days, judging by the presence of vessels with a certain storage capacity, as well as the evidence of lithic knapping activity. This would be in accordance with the practice of transterminance, in which a group of shepherds belonging to a larger community settled in villages at the bottom of the valleys moved the flocks of sheep and goats to the mountainous areas in search of pasture (García Atiénzar, 2006). This network of sites reveals that farming communities maximised the economic possibilities of their territory since the Early Neolithic. These shepherds developed a mobility strategy throughout the territory, combining these activities with agriculture, which would have taken place in the surroundings of villages at the valley bottoms and near fertile soils, according to palynological analyses (López-Sáez, Pérez, & Alba, 2011). This would be a flexible and extensive model of territory exploitation which would involve the displacement of the domestic unit (or part of it), generating a secondary occupation depending on the agricultural cycle. This is an unstructured and unsystematic model, with prolonged absences that would render conditioning and cleaning tasks unnecessary. This is observed during the 5th millennium BC, when the occupation of caves and shelters as sheepfolds for livestock intensified, displaying the characteristic corral fires or fumiers associated with the hygienisation of livestock remains (Badal & Atienza, 2008; Badal, 1999).
This practice is not exclusive to the eastern part of the Iberian Peninsula, but has also been reported at other Neolithic Mediterranean sites such as Kitsos in Greece, Pupućina Cave in Croatia, Grotta de l'Uzzo or Arene Candide in Italy, and Fontbrégua, Font Juvénal, Baume Ronze, or St. Marcel d'Ardèche in France (Brochier, 1991; Brochier, Villa, & Giacomarra, 1992; Miracle, 2006). While this model of economic management of the territory was being built, these same communities developed strategies of symbolic occupation of the landscape, evidenced by the rock art present in shelters located in these same valleys and ravines. The management of a large territory by scattered groups with low demographics could explain the presence of sanctuaries of macro-schematic and schematic rock art or the symbolically charged decorations of the ceramic vessels (Martí & Hernández, 1988), which could have functioned as elements of internal cohesion. Acknowledgements: We are grateful for access to the materials and the technical and administrative documentation related to the site and the interventions carried out. We would like to thank the reviewers of the paper for their advice and annotations, which have substantially improved the original. Funding information: Part of the results of this research are in the framework of the doctoral thesis of SMA, who has a training contract with the Vice-Rectorate for Research of the University of Alicante (UAFPU2018-045). Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
2021-08-01T13:28:48.358Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "6191f9931305c445f1244b30b4b414322c587574", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/opar-2020-0165/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "35d628e7315b90959107feb373e117724a2fa6ad", "s2fieldsofstudy": [ "History", "Environmental Science" ], "extfieldsofstudy": [] }
137214671
pes2o/s2orc
v3-fos-license
Analysis on dynamic tensile extrusion behavior of UFG OFHC Cu Dynamic tensile extrusion (DTE) tests with strain rates of the order of ~10^5 s^-1 were conducted on coarse grained (CG) Cu and ultrafine grained (UFG) Cu. ECAP of 16 passes with route Bc was employed to fabricate the UFG Cu. DTE tests were carried out by launching the sphere samples at the conical extrusion die at a speed of ~475 m/sec in a vacuumed gas gun system. UFG Cu was fragmented into 3 pieces and showed a DTE elongation of ~340%. CG Cu exhibited a larger DTE elongation of ~490% with fragmentation into 4 pieces. During the DTE tests, dynamic recrystallization occurred in UFG Cu, but not in CG Cu. In order to examine the DTE behavior of CG Cu and UFG Cu under very high strain rates, a numerical analysis was undertaken by using a commercial finite element code (LS-DYNA 2D axis-symmetric model) with the Johnson-Cook model. The numerical analysis correctly predicted the fragmentation of both samples and the DTE elongation of UFG Cu, but the experimental DTE elongation of CG Cu was larger than that predicted by the numerical analysis. The difference between the DTE responses of CG Cu and UFG Cu is discussed in terms of the microstructural evolution of UFG Cu during the DTE tests. Introduction The dynamic tensile extrusion (DTE) technique is a newly developed mechanical test [1]. In the ordinary DTE test, a spherical sample launched at high velocity passes through an open conical die. Because the die exit diameter is smaller than the sample diameter, the sample experiences severe tensile deformation. Therefore, the DTE test can characterize the mechanical response of materials under both high strain rate and high strain conditions. The DTE technique has been applied to coarse grained (CG) pure metals such as Cu [1], Ta [2], Zr [3], etc. In the case of cubic CG metals (i.e. Cu and Ta), recrystallization hardly occurred during DTE although significant adiabatic heating is expected in association with high strain rate deformation (usually higher than the order of 10^5 s^-1). Instead, recovered microstructures were dominant, such as elongated grains, micro bands, elongated subgrains, and equiaxed subgrains, as a function of deformation. Unlike the cubic metals, HCP CG Zr exhibited recrystallization, which was rationalized by adiabatic heating above the α/β phase transformation temperature. Extensive and intensive research during the past two decades clearly reveals that ultrafine grained (UFG) materials exhibit very different mechanical and thermal responses from CG materials. There are several studies on the mechanical behavior of UFG materials at high strain rates [4]. However, the strain rate employed in those studies (typically of the order of 10^3 s^-1) was considerably lower than that attainable in the DTE test. Therefore, the mechanical response and corresponding microstructural evolution of UFG materials at strain rates above the order of 10^5 s^-1 are hardly reported in the literature at present. Meanwhile, constitutive models involving external and internal variables are commonly employed to describe the mechanical behavior of materials. In particular, extrapolation of the constitutive models is very helpful to predict the mechanical behavior of materials subjected to extreme conditions where experimental work is difficult. Accordingly, several constitutive models have been developed and most of them have shown their validity, whether they are empirical or physically based.
In terms of the strain rate and the grain size, comparison of the experimental DTE behavior of UFG materials with such models is expected to extend the validity of the models to more extreme conditions. In line with the above, DTE tests were conducted on CG and UFG Cu and their DTE behavior was numerically analyzed. The present study is anticipated to provide further understanding of the dynamic mechanical behavior of UFG materials and useful information for constitutive modeling under extreme conditions. Experimental Commercial OFHC Cu bars (20 mm diameter) were annealed at 900 °C for 1 hr. Some annealed bars were subjected to 16 passes of equal channel angular pressing (ECAP) with route Bc in order to fabricate equiaxed UFG Cu. Sphere samples of 7.62 mm diameter were machined from the central part of the annealed bars and the ECAP-ed bars for the DTE tests. DTE tests were carried out using an all-vacuumed gas gun system which consists of the gas gun, the sample flying barrel, the DTE die chamber, and the sample recovery station; the details of the DTE equipment are described elsewhere [5]. The sample velocity in this experiment was ~475 m/sec upon reaching the DTE die. After the DTE tests, the sample fragments were soft-recovered. The number and the order of fragments exiting the die were confirmed by high speed photography. In addition, complete fragment recovery was ensured by comparing the weight of all fragments with that of the initial sample. Routine microstructural observations were made on CG and UFG Cu before and after the DTE tests with optical microscopy, scanning electron microscopy, transmission electron microscopy, and electron backscattered diffraction (EBSD). The DTE behavior of CG and UFG Cu was numerically analyzed by using a commercial finite element code (LS-DYNA 2-dimensional axis-symmetric model [6]). The Johnson-Cook model was employed in the numerical analysis. The five unknown parameters in the Johnson-Cook model were obtained by conducting tensile tests at 10^-3 s^-1 and 1 s^-1 and compression tests at ~4,000 s^-1 on CG and UFG Cu; tensile tests and compression tests were carried out on a hydraulic universal testing machine and a split Hopkinson pressure bar tester, respectively. In the numerical simulation process, 2D R-adaptive re-meshing was done in order to prevent severe distortion of the mesh. That is, a completely new mesh was created every 1 µsec in order for the elements to keep a regular shape and a characteristic dimension. The new mesh is initialized from the old mesh by a least squares approximation. The simulation results were compared with the experimental ones in terms of total DTE elongation (the sum of the axial elongations of the individual fragments) and the number and dimensions of the fragments. Summary of microstructure evolution Microstructural evolution during DTE was described in detail in Ref. [5], so a summary is provided in this section. Before DTE, in the case of CG Cu, the average grain size was ~120 µm, and the texture was weak. Meanwhile, UFG Cu consisted of equiaxed grains with an average size of ~0.35 µm and a high-angle boundary fraction of ~0.66. A strong {111} <110> texture was developed by ECAP with route Bc. Soft-recovered fragments of CG and UFG Cu after DTE are shown in Fig. 1a; the conical fragments (fragment 1 for each sample) are the remnants that remained in the DTE die. CG Cu was fragmented into 4 pieces while UFG Cu was fragmented into 3 pieces.
The average DTE elongations of the three runs for each sample were ~490% and ~340% for CG Cu and UFG Cu, respectively; DTE elongation = (Σdi − do)/do, where di is the axial length of the i-th fragment and do is the initial sample diameter. The microstructure of the CG Cu fragments exhibited severely elongated grains without evidence of dynamic recrystallization (DRX). Instead, mechanical twins were frequently observed (Fig. 1b); mechanical twinning dissipates energy and therefore delays the formation of DRX enclaves [7]. A double fiber texture of strong <111> and moderate <100> parallel to the deformation axis was developed (Fig. 1c), as typically observed in uniaxially processed FCC metals and alloys, such as in wire drawing [8]. Unlike CG Cu, fully recrystallized grains of ~3.5 µm were observed in fragment 2 of UFG Cu. Correspondingly, the frequency of 60° misorientation remarkably increased due to the reformation of annealing twins during DRX (Fig. 1d). The EBSD inverse pole figure (IPF) of fragment 2 of UFG Cu (Fig. 1e) shows the moderate <100> and weak <111> fiber textures which coincide with the recrystallization texture of uniaxially deformed Cu. Fig. 2 shows true stress-strain curves of CG Cu (Fig. 2a) and UFG Cu (Fig. 2b) at different strain rates. As usual, regardless of the strain rate, CG Cu exhibited extensive strain hardening after low-stress yielding, while near-perfect plasticity without strain hardening after high-stress yielding occurred in UFG Cu. The deformation behavior of CG Cu and UFG Cu was modeled using the Johnson-Cook (J-C) model, considering thermal softening associated with adiabatic heating: σ = (σo + Bε^n)(1 + C ln(ε̇/ε̇o))[1 − ((T − Tr)/(Tm − Tr))^m], with the adiabatic temperature rise given by ΔT = (β/(ρCp)) ∫ σ dε, where σ is the stress, σo is the yield stress, ε̇ is the strain rate, ε̇o is the reference strain rate (usually 1 s^-1), T is the material temperature, Tm is the melting temperature, Tr is the reference temperature at which σo is measured, β is the heat conversion coefficient (0.9 for metals), ρ is the density, and Cp is the specific heat at constant pressure. σo, B, C, n, and m are the constants to be determined from the experimental data; their values for the present case are listed in Table 1. As seen in Fig. 2, the curves predicted by the J-C model with the parameters in Table 1 (black lines) show good agreement with the experimental ones (red lines). Numerical simulation results As aforementioned, the present DTE behavior of CG Cu and UFG Cu was numerically analyzed by the LS-DYNA FEM code with the 2D R-adaptive re-meshing and the above J-C model. A simulation example of the strain after complete fragmentation at ~60 µsec is presented in Fig. 3. The number of fragments is correctly predicted by the simulation, i.e. 4 fragments for CG Cu and 3 fragments for UFG Cu. The simulated total lengths of CG Cu and UFG Cu were 38.1 mm (DTE elongation ~400%) and 33.8 mm (DTE elongation ~320%), respectively. The simulated DTE elongation of UFG Cu is in reasonable agreement with the experimental one (~340%). In contrast, for CG Cu, the experimental DTE elongation (~490%) was larger than the simulated one. As shown in Fig. 1a, the tip of fragment 1 of CG Cu shows macro-shear deformation. A relatively large localized deformation due to macro-shear at the tip may be a reason for the error in the simulated DTE elongation of CG Cu. The distributions of strain, strain rate, stress, velocity and temperature in the CG Cu and UFG Cu samples at 40 µsec are presented in Fig. 4; it is worth mentioning that the values of these constitutive variables increased until complete fragmentation (i.e. to ~60 µsec).
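As an illustration of the two quantities defined above, the following sketch computes the DTE elongation from fragment lengths and evaluates the J-C flow stress with adiabatic heating. The fragment lengths are invented, and the J-C constants are generic placeholder values of the kind often quoted for annealed Cu, not the fitted constants of Table 1 (which is not reproduced here).

import numpy as np

# DTE elongation from fragment axial lengths (illustrative numbers, mm):
d_i, d0 = [12.0, 11.0, 10.5, 11.5], 7.62
print(f"DTE elongation ~{(sum(d_i) - d0) / d0:.0%}")   # close to the ~490% of CG Cu

def jc_stress(eps, eps_rate, T, *, s0, B, n, C, m,
              eps_rate0=1.0, Tr=298.0, Tm=1356.0):
    # sigma = (s0 + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m), T* = homologous temp.
    T_hom = np.clip((T - Tr) / (Tm - Tr), 0.0, 1.0)
    return (s0 + B * eps**n) * (1.0 + C * np.log(eps_rate / eps_rate0)) \
           * (1.0 - T_hom**m)

def adiabatic_curve(eps_path, eps_rate, *, beta=0.9, rho=8960.0, cp=385.0,
                    T0=298.0, **jc):
    # Integrate dT = beta * sigma * d_eps / (rho * cp) along the strain path.
    T, sig = T0, []
    for e, de in zip(eps_path, np.diff(eps_path, prepend=0.0)):
        s = jc_stress(e, eps_rate, T, **jc)
        T += beta * (s * 1e6) * de / (rho * cp)   # stress in MPa -> Pa
        sig.append(s)
    return np.array(sig), T

# Placeholder constants (NOT the fitted values of Table 1):
eps = np.linspace(0.0, 5.5, 500)                  # strain at necking ~5.5
_, T_end = adiabatic_curve(eps, 1e5, s0=90.0, B=292.0, n=0.31, C=0.025, m=1.09)
print(f"estimated temperature after deformation ~{T_end:.0f} K")
# Of the same order as the locally >800 K estimated in the simulations.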
The maximum strain developed at the necked region in both samples, with similar values. The strain at the necked region upon fragmentation reached ~5.5. The simulation revealed more localized necking in UFG Cu, possibly causing its smaller DTE elongation compared with CG Cu. In both samples, the strain rate was also maximum at the necked region, of the order of 10^5 s^-1, which is at least one order of magnitude higher than that achievable in the ordinary Hopkinson test. The maximum strain rate of UFG Cu was slightly higher than that of CG Cu, corresponding to the more diffuse strain distribution in the latter. The stress imposed by impacting the die was higher in UFG Cu due to its higher yield and flow stresses. The sample velocity was maximum at the exiting tips owing to the inertia effect. The tip (i.e. maximum) velocity of CG Cu was faster than that of UFG Cu. As expected, a considerable temperature rise occurred by adiabatic heating. The temperature at the stretched portion in the straight channel was close to or even higher than 700 K, which is about 0.5 Tm. It was locally over 800 K (~0.6 Tm of Cu) upon fragmentation. Recrystallization of cold-worked metals and alloys usually occurs above ~0.5 Tm. Therefore, the present adiabatic heating is sufficient for DRX in all DTE samples. However, as shown in Fig. 1a, DRX did not occur in CG Cu. Instead, mechanical twins were often observed, as shown in Fig. 1b. It was reported that mechanical twinning does not store as much strain energy as dislocations do [9]. Subsequently, it was demonstrated [8] that, under dynamic loading, mechanical twinning behaves as an active energy dissipation source and therefore suppresses DRX. UFG Cu was in a higher energy state and contained a larger grain boundary area compared with CG Cu due to ECAP. In addition, the critical stress for mechanical twinning increases with decreasing grain size [10]. These factors are favorable for relatively easy DRX in UFG Cu. Under quasi-static loading, which is not accompanied by DRX, UFG materials usually exhibit very limited ductility, failing in a shear mode. Under dynamic loading, as in the present case, DRX to a grain size of the order of several µm occurred in UFG Cu, and it resulted in ductile failure with necking, in spite of its DTE elongation being inferior to that of the CG sample.
2019-04-28T13:13:23.553Z
2014-08-08T00:00:00.000
{ "year": 2014, "sha1": "7bc939cac067edcf3900a7ad6a7041a98936fae1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/63/1/012144", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8035fc2e78a8961e6ee3e846742a3815128ae783", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
1576331
pes2o/s2orc
v3-fos-license
Effect of Echium oil compared with marine oils on lipid profile and inhibition of hepatic steatosis in LDLr knockout mice Background In an effort to identify new alternatives for long-chain n-3 polyunsaturated fatty acid (LC n-3 PUFA) supplementation, the effect of three sources of omega-3 fatty acids (algae, fish and Echium oils) on lipid profile and inflammation biomarkers was evaluated in LDL receptor knockout mice. Methods The animals received a high fat diet and were supplemented by gavage with an emulsion containing water (CON), docosahexaenoic acid (DHA, 42.89%) from algae oil (ALG), eicosapentaenoic acid (EPA, 19.97%) plus DHA (11.51%) from fish oil (FIS), or alpha-linolenic acid (ALA, 26.75%) plus stearidonic acid (SDA, 11.13%) from Echium oil (ECH) for 4 weeks. Results Animals supplemented with Echium oil presented lower total cholesterol and triacylglycerol concentrations than the control group (CON) and lower VLDL than all of the other groups, constituting the best lipoprotein profile observed in our study. Moreover, Echium oil attenuated the hepatic steatosis caused by the high fat diet. However, in contrast to the marine oils, Echium oil did not affect the levels of transcription factors involved in lipid metabolism, such as Peroxisome Proliferator Activated Receptor α (PPARα) and Liver X Receptor α (LXRα), suggesting that it exerts its beneficial effects by a mechanism other than those observed for EPA and DHA. Echium oil also reduced the N-6/N-3 FA ratio in hepatic tissue, which may have been responsible for the attenuation of hepatic steatosis observed in the ECH group. None of the supplemented oils reduced the inflammation biomarkers. Conclusion Our results suggest that Echium oil represents an alternative natural ingredient to be applied in functional foods to reduce cardiovascular disease risk factors. Background The increased intake of omega-6 fatty acids during the 20th century, as a result of an elevation in vegetable oil consumption (of more than 1,000-fold), contributed to a decline in the tissue concentration of long-chain n-3 polyunsaturated fatty acids (LC n-3 PUFA) [1,2], which might be associated with the increased incidence of inflammatory disorders, such as atherosclerosis [3]. The development of atherosclerotic plaques is associated with several clinical cardiovascular events. Considering the health effects of LC n-3 PUFA toward the reduction of cardiovascular disease (CVD) risk [4,5], many industries have added eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) from marine oils to food formulations or supplements, aiming to explore this health claim. In 2004, the Food and Drug Administration (FDA) qualified the health claim of products containing EPA and DHA [6]. A similar recommendation was also provided by the American Heart Association (AHA), which suggested consumption of 1 g/day of EPA + DHA for patients with CVD and 2-4 g/day for patients with hypertriglyceridaemia [7]. The cardioprotective effects of LC n-3 PUFA appear to be due to a synergism between multiple mechanisms, including triacylglycerol (TG) lowering, improved membrane fluidity, and anti-inflammatory, antiarrhythmic and antithrombotic effects [5]. The scientific evidence concerning the beneficial effects of LC n-3 PUFA on lipid profile and inflammation was obtained from several studies using animal and human models. However, these effects and the mechanisms by which they occur are restricted to the action of EPA and DHA [8].
Other non-marine sources of omega-3 fatty acids (N-3 FA), such as alpha-linolenic acid (ALA) or stearidonic acid (SDA), can be converted in vivo to EPA and DHA by the desaturase and elongase enzymes in a tissue-dependent manner, the liver being the major site of this conversion [9]. It has been reported that the conversion rate of ALA is low (5-10% for EPA and < 1% for DHA), which diminishes the efficacy of these alternative sources in the reduction of cardiovascular risk [10-12]. However, due to dietary preferences, safety, sustainability, cost and oxidative stability aspects, other non-marine oil alternatives must be evaluated [3,9-11,13,14]. It has been suggested that the low rate at which ALA is converted to EPA is a result of the limited activity of Δ6-desaturase when linoleic acid (LNA) is also present [15]. However, SDA, a precursor of EPA that is found in plants such as Echium (Echium plantagineum), black currant seed and other genetically modified seeds, does not need Δ6-desaturase activity to be converted into EPA [3,10]. In an effort to identify new alternatives for LC n-3 PUFA supplementation, the objective of this study was to compare the effects of three sources of N-3 FA (algae, fish and Echium oil) on lipid composition and some inflammatory biomarkers using LDL receptor deficient mice (LDLr knockout mice) as a model. Oils and reagents The N-3 FA used in this study were commercial products: the algae oil containing 40% DHA (DHASCO) was obtained from Martek Biosciences® (Winchester, KY, USA), the fish oil (EPA1T1600 MEG-3™) containing EPA (20%) + DHA (12%) was obtained from Ocean Nutrition® (Dartmouth, NS, Canada) and the Echium oil containing 11.5% SDA (AW39144ECH) was obtained from Oil Seed Extraction® (Ashburton, New Zealand). All reagents were purchased from Sigma Chemical Co. (St. Louis, MO, USA), Merck (Darmstadt, Germany), Calbiochem Technology Inc. (Boston, MA, USA) and GE Healthcare (Little Chalfont, Bucks, UK). The aqueous solutions were prepared with ultra-pure Milli-Q water (Millipore Ind. Com. Ltd., SP, Brazil), and the organic solvents were of HPLC grade. The fatty acid profile of the oils used in this study was analyzed by gas chromatography and is shown in Table 1. Animals and diets Forty male homozygous LDL receptor-deficient mice (LDLr knockout mice, C57BL/6) weighing 25-29 g (4.0-4.5 months of age) were purchased from the Faculty of Pharmaceutical Sciences (São Paulo, Brazil). The mice were housed in plastic cages (5 animals/cage) at constant temperature (22 ± 2°C) and relative humidity (55 ± 10%), with a 12-h light-dark cycle. Food and water were available ad libitum, and the animals were divided into four groups. All groups were fed a high fat diet for 4 weeks (Table 2) and were supplemented by gavage with an oil-in-water emulsion (190-240 μL/d per mouse) containing fish oil (FIS), algae oil (ALG), Echium oil (ECH) or water (CON). The amounts of EPA, DHA, SDA and ALA ingested per day from the added oils are presented in Table 3. We opted to compare the effect of the same amount of EPA+DHA in all groups. To achieve this objective, the 4:1 rate of conversion from SDA to EPA proposed by Whelan [16] was adopted in our study. In this way, the EPA+DHA dosage applied was 0.7, 0.8 and 0.7 mg/day using fish, algae and Echium oil, respectively (Table 3). The emulsions were prepared weekly by mixing the respective oil with water using a high-pressure homogenizer (Homolab mod A-10, Alitec, São Paulo, Brazil).
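A small sketch of the dose-matching arithmetic described above: SDA is credited at one quarter of its mass, following the 4:1 SDA-to-EPA conversion of Whelan [16], and ALA is ignored here given its low conversion rate. The fatty acid percentages are those quoted for the three oils, while the daily oil masses are illustrative values chosen to land near the reported 0.7-0.8 mg/day equivalents.

def epa_dha_equivalent(oil_mg_per_day, pct_epa=0.0, pct_dha=0.0, pct_sda=0.0):
    # EPA+DHA-equivalent dose from an oil's fatty acid profile (% by weight);
    # SDA counts at 1/4 of its mass (4:1 conversion to EPA), ALA is ignored.
    return oil_mg_per_day * (pct_epa + pct_dha + pct_sda / 4.0) / 100.0

# Illustrative daily oil masses (mg), back-solved for this example:
print(epa_dha_equivalent(2.2, pct_epa=19.97, pct_dha=11.51))   # fish oil  ~0.7
print(epa_dha_equivalent(2.0, pct_dha=42.89))                  # algae oil ~0.8
print(epa_dha_equivalent(25.0, pct_sda=11.13))                 # Echium    ~0.7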
The emulsions were prepared in less than 2 minutes, and the temperature was kept below 40°C during this short time. Afterwards, the emulsions were transferred to 2 mL Eppendorf tubes, immediately immersed in nitrogen and kept at −80°C until the time of gavage. This whole procedure was repeated twice a week. Emulsion characteristics are presented in Table 3. Diet consumption was measured daily and the animals were weighed individually twice a week. After 4 weeks, the mice were fasted for 12 h and anaesthetised with a mixture containing xylazine 2% (Sespo Ind. e Com. Ltda., Paulínia, Brazil), ketamine (Syntec do Brasil Ltda., Cotia, Brazil) and acepromazine (Vetnil Ind. e Com. de Prod. Veterinários Ltda., Louveira, Brazil). Blood samples collected from the brachial plexus were immediately centrifuged (1,600 × g for 15 min at 4°C), frozen under liquid nitrogen and stored (−80°C) for further analysis. The liver was excised, blotted dry and weighed. Small pieces of the larger lobe were frozen for Western blotting and further analysis, and a piece of the smaller lobe was immersed in 10% buffered formalin solution for histopathological examination. Subsequently, the animals were perfused with a cold NaCl solution (0.9%, 240.0 mL) via the left ventricle to remove excess blood. The animal protocol was approved by the Ethics Committee for Animal Studies of the Faculty of Pharmaceutical Sciences (São Paulo, Brazil). Measurements Fatty acids were isolated from the liver and diets using the extraction methodology proposed by the Association of Official Analytical Chemists (method 996.06) [17]. The fatty acid methyl esters (FAME) were suspended in hexane and analyzed with a gas chromatograph (GC17A Shimadzu Class CG, Kyoto, Japan) equipped with a 30 m × 0.25 mm (i.d.), 0.25 μm film thickness fused silica capillary column (Supelcowax, Bellefont, PA, USA) and a flame ionization detector. Helium was used as the carrier gas, and the fatty acids were separated using a 10°C/min gradient from 80 to 150°C and then a 6°C/min gradient from 150°C to 230°C. Standard mixtures with 37 FAME and PUFA No. 3 methyl esters from Menhaden oil (Sigma Chemical, St. Louis, MO, USA) were used to identify the peaks. The results were expressed as percentages of the total fatty acids present. The serum lipid concentrations, including total cholesterol, high density lipoprotein (HDL) and TG, were quantified using an enzymatic colorimetric test from Labtest (Lagoa Santa, MG, Brazil). The low density lipoprotein (LDL) and VLDL levels were estimated using the Friedewald formula [18]. The inflammation biomarkers (C-reactive protein (CRP), interleukin-6 (IL-6), vascular cell-adhesion molecule-1 (VCAM), intercellular adhesion molecule (ICAM) and adiponectin) were analysed in serum samples using Multiplex commercial kits (Millipore, St. Charles, MO, USA). Liver histology Representative liver fragments were fixed in a 10% buffered formalin solution for approximately 48 hours. Then, the fragments were embedded in paraffin. The material was sectioned at 5 μm and stained with hematoxylin and eosin for histopathological evaluation [19].
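Since the LDL and VLDL levels were estimated with the Friedewald formula [18] mentioned above, a minimal sketch follows (concentrations in mg/dL; the input numbers are invented).

def friedewald(total_chol, hdl, tg):
    # Friedewald estimate: VLDL = TG/5, LDL = TC - HDL - VLDL (mg/dL).
    # Conventionally not valid for TG > 400 mg/dL.
    vldl = tg / 5.0
    ldl = total_chol - hdl - vldl
    return ldl, vldl

ldl, vldl = friedewald(total_chol=180.0, hdl=45.0, tg=150.0)
print(ldl, vldl)   # 105.0, 30.0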
Western blotting analysis of hepatic peroxisome proliferator activated receptor α (PPARα) and liver X receptor α (LXRα) Total nuclear protein was extracted from the frozen liver tissue samples using the NEPER commercial kit (GE Healthcare, Little Chalfont, Bucks, UK). Statistical analysis The effect of each N-3 FA source on the biomarkers was compared by one-way ANOVA and the Tukey HSD test. An equivalent non-parametric ANOVA was applied when there was no homogeneity of variances (Hartley test). A probability value of 0.05 was adopted to reject the null hypothesis. All calculations and graphs were performed using the software Statistica v.9 (Statsoft Inc., Tulsa, USA). Results All groups showed the same weight gain and diet consumption (Table 4). Animals supplemented with Echium oil presented lower total cholesterol and triacylglycerol concentrations than the control group, and lower VLDL than all of the other groups, constituting the best lipoprotein profile observed in this study. None of the N-3 FA sources altered any of the inflammation biomarkers (Table 4). The effect of the N-3 FA associated with the high fat diet on the histological evaluation of the liver tissue is presented in Figure 1. Hepatic steatosis can be observed in the CON group, with fatty infiltration around the portal space. Fish and Echium oils attenuated hepatic steatosis, whereas algae oil did not promote any protection against the hepatic steatosis induced by the high fat diet. The N-3 FA must be present in the liver to exert their effects on lipid metabolism. Of the main N-3 FA applied in our study (ALA, SDA, EPA and DHA) (Figure 2), only EPA was observed in a higher concentration in the liver homogenate of animals supplemented with fish and Echium oils (Table 5). Moreover, a lower omega-6/omega-3 ratio (N-6/N-3 FA ratio) (p<0.001) was observed in the liver of animals supplemented with Echium oil (Table 5). In order to better investigate the action of N-3 FA on hepatic steatosis, two transcription factors involved in lipid metabolism were evaluated. Figures 3 and 4 present the influence of the N-3 FA supplementation on the hepatic transcription factors LXRα and PPARα, which are associated with fatty acid synthesis and oxidation, respectively. Animals supplemented with algae oil showed a significant increase in PPARα expression when compared to all other groups (Figure 3). A decrease in LXRα expression was observed in the groups supplemented with fish and algae oils (Figure 4), whereas Echium oil did not alter either of these two transcription factors. Discussion The three sources of N-3 FA were effective in improving the plasma lipid profile of the LDLr knockout mice. Among them, Echium oil provided the best results in terms of VLDL and total cholesterol reduction and contributed to the attenuation of hepatic steatosis. None of these oils was able to reduce the inflammation caused by the high fat diet, according to the biomarkers evaluated in this study. The supplementation with fish and Echium oils increased EPA concentrations in liver homogenate. The capacity of SDA to increase EPA content in different tissues is still controversial. Zhang et al. [20] reported that the supplementation of LDLr knockout mice with Echium oil (10% of total energy intake) resulted in a significant enrichment of EPA in plasma lipids. According to Harris [21], the direct dietary intake of SDA has been proposed as another strategy to increase tissue EPA levels, since SDA does not depend on Δ6-desaturase to be converted into EPA.
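Returning to the statistical analysis described above (one-way ANOVA followed by the Tukey HSD test), a minimal sketch with made-up triacylglycerol values for the four groups could look as follows; the group means, sample size and the use of SciPy/statsmodels are illustrative assumptions, since the original analysis was run in Statistica.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
group_means = {"CON": 120.0, "ALG": 105.0, "FIS": 100.0, "ECH": 90.0}
samples = {g: rng.normal(mu, 10.0, size=10)   # made-up TG values, mg/dL
           for g, mu in group_means.items()}

f, p = stats.f_oneway(*samples.values())      # one-way ANOVA across groups
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate(list(samples.values()))
labels = np.repeat(list(samples.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # Tukey HSD pairwise tests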
LXRs and PPARs are nuclear receptors that play a crucial role in the regulation of fatty acid metabolism [22]. The hypolipidaemic effect of algae and fish oils has been partially attributed to the downregulation of LXRα, with a subsequent inhibition of fatty acid synthesis, associated with the upregulation of PPARα, which promotes β-oxidation. Several studies have demonstrated that EPA and DHA reduce TG and VLDL by acting as PPARα agonists and LXRα antagonists [5,23,24]. According to Chilton et al. [3], SDA and EPA reduce the level of mRNA for Sterol Regulatory Element-Binding Protein 1c (SREBP1c), Fatty Acid Synthase (FAS) and stearoyl-CoA desaturase 1 (SCD) in the liver, suggesting that a possible mechanism to explain the TG reduction would be associated with a decrease in LXRα and, consequently, in the genes that encode proteins involved in hepatic fatty acid synthesis. This mechanism could be clearly observed for both marine oils applied in our study, but not for Echium oil. Animals supplemented with Echium oil showed the most significant VLDL reduction and attenuated steatosis, although no differences were observed in regard to LXRα and PPARα expression. In fact, the mechanisms for the reduction of plasma TG levels by Echium oil are unknown. Although the dose applied in our study was 5-fold lower, our results agree with those reported by Zhang et al. [20], who observed a reduction in TG and VLDL levels after Echium oil supplementation without changes in PPARα and LXRα expression. These results suggest that Echium oil can exert its beneficial effect on lipid metabolism and hepatic steatosis via mechanisms other than those reported for marine oils. In addition, it has been recommended [25] that studies involving SDA adopt a dose equivalent to EPA for supplementation. However, when this procedure was carried out in our study (Table 3), the N-6/N-3 FA ratio of the emulsion containing Echium oil (6.7) became lower than that of the emulsions with algae (16.7) and fish (16.6) oils. Thus, the differences observed in biomarkers between the ECH group and the other two supplemented groups (ALG and FIS) may also have been influenced by this difference in the N-6/N-3 FA ratio. Mice supplemented with Echium oil showed a reduction of the N-6/N-3 ratio in the liver (Table 5). According to Parker et al. [26], an increase in the N-6/N-3 FA ratio in the liver is associated with greater steatosis, since this condition can favor lipogenesis and inflammation processes. These findings have also been confirmed in human and animal studies [27,28]. Thus, the lower N-6/N-3 FA ratio in liver homogenate may have contributed to the attenuation of steatosis observed in the ECH group. None of the three N-3 FA sources was able to reduce serum inflammatory biomarkers. Ishihara et al. [29] observed that, in whole blood of Balb/c mice, the production of tumor necrosis factor-α (TNF-α) was suppressed by ALA, SDA and EPA supplementation. However, the dose applied by those authors was 53-fold higher than the dose used in our study. Our high-fat diet was formulated on the basis of the diet applied by Safwat et al. [30] to promote hepatic steatosis in rats. The authors observed that after 10 weeks the animals developed hepatic steatosis, insulin resistance, hypertriglyceridaemia, and increased VLDL levels, but no evidence of hepatic inflammation or fibrosis, suggesting that the hepatic steatosis was in its early stages.
It is possible that the dose of EPA, DHA and SDA used in our study, although corresponding to an intake of 2 g/day for humans, was not sufficient to reduce the inflammation biomarkers when the process is at its initial stages. In spite of this, the high N-6/N-3 FA ratio present in the diet (16:1), typical of the Western diet [5], may have nullified the potential anti-inflammatory action of the LC n-3 PUFA due to the higher availability of arachidonic acid (ARA) than EPA as a substrate for the oxidation mediated by the cyclooxygenase and lipoxygenase enzymes. Conclusions In our study, supplementation with three different sources of N-3 FA was evaluated using LDLr knockout animals fed a high fat diet. It was observed that the best combination of results, in terms of plasma lipid profile and steatosis, was achieved by the supplementation with Echium oil, and the mechanism involved in this favourable result seems to be different from those involved in EPA and DHA metabolism, possibly due to the lower N-6/N-3 FA ratio present in the liver of the animals supplemented with Echium oil. Theoretically, it is possible to transfer a metabolic pathway for EPA and DHA synthesis from a marine organism to an oilseed crop plant [14]. However, while this option is not available, our study confirms that Echium oil represents an alternative natural ingredient to be applied in functional foods to reduce cardiovascular disease risk.
2017-06-24T09:32:47.488Z
2013-03-19T00:00:00.000
{ "year": 2013, "sha1": "00df0c65aa9dc06686987add900fc984b83768fd", "oa_license": "CCBY", "oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/1476-511X-12-38", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f039fda0863943be75ec5677f5a7960b9b42cc03", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
7267476
pes2o/s2orc
v3-fos-license
Analysis of specific pre-operative model to valve surgery and relationship with the length of stay in intensive care unit The profile of cardiovascular surgery has been changing: the number of patients undergoing coronary artery bypass grafting (CABG) is static or decreasing, while valve surgery shows a progressive increase in the number of procedures.(1) In Brazil, in an analysis of more than 115,000 cardiac surgeries performed between 2000 and 2003, the reported mortality was 8%. Among the risk factors for death in valve surgery are advanced age, female sex, chronic obstructive pulmonary disease (COPD), heart failure functional class, ventricular dysfunction, surgical priority, pulmonary arterial hypertension, renal dysfunction, valve disease associated with ischemic heart disease, reoperation, and infective endocarditis.(2) Cardiac surgery still accounts for considerable healthcare expenses.(3) In the last two decades, numerous preoperative risk models have been proposed for short-term postoperative mortality and morbidity risk assessment, given the continued research on preoperative variables able to influence immediate surgery outcomes. However, these models were mostly developed focused on CABG. In this context, the use of a valve disease-specific preoperative score, in populations with different profiles, is highly relevant.(3,4) Considering the advances both in surgical management and intensive care, high-risk patients who would otherwise have heart surgery contraindicated are currently considered suitable for cardiac surgery, leading to higher rates of long intensive care unit (ICU) stays.(5,6) In European countries as well as in the United States, where usually more ICU beds are available, a lack of ICU beds is common. Similarly, Brazil does not have enough beds.(7) The Ambler Score (AS) is an easy-to-use tool; it works with simple variables and uses regular preoperative tests and easy-to-measure risk factors, rendering it feasible for any institution.(8) This study aimed to evaluate the correlation between the AS model and the postoperative ICU length of stay (LOS). METHODS The medical records of 110 consecutive patients in a university hospital undergoing valve replacement surgery, either alone or associated with other procedure(s), between January 2007 and July 2008, were retrospectively assessed. The clinical and demographic characteristics, as well as preoperative variables, were organized according to Ambler et al.(1) The data were collected using a standardized sheet which included social, demographic, clinical, pre-, intra- and postoperative variables. The patients were evaluated from the pre-surgery hospital admission to ICU discharge. The principal study endpoint was the length of ICU stay, reported in days. The study was conducted after approval by the Hospital Escola Álvaro Alvim's Ethics Committee, approval number 325534, and the privacy and confidentiality of the medical records data were assured, with their use restricted exclusively to fulfilling this study's purposes.
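To illustrate the difference between the additive and logistic forms of a preoperative score such as the AS, the sketch below sums integer points for the additive form and applies a logistic regression equation for the probabilistic form. The risk factors, point values, coefficients and intercept are invented for illustration only and are not Ambler's published weights.

import math

# Invented illustration only -- NOT the published Ambler weights.
ADDITIVE_POINTS = {"age>70": 3, "female": 1, "hypertension": 1,
                   "atrial_fibrillation": 2, "diabetes": 1}
LOGISTIC_BETA = {"age>70": 0.9, "female": 0.3, "hypertension": 0.25,
                 "atrial_fibrillation": 0.6, "diabetes": 0.3}
INTERCEPT = -4.0

def additive_score(factors):
    return sum(ADDITIVE_POINTS[f] for f in factors)

def logistic_risk(factors):
    x = INTERCEPT + sum(LOGISTIC_BETA[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-x))   # predicted probability of the outcome

pt = ["age>70", "atrial_fibrillation", "hypertension"]
print(additive_score(pt))               # 6 points
print(f"{logistic_risk(pt):.1%}")       # ~9.5 %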
Statistical analysis The comparison of variables between the ≤ 3 days and > 3 days LOS groups was conducted using Student's t test. The predictive performance of the AS was analyzed using the Receiver Operating Characteristic (ROC) curve. Cases with ≤ 3 days LOS were assumed to have a normal length of stay and those with > 3 days a prolonged one (1 day equal to 24 hours in the ICU). ROC curves were plotted for both the additive and the logistic AS. The area under the ROC curve (AUC) was correlated to the contingency coefficient C, evaluating the test's predictive power, which was defined as excellent for > 0.8, very good for > 0.75 and good for > 0.7 discriminative power. A value of 0.5 was defined as indefinite discriminative power. The additive and logistic models' AUCs were compared using the Hanley-McNeil test. The SPSS 13.0 software was used for the analysis. RESULTS One hundred and ten patients underwent valve surgery either alone or associated with other procedure(s). The AS variables and the surgeries performed are displayed in Table 1. The mean additive AS was 6 (range 1-17) and the mean logistic AS was 5% (range 0.2% to 30.10%). The ICU LOS ranged from 2 to 20 days, and the mean LOS was 4.2 ± 2.6 days. Table 2 displays the mean length of stay, categorized as either normal or prolonged, as well as the patients' distribution according to their additive and logistic predictive risks. Forty-three (39%) patients stayed in the ICU longer than 3 days and 67 (61%) stayed ≤ 3 days. Table 3 shows the differences between the baseline characteristics of the ≤ 3 days and > 3 days groups, where, comparatively, the > 3 days patients had higher AS-measured risk levels. In Table 3, when the differences in variables between the ≤ 3 days and > 3 days groups are analyzed, significant differences are found for age, with the > 3 days group being older (60.1 ± 10.7 years, p=0.04); the normal LOS group had more women than the prolonged LOS group (46.3% versus 34.9%; p=0.03); patients undergoing aortic valve replacement were predominantly female (58.2% versus 32.5%; p=0.02). Table 4 presents the values found for each group according to the coefficient C and the Hanley-McNeil test. ROC curves were plotted for the additive and logistic AS (Figure 1). Regarding the logistic AS, the AUC for LOS > 3 days was 0.73, and for LOS ≤ 3 days 4. The AUCs were significantly different according to the Hanley-McNeil test. For the prolonged ICU stay group, the logistic AS showed a higher discriminative power than the additive one. DISCUSSION Continuously growing healthcare costs considerably pressure healthcare managers, who are required to control costs while keeping quality levels. In this scenario, useful preoperative risk assessment models are relevant; however, most of them are focused on mortality only. This study evaluated the relationship between the AS and the ICU length of stay after valve surgery, and our results showed a good correlation between the logistic AS and the ICU length of stay.(7,9) The predictive power for LOS was observed to be good when the logistic AS was correlated with > 3 days of ICU stay, while for the additive model the predictive power was indefinite. As far as we could assess, this is the first study designed to evaluate the prediction of ICU length of stay without the EuroSCORE. The 0.73 coefficient C found in our data indicates good logistic AS discriminative power for > 3 days of stay. Yet, for the additive AS analysis, the coefficient C found was not compatible with good discriminative power.
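A minimal sketch of the discrimination analysis above: the ICU stay is dichotomized at 3 days and the AUC of a (here simulated) logistic score is computed with scikit-learn. All numbers are made up, and the analysis in the paper itself was performed in SPSS.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
los_days = rng.integers(2, 21, size=110)     # made-up ICU stays, 2-20 days
prolonged = (los_days > 3).astype(int)       # outcome: LOS > 3 days
# Made-up logistic-score risks, loosely increasing with the true LOS:
risk = np.clip(0.012 * los_days + rng.normal(0.0, 0.02, 110), 0.002, 0.301)

print(f"AUC for predicting LOS > 3 days: {roc_auc_score(prolonged, risk):.2f}")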
In previous studies in which cardiovascular outcomes were analyzed, in addition to longer ICU stays, increased multiple organ failure and therefore higher costs and mortality rates were found.(10,11) The AS was developed to predict early mortality after valve surgery, and so far it has not been considered for ICU length of stay analysis. In our series, the logistic model was the one most used for daily clinical evaluation, although it is not used in other sites due to its complexity. Some risk factors are included in this risk model, such as arterial hypertension, atrial fibrillation, body mass index, smoking status and diabetes, which are not part of the EuroSCORE, demonstrating the need for a valve surgery-specific model.(12) The Parsonnet Score is a previous example of a preoperative risk evaluation tool which, in addition to postoperative mortality, was proven effective for ICU length of stay prediction.(13) According to our sample, when the groups are analyzed including aortic valve disease and age, these are possibly correlated with longer stay, as this illness is directly related to the elderly, who are known to have more comorbidities and possibly longer ICU LOS. In our series the ≤ 3 days LOS group was predominant (p=0.002), likely due to its much younger age range versus the LOS > 3 days group (49.4 ± 14.5 versus 60.1 ± 10.7). Prolonged length of ICU stay, in addition to the extra financial burden added to the healthcare system, is also an issue for ICU bed availability. Therefore, a planned intervention would be convenient. With this focus, a preoperative risk-based model may prove to be an essential cost-benefit analysis tool.(14) Kurki et al.(15) found a close relationship between the preoperative risk score, evaluated by the Cleveland model, and the ICU length of stay. An increase in the preoperative risk score was associated with longer postoperative LOS. Several series have proposed to identify preoperative risk factors associated with prolonged ICU stay, however all of them with limited samples.(16,17) Equally, Janssen et al.(18) recently published a risk model for > 3 days length of stay based on a 104-patient sample. The power to predict prolonged length of ICU stay, and to assess the surgical risk and benefit, is essential. These can be objectively evaluated with a risk prediction model, allowing better communication with family members and improved safety. The results herein suggest that the AS logistic model is a useful tool to predict prolonged ICU stay; however, its predictive performance is not 100%.(18,19) The identification of patients at higher risk of prolonged ICU stay allows the management of beds, as well as the scheduling of surgeries for the most convenient time.(20,21) The use of these scores for the prioritization of patients at higher postoperative morbidity and mortality risk is relevant to consider. Although not evaluated in our study, it could lead to early admission of patients at higher risk of prolonged ICU stay.
This study has limitations that should be mentioned. Its retrospective nature means that some confounding factors could not be excluded. Also, our sample was small when compared to the large samples required for validation, which are hardly achievable in single-center trials. In this study's analysis the length of ICU stay was categorized as a dichotomous variable. The ROC curve analysis is largely built on the assumption of dichotomous results. Additional research on the prediction of post-cardiac surgery ICU length of stay as a continuous variable is warranted. Another important limitation to comment on is that, as our data were collected in one single center, our findings cannot be extrapolated. CONCLUSION In our sample analysis the AS preoperative logistic risk model had better predictive power for ICU length of stay than the additive model. Nevertheless, considering the limitations of the currently available models, a tool to predict the ICU length of stay remains an unmet need. Figure 1 - ROC (Receiver Operating Characteristic) curves for the additive and logistic models. Table 2 - Ambler score distribution and intensive care unit length of stay. ICU - intensive care unit. Results expressed as mean ± standard deviation or number (%). Table 3 - Ambler score variables distribution according to the intensive care unit length of stay. ICU - intensive care unit; SD - standard deviation; BMI - body mass index; AF-AVB - atrial fibrillation-atrioventricular block; VT-VF - ventricular tachycardia-ventricular fibrillation. Results expressed as mean ± standard deviation or number (%). Student's t test. Table 4 - Ambler score predictive and logistic models coefficient C. CI - confidence interval.
2018-04-03T03:37:05.130Z
2010-12-01T00:00:00.000
{ "year": 2010, "sha1": "61a7a874d6436e2fe7ef865ae885b1b0b52e0ca6", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/rbti/a/CTG5tFHmNLjdY8yGNmcfqrm/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "27d0800f81d346b80faac24266e84a316bb7e36f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203284173
pes2o/s2orc
v3-fos-license
School Route Mapping in Semarang This paper aims to examine routes for children to travel to school, so that the routes can guide parents in choosing a travel mode for their children. The methods used in this research are GIS mapping and spatial distribution analysis. This route mapping responds to the fact that parents usually drive their children to school, this mode being dominant even when the distance to school is less than 1 km. The mapping attempts to promote walking and cycling to school for children, based on the concentration of student residence locations within a range of less than 1,000 meters or more than 1,000 meters from the school location. Therefore, the mapping process can help parents make a decision on transportation mode choice. Introduction Increased urban activity is caused by urbanization; according to Kantsebovskaya [1], urbanization is a multi-sectoral phenomenon or process, both in terms of cause and effect. Urbanization can also be interpreted as urban population growth [2]. Therefore, an increase in population will also increase urban activities. The growth of urban areas does not only develop at the city centre, but continues to affect the surrounding area and even the urban suburbs. This results in movement from the urban suburbs to the city centre or vice versa, in order to meet daily needs. Movement from one place to another mobilizes the urban population. Mobilization is defined as a person's ability to move freely, easily and regularly, with the purpose of meeting the needs of daily activities [3]. One of these is students traveling to school, an activity whose volume in urban areas is classified as very high and which is routinely carried out every day, in addition to commuting to work [4]. According to Vovsha P and Petersen E [5], who conducted research related to mode choice for travel, children are active agents in the decision-making process and the development of modal choices for travel distribution. This has led, in some developing countries, including Indonesia, to school activities being one of the causes of inefficient vehicle use. School-age children also have regular movements which take place every day and at certain hours. In general, the movement of school students is also not as dynamic as the movement of adults such as workers. Semarang city is the capital of Central Java province, which has been progressing quite rapidly in terms of urban development and infrastructure. Semarang city began to develop from the centre to the suburbs, which are now dominated by many high-rise buildings; the city can be categorized into two areas, namely the city centre and the suburbs. This is closely related to the distribution of facilities in the central and peripheral areas, which affects the pattern of transportation movements. Educational activity is one aspect that contributes to increasing transport movement. One example is educational facilities for elementary schools in the city of Semarang. Based on the analysis of data from the Statistics Bureau's Semarang City in Figures 2002-2014, population growth in the city centre has decreased, while in the suburbs it has increased by approximately 39,000 inhabitants. The population growth that occurred in the suburbs was not followed by the addition of adequate elementary school facilities. Meanwhile, in terms of level of service, elementary school facilities in the central area are still relatively better than in the suburbs.
This indicates that the majority of elementary school age residents from the suburbs choose to go to school in the city centre. Such conditions cause more and more vehicle movements from the periphery to the city centre, or even vice versa. Spatial aspects related to travel behavior Travel behavior is closely related to spatial aspects. The spatial aspects most often associated with travel behavior are land use patterns and city form. City design can affect travel behavior: if a city is well organized, with pedestrian paths available and low air pollution, people will tend to travel by bicycle or even on foot. Other studies also show that the location of housing and the distance to the city centre affect travel behaviour [6-8]. The relationship can be shown through the connections between city structure, a person's social and economic conditions, accessibility to a destination, and other factors that arise from within a person, all of which can influence the decision to make movements. In general, there are 5 main aspects that influence travel behaviour, namely: the social environment; individual resources; individual motives; distance to various activities; and transportation infrastructure [6,9]. The relationship between distance and facilities An activity-based approach [10-12] offers a conceptual framework. Destinations are usually facilities visited in order to carry out activities, such as workplaces, shops, schools, public offices or restaurants. According to these authors, almost all travel activity arises from the need or desire to carry out other activities that are stationary or fixed in place. Daily life is considered as a sequence of activities carried out by individuals in different places over 24 hours, day and night. Activities are carried out to meet physiological needs (eating, sleeping), institutional needs (work, education), personal obligations (children, shopping) and personal preferences (recreational activities) [12]. People prefer facilities that are close to their daily destinations. For example, if you want to send something, it will be easier to use a post office close to the workplace than one close to the place of residence, which means that with one trip you can do 2 activities. In this example, the destination is the workplace, the activity is sending goods, and the facilities are the workplace and the post office. Movements over shorter distances and times using a private car are found in the city centre for work, shops and other facilities. There are several explanations for this concentration of movement; for instance, the German geographer Walter Christaller (1933/1966) put forward the theory of central places. The city centre becomes the centre of workplaces for the community from the suburbs, of shopping centres and also of transportation facilities, so that it has a high appeal. The concentration of facilities in urban areas also increases the likelihood that visitors will carry out several activities within a smaller geographical area, and by itself increases the competitiveness of urban centres as locations for retail and other services [13]. However, residents do not visit the city centre only for functional reasons. The city centre also hosts a number of recreational and entertainment activities [14].
Movement without using a vehicle will be higher among people who live in the city centre, because more trips can be made over a short distance to a greater number of service centres. Some debates also draw attention to the fact that cities with grid-shaped roads provide higher local-scale connectivity [15]. Methodology This study used data and information obtained mainly from field observations using questionnaires, interviews and document reviews [16]. In this study, there are two stages of analytical techniques, namely analysis of the service range of elementary school locations and analysis of alternative routes to school. The analysis techniques used are explained below. Analysis of the service range of elementary school locations The method used to determine the range of school location services to residential areas is the buffer technique. Buffering is a GIS modeling approach that examines the intersection or connection between the coverage of a service facility and its surroundings by drawing boundaries at the desired ranges. Based on SNI 03-1733-2004 "Tata Cara Perencanaan Lingkungan Perumahan", the service range of a primary school is 1,000 meters. In this study, the school service range to be identified is divided into the following classes: a 200-meter range, a 400-meter range, a 600-meter range, an 800-meter range, a 1,000-meter range, and a range of > 1,000 meters. This analysis aims to determine the locations of the students' residences relative to the service range of the school facilities. Analysis of the route to school The next analysis is the analysis of the route to school, based on the concentration of student residences and the school service range closest to the school. The technique applies network analysis to groups of adjacent student residences located within a school service range of less than 500 meters up to 1,000 meters. The grouped student residence locations are then used to identify alternative routes from the students' residences to school using route analysis techniques in network analysis. Route analysis is a method used to determine the optimal route between two or more objects connected by a road network. In this study the objects connected are the location of the school and the locations of the students' residences. Location of Elementary Schools The research locations are in the city centre (elementary school SD 01 Rejosari) and the suburbs (elementary school SD 02 Sendangmulyo). The choice of school locations is based on the schools with the largest number of registered students in both areas, obtained from the 2017 new student admission data of the Semarang City Education Office (Dinas Pendidikan Kota Semarang). In addition, the data obtained on the students' residences were analyzed against the elementary school (SD) service coverage areas of < 1,000 meters and > 1,000 meters. The environmental conditions around the selected school locations, whether in the centre or on the outskirts of the city, are a consideration for students and parents. An overview of the school environment is illustrated in Table 1. SD 01 Rejosari is located in a residential neighborhood along the Dr. Cipto road corridor, which is the main access from North to South (Rejosari-Tembalang). The distance between the school location and the main road is about 300 meters. There is no adequate pedestrian path around the area. SD 02 Sendangmulyo is located on a neighborhood street in a residential area.
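A minimal sketch of the buffer classification described above, using Shapely in place of the QGIS buffer tool; the school and residence coordinates are invented and assumed to be in a projected coordinate system with metre units.

from shapely.geometry import Point

school = Point(0, 0)                       # projected coords, metres (invented)
residences = [Point(150, 90), Point(500, 420), Point(700, -650),
              Point(980, 120), Point(1300, 200)]

breaks = [200, 400, 600, 800, 1000]        # service range classes from the study
def service_class(pt):
    d = school.distance(pt)                # straight-line distance to the school
    for b in breaks:
        if d <= b:
            return f"<= {b} m"
    return "> 1000 m"

for r in residences:
    print(f"({r.x:.0f}, {r.y:.0f}) -> {service_class(r)}")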
Location of Elementary School The research locations are in the city centre (elementary school SD 01 Rejosari) and the suburbs (elementary school SD 02 Sendangmulyo). These schools were chosen because they had the highest numbers of registering students in their respective locations, according to the 2017 new-student admission data of the Semarang City Education Office (Dinas Pendidikan Kota Semarang). The students' residence data were then analysed against the elementary school (SD) service coverage areas of < 1,000 meters and > 1,000 meters. The environmental conditions around the selected schools, whether in the centre or on the outskirts of the city, are a consideration for students and parents; an overview of the school environments is given in Table 1. SD 01 Rejosari is located in a residential neighbourhood along the Dr. Cipto road corridor, the main access from North to South (Rejosari-Tembalang). The school is about 300 meters from the main road. There is no adequate pedestrian path around the area. SD 02 Sendangmulyo is located on a neighbourhood street in a residential area. There is likewise no adequate pedestrian path around the area. Range Service of Elementary School The travel distance that students must cover every day to reach school is a consideration for students and parents in making decisions. Both will weigh how the distance separating home from school can be covered easily, quickly and safely. Figure 1 and Figure 2 show the school locations relative to student residences in the range from 200 meters to > 1,000 meters. The travel patterns of elementary school students in the central area and on the outskirts of Semarang City are of the external-internal type: movement towards the two primary schools originates not only from within the school service range (< 1,000 meters) but also from outside it (> 1,000 meters). For the elementary school in the downtown area, 15% of trips originate externally (> 1,000 meters) and 85% internally (< 1,000 meters); for the suburban school the figures are 44% external and 56% internal. Elementary School Route Mapping Based on the route analysis for the city-centre school (SD 01 Rejosari), routes can be grouped into two parts, within ranges of < 500 meters and > 500 meters. In the < 500 meters range there are 3 routes, while in the > 500 meters range there is 1 route (see Figure 3). For the suburban school (SD 02 Sendangmulyo) the routes can likewise be grouped into ranges of < 500 meters and > 500 meters: there are 2 routes in the < 500 meters range and 2 routes in the > 500 meters range (see Figure 4). Conclusions The analysis shows that the widely distributed travel patterns and the trips routinely made every day by elementary school students, both in the centre and in the suburbs, indirectly affect city travel. The regular movement of students to school needs to be supported by good pedestrian facilities around the school. To support the provision of such facilities in the school environment, routes to school must be determined. Determining routes to school on the basis of concentrations of student residences within a distance of less than 1 kilometer from the school location is expected to increase the proportion of students travelling to school on foot or by bicycle.
2019-09-17T01:08:10.054Z
2019-08-27T00:00:00.000
{ "year": 2019, "sha1": "d712e3001c92bdb4ad8b323ce3c412204e62a0b5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/313/1/012015", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b1f9c044da221f34c13b01a67ba7abeb047ce7eb", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Business" ] }
6345597
pes2o/s2orc
v3-fos-license
Gastrointestinal bleeding and massive liver damage in neuroleptic malignant syndrome. BACKGROUND Neuroleptic malignant syndrome (NMS) is a rare side effect of antipsychotic therapy characterized by fever, muscular rigidity, altered mental status, increased level of serum creatinine phosphokinase, and increased number of white blood cells. The mortality rate of patients with NMS remains elevated. METHODS We examined the clinical records of patients diagnosed with severe NMS admitted to the Clinical Toxicology Unit, Florence University Hospital, between 1990 and 2004. RESULTS Eight patients presented with this neurological disorder. All were treated with supportive therapy, which included dantrolene, levodopa/benserazide, benzodiazepines, metamizole and/or paracetamol, and antibiotics. Five survived and three died. Of the three deceased, two had large hemorrhages in the gastrointestinal tract, and one had massive liver damage and diffuse hemorrhages throughout the body. CONCLUSION Our results suggest that gastrointestinal bleeding is a frequent cause of death in NMS patients. Bleeding may occur as a consequence of commonly accepted medical treatments (especially the use of cyclooxygenase inhibitors as antipyretic agents) and NMS-induced changes in blood coagulation status. To increase the survival rate of these patients, it is necessary to avoid using drugs that may facilitate gastrointestinal lesions and to utilize procedures known to decrease the risk of bleeding. Introduction Neuroleptic malignant syndrome (NMS) is a rare, potentially fatal complication of antipsychotic therapy and may occur in patients treated with either typical or atypical neuroleptic agents (Shalev et al 1989; Robb et al 2000; Stanfield and Privette 2000). The syndrome is characterized by fever, muscular rigidity, altered mental status, increased level of serum creatinine phosphokinase, and increased number of white blood cells (Ebadi et al 1990; Pelonero et al 1998; Adnet et al 2000). It has also been described after the withdrawal of dopaminergic agents, such as L-dopa or inhibitors of catechol-O-methyltransferase, in patients affected by parkinsonian disorders (Friedman et al 1985; Iwuagwu et al 2000). These observations suggest that changes in dopamine receptor function may be largely responsible for the clinical findings present in these patients. The proposed medical treatment of the syndrome is: (1) elimination of neuroleptic treatment; (2) supportive therapy; (3) administration of dopamine receptor agonists or agents able to increase the function of the dopaminergic system; (4) administration of dantrolene, a compound able to inhibit the release of Ca2+ from the sarcoplasmic reticulum, thus reducing muscle tone and heat production; and (5) administration of antipyretic agents to reduce body temperature (Ward et al 1986; Kaufmann and Wyatt 1987; Rosenberg and Green 1989; Tsutsumi et al 1998). It is widely accepted that lethal complications may occur in variable percentages (from 1% to 50%) of these patients and that the most common causes of death are deep venous thrombosis with pulmonary embolism, acute renal failure, pneumonia and other types of pulmonary failure (adult respiratory distress syndrome, especially with rhabdomyolysis), myocardial infarction, and sepsis (Kaufmann and Wyatt 1987; Shalev et al 1989).
In a retrospective evaluation of the cases admitted to the Clinical Toxicology Unit, Florence University Hospital, we found that gastrointestinal bleeding and massive liver failure with diffuse hemorrhages could result in death. Here we report our experience and suggest that careful control of gastrointestinal function and coagulation status may significantly reduce the mortality rate in NMS patients. Methods We examined the clinical records of patients admitted to the Toxicology Unit of Florence University Hospital between 1990 and 2004. This unit admits patients with drug dependence, drug side effects, poisoning, and those who have attempted suicide. Eight out of fifteen thousand patients presented a typical diagnosis of NMS with all the key features of the syndrome, as reported in Table 1. Results The drug involved and the age and outcome of the eight NMS-diagnosed patients are reported in Table 2. Five of these patients completely recovered, while three died. Among the latter three, two were under treatment with chlorpromazine, and one was treated with levomepromazine plus amitriptyline (see Table 2). Thus, all the patients with poor outcomes had been treated with agents able to antagonize not only dopamine but also muscarinic receptors (Costa et al 1978; Kwok and Mitchelson 1982). Finally, it is important to note that no history of gastrointestinal pathology was previously present in these patients. Case reports Case 1 A 31-year-old female with a psychiatric diagnosis of bipolar disorder was treated with chlorpromazine (300 mg/day), haloperidol (12 mg/day), diazepam (20 mg/day), promazine (10 mg/day), and orphenadrine (100 mg/day). She was found agitated and confused, with increased muscular tone and diffuse tremors. Physical examination of the abdomen and thorax was negative. Her temperature was 39.3 °C, heart rate 120 beats per minute with regular rhythm, and blood pressure 140/80. Laboratory findings were: serum creatinine phosphokinase (CPK) 1895 U/L, lactate dehydrogenase 835 U/L, WBC 14 300/µL, hemoglobin 10.3 g/dL, hematocrit 31%, and platelets 181 000/mm3. The patient's serum Na+ level was 142, K+ 4, and Cl− 108 mEq/L. Blood urea nitrogen was 1.37 g/L and creatinine 6 mg/dL. The brain CT showed no signs of tumors or cerebral or subarachnoid hemorrhage. A lumbar puncture showed clear CSF with normal intracranial pressure and no signs of bacterial or viral infections. Supportive therapy was started with the administration of fluids, electrolytes, and antibiotics. After formulation of the diagnosis, the patient received dantrolene (60 mg intravenously [IV] every 8 h) and bromocriptine (2.5 mg orally, every 12 h). For the next two weeks, the patient improved and was transferred to a psychiatric ward where she received chlordiazepoxide (100 mg/day). However, her general condition deteriorated and ten days later she returned to the Clinical Toxicology Unit unconscious. She had breathing difficulties, and her serum electrolytes were Na+ 176, K+ 6.5, Cl− 153 mEq/L, and blood urea nitrogen was 3.17 g/L. She was treated with fluids, nutrients, diuretics, antibiotics, and cortisol (1 g IV). Within two days her general condition again transiently improved. Since the patient was agitated, the psychiatrist prescribed diazepam (20-40 mg/day) and chlorpromazine (50 mg/day intramuscularly [IM]). The fever returned, her blood pressure suddenly decreased, and blood hemoglobin content reached 6.5 g/dL with hematocrit at 19.6%. Partial thromboplastin time (PTT) was 40 s.
Ranitidine (200 mg IV three times a day) and packed red cells (5 units in three days) were promptly administered together with supportive therapy. In the next few days, the patient had repeated episodes of melena and emesis with the characteristic "coffee grounds" appearance. Supportive therapy and packed red cells were repeatedly administered, but her general condition deteriorated and the patient died. Case 2 A 43-year-old man who had been treated with benzodiazepine, levomepromazine, and tricyclic antidepressants for bipolar depression was admitted to the unit. His doctor reported that in the week before his admission the patient complained of repeated loss of equilibrium with falls. On admission, he appeared unresponsive and in a stuporous state. His blood pressure was 120/80, pulse 100 beats/min with regular rhythm, muscular tone was rigid with tremor, and his body temperature was 39 °C. An abdominal examination was unremarkable for acute findings and no pathological or abnormal sounds were present during auscultation of the lungs. A few petechiae and ecchymoses were noticed throughout the body. The patient was immediately treated with an infusion of fluids and antibiotics (ceftriaxone 2 g). A few hours later, when the diagnosis of NMS became clear, dantrolene (60 mg every 8 h) was administered IV and levodopa/carbidopa (250/25 mg every 12 h) was administered by nasogastric tube. Since body temperature increased to 40.6 °C, sodium metamizole (1 g IV) was also administered along with ice packing. On the second day of admission, the patient's general condition deteriorated. Blood pressure slowly decreased to 60 mmHg, blood CPK levels reached 135 000 U/L, blood hemoglobin decreased to 6.9 g/dL, and platelets decreased to 100 000/mm3. PTT increased to 43 s and plasma fibrinogen content to 610 mg/dL. Standard resuscitation therapy with infusion of plasma-expanders, hydrocortisone (2 g IV), and slow infusion of dopamine was unsuccessfully attempted. The patient died 36 h after admission to the hospital. The main findings detected at autopsy were massive liver necrosis with petechiae diffused in most tissues, including brain. Case 3 A 60-year-old man suffering from paranoid schizophrenia had been treated, until a few days before admission, with chlorpromazine (100 mg/day), haloperidol (5 mg/day), zuclopenthixol (50 mg/day), carbamazepine (800 mg/day), and chlordiazepoxide (50 mg/day). On the day of admission, he was rather confused, muscular tone was rigid with tremors, body temperature 40 °C, blood pressure 120/70, heart rate 100/min with a normal rhythm. The abdominal examination was unremarkable for acute findings but showed an appreciable increase in liver volume. The base of the right lung was dull with normal fremitus. Laboratory findings were: CPK 5400 U/L, WBC 14 300/µL, hemoglobin 17.8 g/dL, hematocrit 50%, and platelets 156 000/mm3. Plasma fibrinogen content was 333 mg/dL and PTT 25.7 s. The patient's serum Na+ level was 144, K+ 3.35, and Cl− 107 mEq/L. Blood urea nitrogen was 0.98 g/L and creatinine 2.4 mg/dL. Arterial blood gases were: PaO2 70.1, PaCO2 37.7, pH 7.40. Supportive therapy was immediately started with infusion of fluids, electrolytes, and antibiotics (ceftazidime 2 g three times a day). Dantrolene (60 mg every 8 h IV) and levodopa/carbidopa (250/25 mg every 12 h through the nasogastric tube) were administered as soon as the diagnosis of NMS was formulated. To reduce the high fever, metamizole (1 g IV) along with ice packing was repeatedly used.
In the next four days, fever remained elevated, muscular tone increased, and level of consciousness decreased. A progressive decrease in blood platelet content, together with an elevation of fibrin degradation products (D-dimer: 1144 ng/mL), suggested activation of fibrinolysis and possible disseminated intravascular coagulation. The abdomen of the patient was evaluated during a surgical consult but, because of the general increase in muscular tone, it was not possible to reach an acceptable diagnosis. Seven days after admission, the patient had a massive hematemesis, his blood pressure decreased to 60 mmHg and, in spite of standard resuscitation therapy, he died. At autopsy, gastric and duodenal ulcers were found, together with an acute necrotizing enterocolitis and an acute purulent peritonitis. Pulmonary edema was considered the immediate cause of death. Discussion Our clinical observations show that bleeding is an important cause of death in patients with NMS and suggest that actions aimed at avoiding or reducing bleeding and gut damage could significantly improve the prognosis of this "malignant" disease. Previous reports have shown that death occurs because of cardiovascular collapse, pulmonary embolism, aspiration pneumonia, or renal failure due to rhabdomyolysis (Kaufmann and Wyatt 1987). Other serious complications in these patients are myocardial infarction, sepsis, and disseminated intravascular coagulation (Pelonero et al 1998). Case 2 had massive liver damage and signs of blood loss, possibly due to intravascular coagulation, while case 3 had clear laboratory signs of intravascular coagulation that probably contributed to the development of intestinal perforation and massive loss of blood. There are also a number of pharmacological reasons that could explain why gastrointestinal bleeding was frequent in our patient series. The most obvious is probably the use of nonsteroidal anti-inflammatory drugs (NSAIDs) to reduce body temperature. This is a commonly accepted procedure for the treatment of elevated fever (Kaufmann and Wyatt 1987), in spite of the fact that NSAIDs inhibit prostaglandin synthesis, thus decreasing epithelial mucus formation and mucosal resistance to injury. NSAIDs may cause lesions not only in the stomach, but in the duodenum, ileum, and colon (Wolfe et al 1999). In our clinical records, NSAID administration was associated with that of the H2 receptor antagonist ranitidine, a procedure that was obviously not sufficient to prevent tissue damage and bleeding. NSAIDs inhibit both cyclooxygenase-1 (COX-1), a constitutive enzyme present in most cells, including platelets, and COX-2, an inducible enzyme particularly abundant in neutrophils and in macrophages (Vane et al 1998). Inhibition of platelet function could certainly have facilitated bleeding in NMS patients (Patrono et al 1985). Dantrolene was another drug administered to NMS patients. It has been previously observed that patients treated with this drug may suffer a number of side effects including gastric irritation, abdominal cramps, and constipation (Patrono et al 1985). These side effects are not surprising, since dantrolene inhibits calcium flux across the sarcoplasmic reticulum and may inhibit the depolarization-induced contraction of smooth muscles, thus changing gastrointestinal and colon motility (Ward et al 1986).
Dantrolene administration may also cause important liver damage (Utili et al 1977; Donegan et al 1978), and its use may certainly be involved in causing the massive liver necrosis of case 2. All the patients also received agents able to stimulate dopamine receptors. Case 1 was treated with bromocriptine while cases 2 and 3 with L-dopa/benserazide. Dopamine receptor agonists were administered on the assumption that they could facilitate recovery. It is indeed widely accepted that when dopamine is locally injected in the pre-optic anterior hypothalamus it reduces body temperature (Cox et al 1978), while neuroleptics injected into the basal ganglia may cause muscular rigidity and generate heat (Adnet et al 2000). Dopamine interacts with at least 5 receptor subtypes (Emilien et al 1999) and it is not clear which of them is involved in human thermoregulation. It is known, however, that dopamine receptor agonists (including dopamine, bromocriptine, and apomorphine) affect gastric and intestinal secretion and motility, often leading to emesis (Morris 1978; Parkes 1981). Thus it is reasonable to assume that systemic administration of dopamine agents could increase secretion in the gastrointestinal tract, cause alterations of peristalsis, and contribute to the fatal outcome of cases 1 and 3. Finally, all the patients with fatal outcome had been treated for prolonged periods (years) and were under treatment, at the appearance of NMS symptoms, with drugs able to antagonize muscarinic receptors. Case 1 had received chlorpromazine together with orphenadrine, case 2 received levomepromazine and amitriptyline, and case 3 had received chlorpromazine, zuclopenthixol, and carbamazepine. All these agents have a significant affinity for muscarinic receptors (Costa et al 1978; Kwok and Mitchelson 1982). It is widely accepted that these receptors play a key role in the control of gastrointestinal motility and secretion (Stockbrugger 1988; Nelson et al 1996; Ehlert et al 1999), and that prolonged treatment with muscarinic receptor antagonists causes supersensitivity of these receptors. This supersensitivity may be easily observed as an abstinence syndrome in patients treated for prolonged periods with antidepressants. Vomiting and diarrhea together with perspiration are the main signs of this pathology (Dilsaver and Greden 1984). It is therefore reasonable to assume that withdrawal of muscarinic antagonists contributed to an increase in gastrointestinal motility and secretion in cases 1 and 3, who died with gastrointestinal bleeding. The three fatal cases described suggest that the mortality rate is still elevated in patients with severe NMS. They also suggest that in the management of these patients it may be useful to: (1) avoid the use of NSAIDs; (2) carefully monitor blood coagulation status to rapidly detect and possibly correct signs of intravascular coagulation; and (3) use agents able to minimize the risk of mucosal damage in the gastrointestinal tract (proton pump inhibitors and/or prostaglandin agonists). Finally, the elevated mortality rate in our patient series, in which all the patients received dantrolene and dopamine receptor agonists, suggests that further clinical studies are necessary before assuming that the administration of these agents is a useful therapeutic procedure.
2014-10-01T00:00:00.000Z
2005-09-01T00:00:00.000
{ "year": 2005, "sha1": "8c76ca97e6551d11d0fddf9c31fb808456190a01", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "deeaefaa4466ef84d9bbc99cea6cfe3d2c7a13e1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119458613
pes2o/s2orc
v3-fos-license
Rolling Skyrmions and the Nuclear Spin-Orbit Force We compute the nuclear spin-orbit coupling from the Skyrme model. Previous attempts to do this were based on the product ansatz, and as such were limited to a system of two well-separated nuclei. Our calculation utilises a new method, and is applicable to the phenomenologically important situation of a single nucleon orbiting a large nucleus. We find that, to second order in perturbation theory, the coefficient of the spin-orbit coupling induced by pion field interactions has the wrong sign, but as the strength of the pion-nucleon interactions increases the correct sign is recovered non-perturbatively. Introduction The spin-orbit coupling is an important ingredient in nuclear structure theory. Its presence implies that it is energetically favourable for the spin and orbital angular momentum of a nucleon to be aligned, particularly if this nucleon is moving close to the surface of a larger nucleus. This explains the phenomenon of magic numbers, and it is important in the description of halo nuclei, to name just two examples. Unlike the spin-orbit force encountered in the study of electron shells of an atom, the nuclear spin-orbit force is not merely a relativistic effect but is caused by the strong interaction physics of nuclei. The Skyrme model is an effective description of QCD, and a candidate model of nuclei with a topologically conserved baryon number. It successfully accounts for phenomena such as the stability of the alpha-particle, the long-range forces between nuclei, and quantum numbers of excited states of very light nuclei. Some of the recent successes of the model include reproducing the excited states of oxygen-16 [1] and carbon-12 [2], nuclear binding energies of the correct magnitude [3], accurately modelling neutron stars [4] and a geometric explanation for certain magic nuclei [5]. However, one of the challenges in analysing the Skyrme model has been accounting for the spin-orbit coupling. There have been several attempts to calculate the spin-orbit term in the nucleon-nucleon potential [6,7,8,9,10]. Most of these calculations were only valid for large separations and were also perturbative, and so corresponded to calculations taking into account one- and two-pion exchange. Almost all obtained the nucleon-nucleon spin-orbit coupling with the wrong sign, although [6] obtained the correct sign by introducing additional mesons in the model. The conventional description of the spin-orbit force is in the framework of relativistic mean field theory [11], which couples nucleons to several mesons (including the pion, σ, ρ and ω). An interesting perspective was put forward by Kaiser and Weise [12]: they argued that the spin-orbit coupling receives several contributions, including a wrong-sign contribution from pion exchange; this is compensated by other effects, including meson exchange and three-body forces. This seems to be related to the sign problem in the Skyrme model. In this article we investigate in a novel way how a short-range spin-orbit coupling arises in the Skyrme model. Unlike relativistic mean field theory, our calculation is non-relativistic and incorporates pions but no other mesons. Our calculations are for a somewhat simplified model, but we hope this model captures the essence of the effect. Our main discovery is that the sign of the spin-orbit coupling is wrong at weak coupling, where a perturbative approach would be valid.
However, the sign is correct when the coupling between the nucleon and the surface of the nucleus with which it interacts is stronger. A key property of a Skyrmion, distinguishing it from an elementary nucleon, is that it has orientational degrees of freedom. It is a spherical rigid rotor. After quantisation [13], the basic states are nucleons with spin 1/2, but there are also excited states with spin 3/2 corresponding to Delta resonances, and further states of higher spin and higher energy that play no significant role. The states simultaneously have isospin quantum numbers (isospin 1/2 for the nucleons and 3/2 for the Deltas). In our model, a dynamical Skyrmion interacts quantum mechanically with a background multi-Skyrmion field modelling the nuclear surface. The interaction involves a potential that depends on the Skyrmion orientation and its position, and the potential has a strength parameter that we consider as adjustable. When the parameter is small, a perturbative treatment works. However, the spin-orbit coupling has the wrong sign in this regime. When the parameter is larger (but not too large), the spin-orbit coupling for the Skyrmion has the correct sign. Indeed, in this latter regime, a better approximation to the Skyrmion wavefunction is to say that the orientation has its probability concentrated near the minimum of the orientational potential, with this minimum varying with the Skyrmion's location on the surface. The quantum state is now close to the classical picture of a Skyrmion rolling over the nuclear surface, maintaining a minimal orientational potential energy. This classical rolling motion gives the correct sign for the spin-orbit coupling. In earlier work, Halcrow and one of the present authors investigated a model of this type [14], but they only treated the case of a disc interacting with another disc in two dimensions. When the potential is strong, the model becomes a quantised version of cog wheels rolling around each other. Here we do better, by treating a realistic three-dimensional Skyrmion interacting quantum mechanically with a nuclear surface. However, we still need to make various approximations. For example, we assume the height of the Skyrmion above the surface is fixed. Our analysis is based on the following well-known interpretation of the phenomenological spin-orbit coupling. Consider a nucleon near the surface of a spherical nucleus. Suppose that in addition to the usual kinetic terms, the hamiltonian for the nucleon contains a term of the form

$$ a\,\vec S\cdot(\vec N\times\vec P) \,, \qquad (1.1) $$

where $a$ is a parameter, $\vec S$ is the spin of the nucleon, $\vec P$ is its momentum, and $\vec N$ is an inward-pointing vector normal to the surface, which may be interpreted as the gradient of the density of nuclear matter. Since the position vector $\vec r$ of the nucleon equals $-r\,\vec N/|\vec N|$, this term equals $-(a|\vec N|/r)\,\vec S\cdot\vec L$, where $\vec L = \vec r\times\vec P$ is the orbital angular momentum of the nucleon. This is the usual form of the spin-orbit coupling. In order to give the correct magic numbers, the spin-orbit coupling must prefer spin and angular momentum to be aligned rather than anti-aligned, so the parameter $a$ needs to be positive. The advantage for us of the formula (1.1) is that it applies when the nucleon is interacting with an essentially flat nuclear surface, as in the model we will discuss below. We will refer to (1.1) as the spin-momentum coupling. Note that $\vec N$ is essential here, and implies that there is no coupling for an isolated nucleon, nor for a nucleon deeply embedded inside a nucleus.
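Spelled out, the step from (1.1) to the conventional $\vec S\cdot\vec L$ form uses only the substitution for $\vec N$ just given:

$$ \vec N = -\frac{|\vec N|}{r}\,\vec r \quad\Longrightarrow\quad a\,\vec S\cdot(\vec N\times\vec P) = -\frac{a\,|\vec N|}{r}\,\vec S\cdot(\vec r\times\vec P) = -\frac{a\,|\vec N|}{r}\,\vec S\cdot\vec L \,, $$

so a positive $a$, with $\vec N$ inward-pointing, lowers the energy of states in which $\vec S$ and $\vec L$ are aligned.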
There are two practical difficulties with this approach: the first is that the interaction between Skyrmions and multi-Skyrmions is poorly understood at short distances, and the second is that the complicated spatial structure of known multi-Skyrmions with finite baryon number would make the calculations laborious. We solve the first of these problems by working in the lightly bound version of the Skyrme model [15], for which multi-Skyrmions and their interactions are accurately captured by a point particle description, although the particles still have orientational degrees of freedom. We solve the second problem by supposing that the multi-Skyrmion representing the core of the nucleus is large, and approximating its surface by a plane. Since Skyrmions in the lightly bound model naturally arrange themselves to sit at vertices of an FCC lattice, this surface has a high degree of symmetry, making the calculation tractable. In the next section we review the 2D toy model of [14], but in a modified and simplified form. Here the dynamical, Skyrmion-like object is a coloured disc, and it moves in the background of a straight, periodically coloured rail, rather than around a larger coloured disc as in [14]. The potential depends on the colour difference between the disc and the rail at their closest points. The translational and rotational motion of the disc is quantised, and we compare the result of a perturbative treatment, valid when the potential is weak but which leads to a spin-momentum coupling of the wrong sign, with a non-perturbative approach that can deal with stronger coupling but is still algebraically straightforward.

Figure 1: Coloured disc on a fixed coloured rail. One period of the rail colouring is shown.

The price to pay for working non-perturbatively is that we must assume that the moment of inertia of the disc is small; in our perturbative calculation, no such assumption is necessary. The strong coupling result gives the correct sign for the spin-momentum coupling. In the later sections we perform similar calculations in the more realistic 3D setting. Here, the Skyrmion is visualised as a coloured sphere moving relative to a coloured surface, and the potential again depends on the colour difference at the closest points. The calculations can be done by hand, exploiting the assumed lattice symmetries of the (planar) nuclear surface, but are nevertheless considerably more complicated. The reader may wish to skip the details here. Disc on a rail We start with a two-dimensional toy model of spin-momentum coupling, rather similar to what was analysed in [14]. Consider a vertical disc at a fixed height above a fixed, straight rail. The disc can move along the rail and also rotate. Both the edge of the disc and the rail are coloured, and the potential energy is a periodic function of the colour difference at their closest points. When the colours match, the potential energy is lowest. Let us assume that the disc is coloured so that for the potential to remain at its lowest value as the disc moves classically, the disc needs to roll along the rail. See Figure 1. This model is similar to a cog on a rack rail, which can only roll, but not slip. Classically there is spin-momentum coupling, as the (clockwise) spin of a rolling cog is a positive multiple of its linear momentum. Let X be a linear coordinate along the rail. The colour χ along the rail is an angular field variable, and as with an ordinary angle we assume χ takes any real value and identify values that differ by 2π.
We suppose that χ = X, so the colour is periodic along the rail, with period 2π. Let the disc have radius 1 and assume that when it is in its standard orientation, the colour is the same as the angle around the disc measured from the bottom in an anticlockwise direction, i.e. the colour is χ at angle χ. Suppose now that the position and orientation of the disc are (x, θ), where x is the location of the centre of the disc, projected down to the X-axis, and θ is the angle by which the disc is rotated clockwise relative to its standard orientation. The bottom of the disc then has colour θ, and the rail under this point has colour x. We suppose the potential energy of the disc in this configuration is

$$ V(x,\theta) = -V_0\cos(x-\theta) \,. \qquad (2.1) $$

We next introduce some dynamics. Suppose the disc has unit mass, and moment of inertia Λ, so the Lagrangian for its motion is

$$ L = \frac{1}{2}\dot x^2 + \frac{1}{2}\Lambda\dot\theta^2 + V_0\cos(x-\theta) \,. \qquad (2.2) $$

The equations of motion are

$$ \ddot x = -V_0\sin(x-\theta) \,, \qquad \Lambda\ddot\theta = V_0\sin(x-\theta) \,. $$

Note that as the potential only depends on x − θ, there is a conserved quantity $\dot x + \Lambda\dot\theta$. One solution of the equations is x = µt, θ = µt for any constant µ — this is rolling motion (checked numerically in the sketch below). The conjugate momenta to x and θ are

$$ p = \dot x \,, \qquad s = \Lambda\dot\theta \,, \qquad (2.3) $$

and the Hamiltonian is

$$ H = \frac{1}{2}p^2 + \frac{1}{2\Lambda}s^2 - V_0\cos(x-\theta) \,, \qquad (2.4) $$

with conserved quantity p + s. We now quantise. Stationary wavefunctions are of the form Ψ(x, θ), and the momentum and spin operators are

$$ p = -i\frac{\partial}{\partial x} \,, \qquad s = -i\frac{\partial}{\partial\theta} \,. \qquad (2.5) $$

The stationary Schrödinger equation is

$$ \Big(\frac{1}{2}p^2 + \frac{1}{2\Lambda}s^2 - V_0\cos(x-\theta)\Big)\Psi = E\,\Psi \,, \qquad (2.6) $$

where the operator on the left hand side is the Hamiltonian (2.4) expressed in terms of the momentum and spin operators. The configuration space of the disc has first homotopy group Z, so wavefunctions can acquire a phase when θ → θ + 2π. Bearing in mind that we are modelling a fermionic nucleon interacting with a large nucleus, we choose this phase to be π. Wavefunctions then have a Fourier expansion

$$ \Psi(x,\theta) = \sum_{n\ \mathrm{odd}} \psi_n(x)\, e^{i\frac{n}{2}\theta} \,, \qquad (2.7) $$

a superposition of half-integer spin states. The free motion, in the absence of the potential, has separately conserved momentum p and spin s, and the basic stationary state is

$$ \Psi(x,\theta) = e^{ipx}\, e^{is\theta} \,, \qquad (2.8) $$

where p is arbitrary and s is half-integer. This state has energy

$$ E = \frac{1}{2}p^2 + \frac{1}{2\Lambda}s^2 \,. \qquad (2.9) $$

We now suppose that Λ is small, so that $\frac{1}{\Lambda}$ is large compared to $V_0$ and to $p^2$. The expressions we derive later will only be valid provided $p^2 \ll \frac{1}{\Lambda}$. In this regime, the low energy states are those with s = ±1/2. This is physically what we are interested in. Spin 3/2 nucleons (i.e. Delta resonances) have energy about 300 MeV greater than spin 1/2 nucleons, and spin-orbit energies are much less than this, of order 1 MeV. So we mostly neglect the small parts of the wavefunction with s = ±3/2 or larger. Because of the restriction to n = ±1 states, i.e. those with s = ±1/2, the wavefunction reduces to

$$ \Psi(x,\theta) = \psi_1(x)\,e^{i\frac{1}{2}\theta} + \psi_{-1}(x)\,e^{-i\frac{1}{2}\theta} \,. \qquad (2.10) $$

A stationary state like this is not strictly compatible with the Schrödinger equation, because the potential couples it to s = ±3/2 states. We can deal with this by calculating the matrix form of the Hamiltonian restricted to this subspace of wavefunctions. Recall that there is the conserved quantity p + s. This implies that if $\psi_1(x) = e^{ipx}$ then $\psi_{-1}(x) = A\,e^{ip'x}$, where $p' = p + 1$, for some amplitude A. Momentum p itself is not a good label for states, but instead we can use r = p + s, where r takes any value in the range (−∞, ∞). The wavefunction (2.10) becomes, for a definite value of r,

$$ \Psi(x,\theta) = e^{i(r-\frac{1}{2})x}\, e^{i\frac{1}{2}\theta} + A\, e^{i(r+\frac{1}{2})x}\, e^{-i\frac{1}{2}\theta} \,. \qquad (2.11) $$

Alternatively, the crystal momentum k could be defined to be p mod 1 and the (first) Brillouin zone to be −1/2 ≤ k ≤ 1/2, but because of the restricted range of spins, we do not need the formalism of Bloch states mixing momentum p with all its integer shifts.
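As flagged above, here is a minimal numerical check of the classical disc dynamics: it integrates the equations of motion following (2.2) and confirms that $\dot x + \Lambda\dot\theta$ is conserved and that x = θ = µt remains a solution. Parameter values are arbitrary illustrations.

```python
# Minimal check of the classical disc dynamics; parameter values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

V0, Lam = 0.8, 0.1    # potential strength and moment of inertia (illustrative)

def rhs(t, y):
    x, th, xdot, thdot = y
    f = V0 * np.sin(x - th)
    return [xdot, thdot, -f, f / Lam]   # x'' = -V0 sin(x-th), Lam th'' = V0 sin(x-th)

# Rolling initial condition: x = theta, with common velocity mu.
mu = 0.5
sol = solve_ivp(rhs, (0, 20), [0.0, 0.0, mu, mu], rtol=1e-10, atol=1e-12)

x, th, xdot, thdot = sol.y
print("max |x - theta| on rolling solution:", np.max(np.abs(x - th)))   # ~ 0
print("spread of conserved xdot + Lam*thdot:", np.ptp(xdot + Lam * thdot))  # ~ 0
```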
We now work with basis states $\frac{1}{2\pi}e^{i(r-\frac{1}{2})x}e^{i\frac{1}{2}\theta}$ and $\frac{1}{2\pi}e^{i(r+\frac{1}{2})x}e^{-i\frac{1}{2}\theta}$. These are normalised in {0 ≤ x ≤ 2π, 0 ≤ θ ≤ 2π}. The matrix elements of the Hamiltonian (2.4), or equivalently the operator on the left of (2.6), are

$$ H_{2\times2} = \begin{pmatrix} \frac{1}{2}\big(r-\frac{1}{2}\big)^2 + \frac{1}{8\Lambda} & -\frac{1}{2}V_0 \\ -\frac{1}{2}V_0 & \frac{1}{2}\big(r+\frac{1}{2}\big)^2 + \frac{1}{8\Lambda} \end{pmatrix} \,, \qquad (2.12) $$

where the diagonal terms are kinetic contributions. The upper off-diagonal term comes from the matrix element of the potential and the lower off-diagonal term is the same, by hermiticity. The potential makes no contribution to the diagonal terms. It is now convenient to express the energy eigenvalues E of $H_{2\times2}$ as

$$ E = \frac{1}{2}\varepsilon + \frac{1}{8\Lambda} \,. \qquad (2.13) $$

The matrix with eigenvalues ε is

$$ \tilde H_{2\times2} = \begin{pmatrix} \big(r-\frac{1}{2}\big)^2 & -V_0 \\ -V_0 & \big(r+\frac{1}{2}\big)^2 \end{pmatrix} \,, \qquad (2.14) $$

and the eigenvalue equation $\det(\tilde H_{2\times2} - \varepsilon\,\mathbb{1}) = 0$ reduces to

$$ \varepsilon^2 - \Big(2r^2 + \frac{1}{2}\Big)\varepsilon + \Big(r^2 - \frac{1}{4}\Big)^2 - V_0^2 = 0 \,, \qquad (2.15) $$

with solutions

$$ \varepsilon_\pm(r) = r^2 + \frac{1}{4} \pm \sqrt{r^2 + V_0^2} \,. \qquad (2.16) $$

The spectrum has two branches, the lower branch ε−(r) and the upper branch ε+(r), and is symmetric under r → −r. When V0 = 0 the spectrum simplifies to $\varepsilon(r) = (r\pm\frac12)^2$, whose graph consists of two intersecting parabolas, with minima at r = −1/2 and r = 1/2, and a crossover at r = 0. We are mainly interested in low energy states on the lower branch, near the minima of ε−(r). There is an important bifurcation at a critical strength of the potential, $V_0 = \frac12$. For $V_0 < \frac12$, ε− has two minima at $r = \pm\sqrt{\frac{1}{4} - V_0^2}$ and a local maximum at r = 0. For $V_0 > \frac12$, there is just one minimum at r = 0; here p = ±1/2, so the crystal momentum k is located on the boundary of the Brillouin zone. The upper branch ε+(r) has simpler behaviour, as it just has a minimum at r = 0 for all positive V0. Figure 2 shows graphs of the eigenvalue spectrum for two typical values of V0. Recall the form of the (unnormalised) wavefunction (2.11). We evaluate A using the condition that $\binom{1}{A}$ is the eigenvector of the matrix (2.14). On the lower branch of the spectrum

$$ A = \frac{-r + \sqrt{r^2 + V_0^2}}{V_0} \,, \qquad (2.17) $$

and on the upper branch

$$ A = \frac{-r - \sqrt{r^2 + V_0^2}}{V_0} \,. \qquad (2.18) $$

Note that |A|² = 1 at r = 0 on both branches, so spins ±1/2 are superposed there with equal probability. On the lower branch, the total (unnormalised) wavefunction at r = 0 is $2\cos\frac{1}{2}(x-\theta)$, so the highest probability occurs for θ = x, where the disc is oriented so as to minimise the potential energy. This is compatible with a rolling motion. Except in cases where |A| is very close to 0, or much larger than 1, the quantum states of the disc cannot be thought of as having a definite momentum p or spin s, because the potential strongly superposes states where these have different values. So to consider the correlation between the momentum and spin, we work with their expectation values ⟨P⟩ and ⟨S⟩. The expectation value of the spin is

$$ \langle S\rangle = \frac{1 - |A|^2}{2\,(1 + |A|^2)} \,, \qquad (2.19) $$

where A is given by expressions (2.17) and (2.18), respectively, on the lower and upper branches. The expectation value of momentum follows immediately, as p + s = r for both contributing states in (2.11), so

$$ \langle P\rangle = r - \langle S\rangle \,. \qquad (2.20) $$

Graphs of ⟨P⟩ and ⟨S⟩ as functions of r are shown in Figure 3. They are plotted together with ε− for states on the lower branch, for the typical values of V0 we selected before; for states on the upper branch, they are plotted together with ε+. Interesting to note is that ⟨P⟩ vanishes wherever E (or equivalently ε) is stationary with respect to r, as one can see from the graphs. This is because

$$ \frac{dE}{dr} = \Big\langle \Psi \,\Big|\, \frac{dH_{2\times2}}{dr} \,\Big|\, \Psi \Big\rangle \,, \qquad \frac{dH_{2\times2}}{dr} = \begin{pmatrix} r-\frac{1}{2} & 0 \\ 0 & r+\frac{1}{2} \end{pmatrix} \,, \qquad (2.21) $$

and the right hand side is the matrix form of the momentum operator. Taking expectation values gives the result. (One also needs to use the identity $\langle\frac{d\Psi}{dr}|\Psi\rangle + \langle\Psi|\frac{d\Psi}{dr}\rangle = 0$ for normalised states.) When the potential is relatively weak, such that V0 < 1/2, then ⟨P⟩ passes through 0 at the non-zero minima of ε− on the lower energy branch. On the other hand ⟨S⟩ does not change sign near here. The signs of ⟨P⟩ and ⟨S⟩ are therefore not strongly correlated for these low energy states, and we conclude that for weak potentials there is no significant spin-momentum coupling. Near r = 0, where ε− has a local maximum, ⟨P⟩ and ⟨S⟩ have opposite signs, so momentum and spin are anticorrelated. This is the opposite of the classical correlation of momentum and spin for a rolling motion. Similarly, on the upper energy branch, the expectations of momentum and spin have opposite signs for all r, so they are anticorrelated. When the potential is stronger, such that V0 > 1/2, we find the correlation we are seeking. Here, the low energy states on the lower branch are near r = 0, and we see that ⟨P⟩ and ⟨S⟩ have the same sign. It is straightforward to estimate these quantities analytically for small r. They are

$$ \langle P\rangle \simeq \Big(1 - \frac{1}{2V_0}\Big)\,r \,, \qquad \langle S\rangle \simeq \frac{r}{2V_0} \,, \qquad (2.22) $$

and for V0 > 1/2 both their slopes with respect to r are positive. In fact, because ⟨P⟩ is zero only at r = 0 when V0 > 1/2, there is a spin-momentum correlation of the desired sign for all r, on the lower branch. On the other hand, the momentum and spin are anticorrelated for all r on the upper branch. The conclusion is that the potential has to be quite strong to achieve the spin-momentum coupling for quantum states that mimics the classical phenomenon of rolling motion for a cog. As in the usual model of spin-orbit coupling for a spin 1/2 particle, there are two states, a lower energy state with a positive correlation, and a higher energy state with an anticorrelation.
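A short numerical companion to eqs. (2.16)-(2.22): the sketch below evaluates the lower branch, locates its minima on either side of the bifurcation at $V_0 = \frac12$, and prints ⟨P⟩ and ⟨S⟩ near the minimum. The parameter values are illustrative.

```python
# Numerical companion to eqs. (2.16)-(2.22): spectrum and expectation values.
import numpy as np

def branch(r, V0, sign=-1):
    """Energy and expectation values on one branch; sign=-1 lower, +1 upper."""
    root = np.sqrt(r**2 + V0**2)
    eps = r**2 + 0.25 + sign * root        # (2.16)
    A = (-r - sign * root) / V0            # (2.17) for sign=-1, (2.18) for sign=+1
    S = (1 - A**2) / (2 * (1 + A**2))      # (2.19)
    return eps, r - S, S                   # <P> = r - <S>, eq. (2.20)

r = np.linspace(-1, 1, 2001)
for V0 in (0.3, 0.8):                      # below and above the bifurcation
    eps, P, S = branch(r, V0)
    i = np.argmin(eps)
    print(f"V0={V0}: lower-branch minimum at r={r[i]:+.3f}")
    # for V0 < 1/2 the signs of <P> and <S> differ just beyond the minimum;
    # for V0 > 1/2 they agree, the sought-after spin-momentum correlation
    print(f"  near minimum: <P>={P[i+5]:+.4f}, <S>={S[i+5]:+.4f}")

# Cross-check (2.16) against direct diagonalisation of the matrix (2.14):
H = np.array([[(0.2 - 0.5)**2, -0.8], [-0.8, (0.2 + 0.5)**2]])
assert np.allclose(np.sort(np.linalg.eigvalsh(H)),
                   [branch(0.2, 0.8, s)[0] for s in (-1, +1)])
```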
Perturbation theory We have just seen that the spin-momentum correlation has the desired form only when the potential is quite strong. Nevertheless, it is of some interest to calculate what happens in perturbation theory. When the potential is weak, we can calculate the energy spectrum to second order in perturbation theory, treating V0 as small. The perturbative result overlaps what we have already calculated, and we can allow for the possibility that the moment of inertia Λ is not small. This is a useful check on our calculations, both for the two-dimensional disc, and later, when we consider three-dimensional Skyrmion dynamics. When V0 = 0, the eigenstates of the Hamiltonian are $\Psi_0(x,\theta) = e^{ipx}e^{is\theta}$, with definite momentum and spin, and energy $E = \frac12 p^2 + \frac{1}{2\Lambda}s^2$. Low energy states are those with s = ±1/2 and p ≈ 0. These are near the centre of the Brillouin zone. Let us focus on the states with s = 1/2 (the results are similar for s = −1/2), whose energy is

$$ E_0 = \frac{1}{2}p^2 + \frac{1}{8\Lambda} \,. $$

Recall that when the potential is included, there is still the good quantum number r = p + s, so the states that we are focussing on have r = p + 1/2 ≈ 1/2. The effect of the cosine potential −V0 cos(x − θ), at leading order, is to mix the unperturbed state $\Psi_0 = e^{ipx}e^{i\frac12\theta}$ with states where p is shifted by ±1, i.e. the states $e^{i(p+1)x}e^{-i\frac12\theta}$ and $e^{i(p-1)x}e^{i\frac32\theta}$, whose unperturbed energies are $\frac12(p+1)^2 + \frac{1}{8\Lambda}$ and $\frac12(p-1)^2 + \frac{9}{8\Lambda}$, respectively. The potential has no diagonal matrix element, so the energy is unchanged to first order in V0. The eigenfunction of the Hamiltonian to first order in V0, for the fixed value p + 1/2 of r, is

$$ \Psi = e^{ipx}e^{i\frac{1}{2}\theta} + \frac{V_0}{2\big(p+\frac{1}{2}\big)}\, e^{i(p+1)x}e^{-i\frac{1}{2}\theta} + \frac{V_0}{2\big(\frac{1}{\Lambda} + \frac{1}{2} - p\big)}\, e^{i(p-1)x}e^{i\frac{3}{2}\theta} \,, \qquad (2.23) $$

where the denominators of the coefficients are proportional to differences between the energies of the unperturbed states. The energy of the state Ψ, to second order in V0 (found either by acting with the Hamiltonian, or by using the standard formula), is

$$ E = \frac{1}{2}p^2 + \frac{1}{8\Lambda} - \frac{V_0^2}{4\big(p+\frac{1}{2}\big)} - \frac{V_0^2}{4\big(\frac{1}{\Lambda} + \frac{1}{2} - p\big)} \,. \qquad (2.24) $$

This formula is valid, provided the unperturbed energy differences are not small compared to V0. So V0 must be much less than 1 and p must not approach −1/2. The perturbative approach therefore definitely fails for the states near r = 0 that we were considering earlier for fairly strong V0. However, it is successful for small p, even if Λ is not small and the last term of the formula (2.24) makes a significant contribution. Therefore, perturbation theory allows us to consider easily the spin 3/2 contribution to low energy states, in contrast to our matrix method, which required this contribution to be negligible. Let us compare our previous calculation of ε−, as a matrix eigenvalue, with this perturbative estimate. From the expression (2.16), and converting it back to give the energy E as a function of momentum p, we find, to second order in V0, that

$$ E = \frac{1}{2}p^2 + \frac{1}{8\Lambda} - \frac{V_0^2}{4\big(p+\frac{1}{2}\big)} \,, \qquad (2.25) $$

and this agrees with (2.24) provided Λ is small. So the matrix method and perturbation theory agree where they should. The conclusion is that perturbation theory is a good way to find states of the disc in a certain regime, but that regime does not extend to where spin-momentum coupling has the correlation we are seeking. In the following sections we shall investigate the quantised three-dimensional dynamics of a Skyrmion in a background potential. We should expect the matrix method to be more effective than perturbation theory for finding the desired form of spin-momentum coupling. We shall need a model where the potential is fairly strong, and where states of the Skyrmion with spin 3/2 and higher are suppressed, relative to the spin 1/2 states.
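Before moving on, the agreement just noted between the matrix method and perturbation theory can be verified symbolically. A minimal sympy sketch, expanding the exact eigenvalue (2.16) to second order in $V_0$ and comparing with (2.25); the comparison with (2.24) then amounts to dropping the term suppressed by $1/\Lambda$:

```python
# Symbolic check: the exact eigenvalue (2.16) reproduces the perturbative
# energy (2.25) to second order in V0.
import sympy as sp

p, V0, Lam = sp.symbols('p V0 Lambda', positive=True)
r = p + sp.Rational(1, 2)

eps_minus = r**2 + sp.Rational(1, 4) - sp.sqrt(r**2 + V0**2)   # (2.16)
E_exact = eps_minus / 2 + 1 / (8 * Lam)                        # via (2.13)

E_series = sp.series(E_exact, V0, 0, 3).removeO()
E_pert = p**2 / 2 + 1 / (8 * Lam) - V0**2 / (4 * (p + sp.Rational(1, 2)))

print(sp.simplify(E_series - E_pert))   # -> 0
```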
A multi-Skyrmion modelling a nucleus with mass number N is described by N Skyrmion-like point particles, each with three positional degrees of freedom and three orientational degrees of freedom. The rotational degrees of freedom could be expressed using an SO(3) matrix, but for quantum mechanical calculations it is more convenient to use an SU(2) matrix q. Throughout this section we will identify the group SU(2) with the group of unit quaternions, making the identifications $i = -i\sigma_1$, $j = -i\sigma_2$, $k = -i\sigma_3$ between imaginary quaternions and Pauli matrices. The Lagrangian for the model consists of standard kinetic terms for the positions and orientations, and interaction potentials between pairs of particles (see [15] for the precise form). The interaction potential is such that the particles tend to arrange themselves into crystals with an FCC lattice structure, with a preferred orientation at each lattice site. In suitable length units the FCC lattice is the set of vectors $(x, y, z)\in\mathbb{Z}^3$ such that x + y + z is even. The preferred orientation at lattice site (x, y, z) is $i^x j^y k^z$. We want to study the problem of a charge-1 Skyrmion rolling along the surface of a half-filled FCC lattice. See Figure 4. We assume that the lattice sites with x + y + z ≤ −2 are filled with particles in their preferred orientations, and consider a Skyrmion moving freely in the plane

$$ \Pi = \{ (x, y, z)\in\mathbb{R}^3 : x + y + z = 0 \} \,. \qquad (3.1) $$

The degrees of freedom for this Skyrmion are its position coordinates (x, y, z) and its orientation q ∈ SU(2). Its dynamics can be described by a Lagrangian consisting of a standard kinetic term and a potential function V : Π × SU(2) → R. The kinetic terms are invariant under the group $SU(2)_I\times SU(2)_S$ of isorotations and rotations, with action

$$ (\vec x, q) \mapsto \big(g_S\,\vec x\,g_S^{-1},\; g_I\, q\, g_S^{-1}\big) \,, \qquad (3.2) $$

where we identify vectors $\vec x\in\mathbb{R}^3$ with imaginary quaternions $xi + yj + zk = -i(x\sigma_1 + y\sigma_2 + z\sigma_3)$. These terms are also invariant under translations $(\vec x, q)\to(\vec x + \vec c, q)$ and parity transformations $(\vec x, q)\to(-\vec x, q)$.
The potential function V must be invariant under the group of symmetries of the half-filled lattice. This group is generated by three transformations, denoted ρ, σ and τ below. The invariance of V under $-\tau^2$, $-\rho\tau^2\rho^{-1}$ and $-\rho^2\tau^2\rho^{-2}$ implies in particular that V is invariant under the translation action of a two-dimensional lattice $\Gamma\subset\Pi$. The transformations listed above acting on $(\Pi/\Gamma)\times SU(2)$ generate a finite group which is isomorphic to the binary cubic group (the double cover of the cubic group). Note that since $\tau^2 = -1$ when acting on $(\Pi/\Gamma)\times SU(2)$ we are free to use ρ, σ, τ as a set of generators. We employ an ansatz for the potential of the form

$$ V(\vec x, q) = U(\vec x) + \mathrm{Tr}\big(R(q)\,Y(\vec x)\big) \,, \qquad (3.8) $$

with R(q) the rotation matrix induced by q (i.e. $q\sigma_j q^{-1} = \sigma_i R(q)_{ij}$), and $Y(\vec x)$ a 3 × 3 matrix-valued function. This ansatz is motivated by the dipole description of Skyrmion interactions; to a good approximation a single Skyrmion interacts with a background field of pions like a triple of orthogonal scalar dipoles, and this dipole interaction has similar q-dependence to our ansatz. Alternatively, one may regard our ansatz as the first two terms in an expansion of V in harmonics on SU(2). Note that this potential satisfies $V(\vec x, -q) = V(\vec x, q)$, as required by symmetry. We simplify the ansatz further using Fourier series. Both U and Y are required to be invariant under the lattice Γ, so have Fourier series with summands corresponding to dual lattice vectors. We assume that these Fourier series only contain terms corresponding to the shortest dual lattice vectors; the associated functions are 1 and $e^{\pm i\vec a_j\cdot\vec x}$, where $\vec a_1, \vec a_2, \vec a_3$ are shortest dual lattice vectors satisfying $\vec a_1 + \vec a_2 + \vec a_3 = 0$ and $|\vec a_j|^2 = 2\pi^2/3$. The symmetries ρ, σ, τ generate an action of the binary cubic group on the vector spaces occupied by U, Y. Since the ansatz (3.8) is invariant under q → −q, this action descends to an action of the cubic group. Representation theory can be used to find all functions U and Y which are invariant under this action. This calculation involves the irreducible representations of the cubic group: we recall these briefly. Besides the trivial representation A1, there is another one-dimensional representation A2 in which ρ and τ map to 1 and σ maps to −1. There is a unique two-dimensional representation E and two three-dimensional representations T1 and T2; the first of these is the standard rotational action as the symmetry group of the cube and the second is the tensor product $T_2 = A_2\otimes T_1$. It can be shown that the functions $e^{\pm i\vec a_j\cdot\vec x}$ transform in the representation $2T_2$ of the cubic group; since this contains no trivial subrepresentations, the only allowed form for U is a constant function. Since this constant does not alter differences between energy eigenvalues we set it to zero. The elements of the group act on matrix-valued functions Y by simultaneously multiplying with matrices from the left and right, and by permuting the Fourier modes. The matrix acting from the left corresponds to the representation $A_2\oplus E$, and that acting from the right corresponds to the representation $T_1$. The action on Fourier modes is $A_1\oplus 2T_2$. Therefore the representation acting on the vector space of such functions,

$$ (A_2\oplus E)\otimes T_1\otimes(A_1\oplus 2T_2) \,, $$

contains four copies of $A_1$, so the space of allowed potential functions has real dimension four. This space of allowed potential functions can be parametrised by $(U_0, U_1) = (W_0 e^{i\theta_0},\, W_1 e^{i\theta_1})\in\mathbb{C}^2$; the resulting parametrised form of Y is recorded as eq. (3.10). The values of the constants can be estimated in the lightly bound Skyrme model using its point particle approximation. One calculates a function $Y_{\mathrm{true}}$ by adding up the interaction energies between fixed Skyrmions in the planar lattice x + y + z = −2 and the Skyrmion moving freely in the plane x + y + z = 0, and then calculates its Fourier coefficients. The values obtained are recorded as eq. (3.11). With these parameters the truncated Fourier series $Y_{\mathrm{approx}}$ given in eq. (3.10) is a good approximation to $Y_{\mathrm{true}}$, in the sense that the ratio of the squares of the $L^2$ norms of $Y_{\mathrm{true}} - Y_{\mathrm{approx}}$ and $Y_{\mathrm{true}}$ is 0.095. Our final potential $V(\vec x, q) = \mathrm{Tr}\big(R(q)\,Y_{\mathrm{approx}}(\vec x)\big)$ is not exact, even in the point particle description of Skyrmion interactions, but it is analogous to the potential $-V_0\cos(x-\theta)$ that we chose for the disc in section 2.
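To make the ansatz (3.8) concrete, the sketch below evaluates $V(\vec x, q) = \mathrm{Tr}(R(q)Y(\vec x))$ numerically. The quaternion-to-rotation-matrix conversion is standard (exact sign conventions may differ from those induced by $q\sigma_j q^{-1} = \sigma_i R(q)_{ij}$), and the coefficient matrices C[j] and wavevectors are arbitrary placeholders, not the paper's fitted form (3.10) or values (3.11).

```python
# Sketch of the ansatz (3.8): V(x, q) = Tr(R(q) Y(x)).  Coefficients are
# arbitrary placeholders, NOT the paper's eq. (3.10)/(3.11).
import numpy as np

def rotation_matrix(q):
    """Rotation matrix built from a unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2],
    ])

rng = np.random.default_rng(0)
a = [rng.standard_normal(3) for _ in range(2)]
a.append(-a[0] - a[1])                                 # enforce a1 + a2 + a3 = 0
C = [rng.standard_normal((3, 3)) for _ in range(3)]    # placeholder coefficients

def Y(x):
    # Gamma-periodic, built only from the modes exp(+-i a_j . x); kept real.
    return sum(2 * np.cos(aj @ x) * Cj for aj, Cj in zip(a, C))

def V(x, q):
    return np.trace(rotation_matrix(q) @ Y(x))

x = np.array([0.3, -0.1, -0.2])
q = np.array([1.0, 0.2, -0.1, 0.4])
print(np.isclose(V(x, q), V(x, -q)))   # True: the ansatz is invariant under q -> -q
```

Since R(q) is quadratic in q, the invariance under q → −q holds for any choice of Y, which is the property required by the Finkelstein-Rubinstein discussion later on.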
Figure 5: The path of a rolling Skyrmion.

We claim that for the parameter values (3.11), the potential given by equations (3.8) and (3.10) induces classical motion similar to a ball rolling on a surface. Consider the situation where a particle moves from (x, y, z, q) = (0, 0, 0, 1) to (x, y, z, q) = (1, −1, 0, ±k). Both of these points are critical points of the potential, and for our parameter set they are minima. We will treat this situation adiabatically, assuming that the mass M of the Skyrmion is much greater than its moment of inertia Λ. If the spatial kinetic energy $\frac12 Mv^2$ is much larger than the energy scale $W = \sqrt{W_0^2 + W_1^2}$ of the potential then the path in space will to a good approximation be a straight line: $\vec x(t) = (vt/\sqrt2)(1, -1, 0)$. If the velocity is not too large then, at each time t, q(t) will to a good approximation be the orientation that minimises $V(\vec x(t), q)$ with respect to variations in q. In this situation the rotational kinetic energy is roughly $\frac12\Lambda v^2$, and the approximation is reliable as long as this is much less than W. Thus our approximation assumes that $W/M \ll v^2 \ll W/\Lambda$. We wish to compare this motion with that of a rolling ball. If a ball of radius r rolls with velocity $\vec v$ along a surface with inward-pointing unit normal $\vec n$, its angular velocity will be $\vec\omega = -\vec n\times\vec v/r$. For $\vec n = (-1,-1,-1)/\sqrt3$ and $\vec v = (v/\sqrt2)(1,-1,0)$ as above, this makes the angular velocity a positive multiple of (1, 1, −2); this is checked in the short sketch at the end of this section. The angle θ(t) between the angular velocity vector $\vec\omega(t) = -2q^{-1}\dot q$ for the path q(t) and the vector (1, 1, −2) measures deviation from rolling motion: acute angles indicate motion similar to rolling, and obtuse angles indicate motion that is opposed to rolling. We have computed q(t) using the adiabatic approximation described above and have hence determined θ(t). The maximum angle along the path is 0.89 ≈ 2π/7, indicating that the motion induced by the potential is similar to that of a rolling ball. This adiabatic motion of a Skyrmion is illustrated in Figure 5 (see also Figure 4). The orientation of the rolling Skyrmion is illustrated at the start, mid-point, and end of the path. The start and end points are neighbouring lattice sites, and their orientations differ by a rotation of 180 degrees about the red-green axis. The most natural guess for the orientation at the mid-point is a rotation through 90 degrees about the same axis, and there are two possibilities here (depending on whether one rotates clockwise or anticlockwise). Figure 5 shows the orientation for one sense of rotation, but the alternative would have made the Skyrmion's red, white and yellow faces visible at the mid-point. Now observe that just below the Skyrmion at the mid-point there is a nearby Skyrmion in the lattice (white and yellow faces visible). It is straightforward to find the pion dipole fields of this pair of Skyrmions along the line joining them and verify that for the illustrated sense of rotation, the fields are identical at the closest points, implying that the potential energy is minimal. (The associated colouring is predominantly green, but with a small tilt towards white and yellow.) If the sense of rotation had been opposite, the field match would have been less good and the energy greater. We conclude that the rolling motion illustrated in Figure 5 is along a particularly deep valley in the potential energy landscape, and favoured as a low energy classical motion. Anti-rolling is disfavoured. Figure 5 suggests that to a good approximation the spin vector $\vec S$ for the rolling Skyrmion points in the direction of the red-green axis, from green to red. This spin vector $\vec S$, the vector $\vec N$ pointing into the half-filled lattice, and the momentum vector $\vec P$ do not form an orthonormal triad ($\vec P$ is orthogonal to both $\vec N$ and $\vec S$, and $\vec N\cdot\vec S = -1/\sqrt3$), but their triple scalar product $\vec S\cdot\vec N\times\vec P$ is negative. This is what is expected classically if the parameter a in equation (1.1) is positive.
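As promised, a three-line numerical check of the rolling-ball kinematics used above:

```python
# Check: for n = (-1,-1,-1)/sqrt(3) and v ~ (1,-1,0), the rolling angular
# velocity omega = -n x v / r points along (1, 1, -2)  (take r = 1).
import numpy as np

n = np.array([-1.0, -1.0, -1.0]) / np.sqrt(3)
v = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
omega = -np.cross(n, v)

target = np.array([1.0, 1.0, -2.0])
cosine = omega @ target / (np.linalg.norm(omega) * np.linalg.norm(target))
print(omega, cosine)   # cosine -> 1.0, so omega is a positive multiple of (1,1,-2)
```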
Weak coupling to the potential In this section and the next we will study the quantum mechanical problem of a Skyrmion interacting with the surface of a half-filled lattice. Since the potential experienced by the Skyrmion is periodic, it is natural to analyse this problem using the theory of Bloch waves. Let $\vec k\in\mathbb{R}^3$ be a crystal wavevector satisfying $k_1 + k_2 + k_3 = 0$ and let $H_k$ be the Hilbert space of wavefunctions $\Psi : \mathbb{R}^3\times SU(2)\to\mathbb{C}$ satisfying

$$ \Psi\big(\vec x + s(1,1,1),\, q\big) = \Psi(\vec x, q) \ \ \text{for all } s\in\mathbb{R} \,, \qquad \Psi\big(\vec x + \vec\gamma,\, q\big) = e^{i\vec k\cdot\vec\gamma}\,\Psi(\vec x, q) \ \ \text{for all } \vec\gamma\in\Gamma \,. $$

The first condition ensures that Ψ is effectively defined in the plane Π rather than all of $\mathbb{R}^3$. The second condition has the implication that two crystal wavevectors whose difference lies in the reciprocal lattice $\Gamma^*$ generated by $\vec a_j$ define the same Hilbert space, so $\vec k$ should be regarded as an element of $\Pi^*/\Gamma^*$. The natural operators on $H_k$ are isospin, spin, and momentum. Spin and isospin are just the infinitesimal versions of the actions of $SU(2)_I\times SU(2)_S$ described in (3.2). Although the space in which the Skyrmion moves is two-dimensional, it will be convenient to write momentum as a three-vector (due to the three-dimensional origin of the problem). Thus we set

$$ \vec P = -i\nabla_{\vec x} \,, $$

noting that $P_1 + P_2 + P_3 = 0$. Then for a plane wave of the form $\Psi(\vec x, q) = e^{i\vec k\cdot\vec x} f(q)$ we have $\vec P\,\Psi = \vec k\,\Psi$. It will be useful in what follows to decompose $H_k$ into eigenspaces of $|\vec S|^2$. Fix a non-negative integer or half-integer ℓ and let $\eta_\ell : SU(2)\to SU(2\ell+1)$ be the spin ℓ irreducible representation of SU(2). If $\psi : \Pi\to\mathrm{Mat}(2\ell+1, \mathbb{C})$ is a matrix-valued function of $\vec x$ then

$$ \Psi(\vec x, q) := \mathrm{Tr}\big(\psi(\vec x)\,\eta_\ell(q)\big) \qquad (4.6) $$

satisfies $|\vec S|^2\Psi = |\vec I|^2\Psi = \ell(\ell+1)\,\Psi$. Thus this wavefunction describes a particle of total spin ℓ and total isospin ℓ. The space of all such wavefunctions in $H_k$ will be denoted $H_k^\ell$. The Peter-Weyl theorem implies that any wavefunction in $H_k$ can be decomposed as an infinite sum of wavefunctions of this type:

$$ H_k = \bigoplus_{\ell = 0,\, \frac12,\, 1,\, \frac32,\, \ldots} H_k^\ell \,. \qquad (4.7) $$

The wavefunction Ψ describing the Skyrmion is required to satisfy the Finkelstein-Rubinstein constraints [13]. These simply state that Ψ is an odd function of q: $\Psi(\vec x, -q) = -\Psi(\vec x, q)$. Functions in $H_k^\ell$ are odd if ℓ is a half-integer and even if ℓ is an integer. Thus the Finkelstein-Rubinstein constraints require Ψ to be in the subspace $H_k^{\mathrm{odd}}$ of $H_k$, where the summation over ℓ is restricted to half-integers. This ensures the quantised Skyrmion has half-integer spin.
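The oddness criterion used in the last step follows directly from (4.6): since $-1\in SU(2)$ is represented by $(-1)^{2\ell}$ in the spin ℓ representation,

$$ \eta_\ell(-q) = (-1)^{2\ell}\,\eta_\ell(q) \quad\Longrightarrow\quad \Psi(\vec x, -q) = \mathrm{Tr}\big(\psi(\vec x)\,\eta_\ell(-q)\big) = (-1)^{2\ell}\,\Psi(\vec x, q) \,, $$

so Ψ is odd precisely when ℓ is a half-integer.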
Outline of perturbation theory The hamiltonian that we will study is

$$ H = \frac{1}{2M}|\vec P|^2 + \frac{1}{2\Lambda}|\vec S|^2 + V \,. \qquad (4.8) $$

Here M, Λ > 0 are parameters representing the mass and moment of inertia of the Skyrmion, and V is the potential introduced in the previous section. We will construct an effective hamiltonian for the lowest-energy eigenstates using perturbation theory, with the parameters $W_0$ and $W_1$ of V treated as small. If V = 0 and $\vec k$ is in the first Brillouin zone (i.e. $|\vec k + \vec v| > |\vec k|$ for all non-zero $\vec v\in\Gamma^*$), then the lowest energy eigenstates in the space $H_k$ are clearly of the form

$$ \Psi_0(\vec x, q) = \mathrm{Tr}(\psi q)\,e^{i\vec k\cdot\vec x} \,, \qquad (4.9) $$

with $\psi\in\mathrm{Mat}(2, \mathbb{C})$. The space of all such wavefunctions has dimension four and will be denoted by $K_k$. The energy of these states is

$$ E_0 = \frac{1}{2M}|\vec k|^2 + \frac{3}{8\Lambda} \,. \qquad (4.10) $$

This is minimised by $\vec k = 0$. In the following calculations we will assume that $\vec k$ is close to 0, discarding terms of $O(k^2)$. When the potential V is non-zero, the four degenerate energy levels with energy $E_0$ will separate. We will study this effect using perturbation theory. Let us review the overall methodology, which generalises the formulae (2.23) and (2.24). We seek an operator $I : K_k\to H_k$ which depends continuously on the parameters $U_0, U_1$ in the potential, such that the image under I of the $H_0$-invariant subspace $K_k$ is H-invariant, and such that the composition $\Pi_K I$ of I with the orthogonal projection $\Pi_K : H_k\to K_k$ is the identity map. The effective hamiltonian is then defined to be $H_{\mathrm{eff}} = \Pi_K H I$. The operators I and $H_{\mathrm{eff}}$ will be constructed as power series in the parameters that appear in the potential. To zeroth order, I is just the inclusion: $I|\Psi_0\rangle = |\Psi_0\rangle + O(V)$ for all $\Psi_0\in K_k$. The first order correction to $H_{\mathrm{eff}}$ is given by

$$ H_{\mathrm{eff}} = H_0 + \Pi_K V\big|_{K_k} + O(V^2) \,. \qquad (4.11) $$

The term linear in V vanishes. The reason for this is simple: the only non-zero terms in the Fourier series of $V\Psi_0$ correspond to plane waves of the form $e^{i(\vec k\pm\vec a_j)\cdot\vec x}$, as one sees from eqs. (4.9) and (3.10), and these are all $L^2$-orthogonal to $e^{i\vec k\cdot\vec x}$. As a consequence, $H_{\mathrm{eff}} = H_0 + O(V^2)$. The first order correction to I is given by

$$ I|\Psi_0\rangle = |\Psi_0\rangle - (H_0 - E_0)^{-1}\,V|\Psi_0\rangle + O(V^2) \,. \qquad (4.12) $$

This satisfies $HI|\Psi_0\rangle = E_0\,I|\Psi_0\rangle + O(V^2)$, so its image is H-invariant up to terms quadratic in V. The second order correction to $H_{\mathrm{eff}}$ is given by

$$ H_{\mathrm{eff}} = H_0 - \Pi_K V (H_0 - E_0)^{-1} V\big|_{K_k} + O(V^3) \,. \qquad (4.13) $$

In the next subsection we will calculate the action of $\Pi_K V (H_0 - E_0)^{-1} V$ on wavefunctions $\Psi_0$ of the form (4.9), and thereby evaluate $H_{\mathrm{eff}}$ to second order. A reader uninterested in the details of this calculation may skip to the final result, eq. (4.34). The effective hamiltonian $H_{\mathrm{eff}}$ We begin by analysing $V|\Psi_0\rangle$, with the potential V given by eqs. (3.8) and (3.10). From eq. (3.8) we see that $V\in H_0^1$, and from eq. (4.9) we see that $\Psi_0\in H_k^{1/2}$. It follows from the Clebsch-Gordan rules that the excited wavefunction $V(\vec x, q)\Psi_0(\vec x, q)$ will be a sum of terms with spin 1/2 and spin 3/2. Thus

$$ V|\Psi_0\rangle = \Pi_{\frac12}V|\Psi_0\rangle + \Pi_{\frac32}V|\Psi_0\rangle \,, \qquad (4.14) $$

with $\Pi_\ell$ denoting projection onto $H_k^\ell$. Applying $(H_0 - E_0)^{-1}$ to the spin 1/2 term gives

$$ (H_0 - E_0)^{-1}\,\Pi_{\frac12}V|\Psi_0\rangle = \frac{2M}{|\vec P|^2 - |\vec k|^2}\,\Pi_{\frac12}V|\Psi_0\rangle \,. \qquad (4.15) $$

As the Fourier modes that appear in the excited wavefunction $V(\vec x, q)\Psi_0(\vec x, q)$ are $e^{i(\vec k\pm\vec a_j)\cdot\vec x}$, the operator $|\vec P - \vec k|^2$ takes the constant value $|\vec a_j|^2 = 2\pi^2/3$ on $\Pi_{\frac12}V|\Psi_0\rangle$, which simplifies this expression: writing $|\vec P|^2 - |\vec k|^2 = |\vec P - \vec k|^2 + 2\vec k\cdot(\vec P - \vec k)$ and expanding to linear order in $\vec k$ produces a constant term and a term involving $\vec k\cdot(\vec P - \vec k)$. The spin 3/2 term can be analysed in the same way, yielding

$$ (H_0 - E_0)^{-1}\,\Pi_{\frac32}V|\Psi_0\rangle = \Big(\frac{|\vec P|^2 - |\vec k|^2}{2M} + \frac{3}{2\Lambda}\Big)^{-1}\Pi_{\frac32}V|\Psi_0\rangle \,. \qquad (4.16) $$

Thus to compute $\Pi_K V (H_0 - E_0)^{-1} V|\Psi_0\rangle$ we need to compute the following four terms: $\Pi_K V\,\Pi_{\frac12}V|\Psi_0\rangle$, $\Pi_K V\,\vec k\cdot(\vec P-\vec k)\,\Pi_{\frac12}V|\Psi_0\rangle$, $\Pi_K V\,\Pi_{\frac32}V|\Psi_0\rangle$ and $\Pi_K V\,\vec k\cdot(\vec P-\vec k)\,\Pi_{\frac32}V|\Psi_0\rangle$. These can be evaluated with the help of an identity for the projection $\Pi_{\frac12}$, eq. (4.17), which is proved in the appendix. We also introduce a vector $\vec u$, eq. (4.18), whose components $u_m$ are determined by the constants $U_0$ and $U_1$ and satisfy $|\vec u|^2 = W_0^2 + 2W_1^2$, with the index i − j below understood modulo 3.
Then applying the identity (4.17) yields the spin-$\tfrac12$ projection in closed form. To apply the operator $\Pi_K V$ to this expression we multiply the function by $V$, discard all terms in the Fourier series except $e^{i\vec{k}\cdot\vec{x}}$, and apply $\Pi_{1/2}$ with the help of the identity (4.17); the result is eq. (4.21). The next term, $\Pi_K V\,\vec{k}\cdot(\vec{P}-\vec{k})\,\Pi_{1/2}V|\Psi_0\rangle$, can be evaluated using a similar method. The calculation makes use of an identity in which the $\vec{e}_j$ are the standard basis vectors for $\mathbb{R}^3$ and $\vec{n}$ is an inward-pointing normal vector of unit length representing the normalised gradient of the nuclear charge density. Since $\vec{k}\cdot(\vec{P}-\vec{k})\,e^{i(\vec{k}\pm\vec{a}_j)\cdot\vec{x}} = \pm\,\vec{k}\cdot\vec{a}_j\,e^{i(\vec{k}\pm\vec{a}_j)\cdot\vec{x}}$, we obtain an expression in which $\sum_j \vec{k}\cdot\vec{a}_j\,(\vec{n}\times\vec{e}_3)_{l-j}$ simplifies algebraically to $\pi(\vec{n}\times\vec{k})_l$ and $\mathrm{Tr}(\sigma_l\,\psi q)\,e^{i\vec{k}\cdot\vec{x}} = 2S_l\,\Psi_0$, so the result is eq. (4.25). The remaining two terms will be evaluated indirectly: we calculate the contributions from the sum of the spin-$\tfrac12$ and spin-$\tfrac32$ excited states and subtract the spin-$\tfrac12$ contribution. We begin with $\Pi_K V^2|\Psi_0\rangle$. The coefficient of $e^{i\vec{k}\cdot\vec{x}}$ in the Fourier series of $V^2\Psi_0$ is given by eq. (4.28); the other terms in the Fourier series will be annihilated by $\Pi_K$, so need not be computed. By the Clebsch-Gordan rules, $V^2 \in \mathcal{H}^0_0 \oplus \mathcal{H}^1_0 \oplus \mathcal{H}^2_0$. We only need to calculate the piece in $\mathcal{H}^0_0 \oplus \mathcal{H}^1_0$, because multiplying a spin-$\tfrac12$ wavefunction by a spin-2 function yields wavefunctions with spin $\tfrac32$ and $\tfrac52$, both of which will be annihilated by $\Pi_K$. We show in the appendix an identity, valid for any vectors $\vec{v}, \vec{w} \in \mathbb{R}^3$, which implies that the relevant part of $|\sum_i R_{ij}(q)\,u_{i-j}|^2$ is $|\vec{u}|^2 = W_0^2 + 2W_1^2$. It follows that $\Pi_K V^2|\Psi_0\rangle$ takes a simple closed form and, using our earlier result (4.21), the spin-$\tfrac32$ contribution (4.31) follows. The term $\Pi_K V\,\vec{k}\cdot(\vec{P}-\vec{k})\,V|\Psi_0\rangle$ can be evaluated using similar techniques. The coefficient of $e^{i\vec{k}\cdot\vec{x}}$ in the Fourier series of $V\,\vec{k}\cdot(\vec{P}-\vec{k})\,V\,\Psi_0$ is given by eq. (4.32). As before, the other terms in the Fourier series are irrelevant. Also as before, we may replace $|\sum_i R_{ij}(q)\,u_{i-j}|^2$ with $W_0^2 + 2W_1^2$. The resulting sum over $j$ is zero, because $\sum_j \vec{a}_j = 0$. Therefore $\Pi_K V\,\vec{k}\cdot(\vec{P}-\vec{k})\,V|\Psi_0\rangle = 0$ and, by our previous result (4.25), the corresponding spin-$\tfrac32$ contribution (4.33) follows. We are now in a position to evaluate the effective hamiltonian. Collecting together the results (4.13), (4.16), (4.21), (4.25), (4.31) and (4.33) gives the effective hamiltonian (4.34). This hamiltonian, which is analogous to equation (2.24) in the 2D model, contains the sought-after coupling between momentum and spin (1.1). Besides scalars, this is the only term in the hamiltonian, and it is at first sight surprising that no other terms occur. The explanation lies in the symmetries of the lattice: $\vec{S}\cdot\vec{n}\times\vec{k}$ is the only term linear in $\vec{k}$ which is invariant under the action of the binary cubic group. For the parameter set (3.11) the coefficient of the term (1.1) in $H_{\mathrm{eff}}$ is negative, which is opposite to what would be expected based on the classical rolling motion of Skyrmions. This is not such a surprise, given what we learnt from the toy model. In the toy model, spin-momentum effects consistent with the classical rolling motion of Skyrmions only occurred for a relatively strong potential, and were inaccessible to perturbation theory. In the next section we investigate stronger potentials.

Strong coupling to the potential

In the previous section we discussed the situation where the potential is small; in this section we discuss the case where the potential is slightly larger. Recall that in the 2D toy model, if the potential was strong the lowest energy Bloch wave had a non-zero crystal wavevector (at $r = 0$, so $k = \pm\tfrac12$). We expect a similar effect in the 3D model. We begin this section by looking for candidate crystal wavevectors for the ground state, using symmetry as a guide.
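The qualitative content of a spin-momentum coupling term of the form (1.1) can be checked numerically. The following sketch is not from the paper: the function name, mass, coupling strength and wavevector are arbitrary illustrative choices. It diagonalises a minimal spin-$\tfrac12$ hamiltonian $H(\vec{k}) = |\vec{k}|^2/2M + \lambda\,\vec{S}\cdot(\vec{n}\times\vec{k})$ and shows that its eigenstates have their spin locked perpendicular to the in-plane momentum.

```python
import numpy as np

# Pauli matrices; spin operators are S_i = sigma_i / 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = np.array([sx, sy, sz]) / 2

M, lam = 4.0, 0.3                # illustrative mass and coupling strength
n = np.array([0.0, 0.0, -1.0])   # normal pointing into the half-filled lattice

def h_eff(k):
    """Minimal spin-momentum-coupled hamiltonian H = |k|^2/2M + lam * S.(n x k)."""
    v = np.cross(n, k)
    return (np.dot(k, k) / (2 * M)) * np.eye(2) + lam * np.einsum('i,ijk->jk', v, S)

k = np.array([0.2, 0.1, 0.0])    # an in-plane wavevector
evals, evecs = np.linalg.eigh(h_eff(k))
ground = evecs[:, 0]             # eigh sorts eigenvalues ascending
spin = np.real([ground.conj() @ (Si @ ground) for Si in S])

# For lam > 0 the ground-state spin anti-aligns with n x k, i.e. spin and
# momentum are locked -- the hallmark of a coupling of the form (1.1).
print("n x k      :", np.cross(n, k))
print("<S> ground :", spin)
print("alignment  :", np.dot(spin, np.cross(n, k)))  # negative for lam > 0
```

Flipping the sign of the coupling constant flips the locking direction, which mirrors the sign discussion for the coefficient of (1.1) in the text.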
Recall that the hamiltonian is invariant under an action of the binary cubic group. The action of this group on wavefunctions induces an action on the space of crystal wavevectors $\vec{k}$. The generator $\tau$ acts trivially on $\vec{k}$, while the generators $\rho$ and $\sigma$ act on $\vec{k}$ as multiplication by certain matrices. The vectors $\vec{k}_\pm$ are special because they represent fixed points of the action of the subgroup generated by $\rho$ and $\tau$, namely the binary tetrahedral group (bear in mind that $\vec{k}$ is only defined up to addition of the reciprocal lattice vectors $\vec{a}_j$). These two crystal wavevectors are plausible candidates for the wavefunction of the ground state at strong coupling. Note that they are at the vertices of the first Brillouin zone, as shown in Figure 6. In order to analyse the Hilbert spaces corresponding to these crystal wavevectors it is convenient to apply a rotation to the lattice and the moving Skyrmion. After rotation, the Skyrmion moves in the plane $z = 0$, and the half-filled lattice of Skyrmions is the region $z < 0$. The generators of the binary cubic group act correspondingly in the rotated frame, and the reciprocal lattice vectors take a correspondingly rotated form.

Perturbation theory in k

We will be interested in eigenfunctions of the hamiltonian whose crystal wavevector is close to $\vec{k}_\pm$. It is enough to analyse just wavevectors close to $\vec{k}_+$, as the transformation $\sigma$ swaps $\vec{k}_+$ and $\vec{k}_-$. First we will identify an orthonormal basis $|\Psi_{0a}\rangle \in \mathcal{H}_{\vec{k}_+}$ for the eigenspace of the hamiltonian with (degenerate) lowest energy eigenvalue $E_0$. Then we will consider nearby wavevectors $\vec{k} = \vec{k}_+ + \delta\vec{k}$. Perturbing $\vec{k}$ in this way is mathematically equivalent to perturbing the momentum operator, $\vec{P} = \vec{P}_0 + \delta\vec{k}$, where $\vec{P}_0 = -i\nabla_{\vec{x}}$ is the usual momentum operator acting on $\mathcal{H}_{\vec{k}_+}$. Thus nearby wavevectors can be analysed using perturbation theory. The perturbed hamiltonian acquires terms linear and quadratic in $\delta\vec{k}$ (see the sketch below). We will show below that $\langle\Psi_{0a}|\vec{P}_0|\Psi_{0b}\rangle = 0$ for reasons of symmetry, so the effective hamiltonian acting on this eigenspace is unchanged to linear order in $\delta\vec{k}$. Therefore the perturbed wavefunctions satisfy $H|\Psi_a\rangle = E_0|\Psi_a\rangle + O(\delta\vec{k}^2)$. We then compute the matrix elements (5.14) of the hamiltonian $H$ to second order in $\delta\vec{k}$. For large enough $M$ the second term on the right of (5.14) dominates the third term, meaning that the lowest energy eigenvalue has a stable local minimum at $\delta\vec{k} = 0$. Below we will quantify how large $M$ needs to be for this to happen. The expectation value $\langle\vec{P}\rangle$ of $\vec{P} = (P_1, P_2)$ in the state $|\Psi_0\rangle$ is, as we have already noted, zero. Similarly, group-theoretical arguments will show that the expectation value $\langle\vec{S}\rangle$ of $\vec{S} = (S_1, S_2, S_3)$ has vanishing planar components (although the component perpendicular to the plane will be non-vanishing). For $\delta\vec{k} \neq 0$ we expect these expectation values to be non-zero and correlated. More precisely, we expect $\langle\vec{P}\rangle$ to point in the same direction as $\vec{n}\times\langle\vec{S}\rangle$, where $\vec{n} = (0, 0, -1)$ is now the normalised gradient of the nuclear matter density. Equivalently, this can be stated as the identity (5.15), where $S^\pm := S_1 \pm iS_2$ and $P_0^\pm := P_0^1 \pm iP_0^2$. It is straightforward to derive expressions for these expectation values within the framework of perturbation theory in $\delta\vec{k}$. The expectation value of $\vec{P}$ in a normalised perturbed state is given by (5.16); we will show below that for sufficiently large $M$ the second term is negligible and we have that $\langle\vec{P}\rangle \approx \delta\vec{k}$. For $\langle\vec{S}\rangle$ we compute an analogous expression; this equation and (5.16) are analogues of eqs. (2.22) in the 2D model. In terms of $\kappa = \delta k_1 + i\delta k_2$, we have that $\langle P^+\rangle \approx \kappa$ to leading order. Thus to verify (5.15) it is sufficient to establish the identities referred to below as (5.19) and (5.20). This concludes the outline of what we intend to show.
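With the substitution $\vec{P} = \vec{P}_0 + \delta\vec{k}$ applied to the hamiltonian of the weak-coupling section, the perturbed hamiltonian should take the following form. This is a hedged sketch under those assumptions, not the paper's verbatim display.

```latex
H = \frac{1}{2M}\,\big|\vec{P}_0 + \delta\vec{k}\big|^2
    + \frac{1}{2\Lambda}\,|\vec{S}|^2 + V
  \;=\; H_0 \;+\; \frac{1}{M}\,\delta\vec{k}\cdot\vec{P}_0
    \;+\; \frac{|\delta\vec{k}|^2}{2M}.
```

The term linear in $\delta\vec{k}$ has vanishing first-order matrix elements by the symmetry argument quoted above; the degenerate energies therefore only shift at second order, where the $|\delta\vec{k}|^2/2M$ term (dominant for large $M$) competes with the second-order contribution of $\delta\vec{k}\cdot\vec{P}_0/M$.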
In the remainder of this section we verify equations (5.19) and (5.20) by explicit calculation. In the next section we provide an alternative verification based mainly on symmetry.

Truncation of Hilbert space

In order to calculate the eigenstates $|\Psi_{0a}\rangle$ we make a number of simplifying assumptions. First, we assume that the only terms that occur in the spatial Fourier series of $\Psi_{0a}$ are those with the shortest possible wavevectors; we denote the corresponding plane waves $e_1$, $e_2$, $e_3$. Note that these all have the same crystal wavevector: for example, in the case of $e_1$ and $e_2$ this is because their wavevectors differ by a reciprocal lattice vector. Second, we assume that the only terms that occur in the expansions of $\Psi_{0a}$ in harmonics on $SU(2)$ are those corresponding to spin $\tfrac12$. In other words, $\Psi_{0a}(\vec{x}, q) = \sum_{j=1}^{3} \mathrm{Tr}(\psi_{ja}\, q)\, e_j(\vec{x})$ for $2\times2$ matrices $\psi_{1a}, \psi_{2a}, \psi_{3a}$. Since these three matrices have altogether 12 degrees of freedom, the eigenstates $|\Psi_{0a}\rangle$ belong to a 12-dimensional subspace of the Hilbert space $\mathcal{H}_{\vec{k}_+}$. These assumptions are justified as long as energies of states in the 12-dimensional subspace are appreciably lower than those in its complement. If the moment of inertia $\Lambda$ is small then states with spin greater than $\tfrac12$ will have much greater energy than the spin-$\tfrac12$ states considered here, so truncation to spin $\tfrac12$ can always be justified by choosing $\Lambda$ small. To justify the truncation in momentum space, we need to consider the next-shortest wavevectors associated with $\vec{k}_+$. These are $\tfrac23(\vec{a}_3 - \vec{a}_2)$, $\tfrac23(\vec{a}_1 - \vec{a}_3)$ and $\tfrac23(\vec{a}_2 - \vec{a}_1)$; later we will compare their associated kinetic energies with the energies of states in the 12-dimensional subspace. The generators $r = \rho, \tau$ of the binary tetrahedral group act naturally on wavefunctions in $\mathcal{H}_{\vec{k}_+}$ via $r\cdot\Psi(\vec{x}, q) = \Psi(r^{-1}(\vec{x}, q))$, and these actions fix the 12-dimensional subspace. However, they only define a projective representation and not a true representation, because the group relations are only satisfied up to multiplicative phases. The binary tetrahedral group is known to be Schur-trivial, meaning that every projective representation can be turned into a true representation by twisting the actions of the group elements. In this case, a true representation is obtained by choosing

$\rho\cdot\Psi(\vec{x}, q) = \Psi(\rho^{-1}(\vec{x}, q))$, $\quad \tau\cdot\Psi(\vec{x}, q) = \omega\,\Psi(\tau^{-1}(\vec{x}, q))$. (5.26)

Here we have introduced $\omega = e^{2\pi i/3} = -\tfrac12 + i\tfrac{\sqrt{3}}{2}$, a cube root of unity. We wish to break up the 12-dimensional subspace of the Hilbert space into irreducible subrepresentations of the binary tetrahedral group. To this end, we review these irreducible representations. Besides the trivial representation, there are two further 1-dimensional representations $A_a$ with $a = 1, 2$. The binary tetrahedral group can be identified with the subgroup of the group of unit quaternions generated by $-1$, $\rho = -\tfrac12(1+i+j+k)$, $\tau = i$ (a quick numerical check of this identification is sketched below). The standard identification of unit quaternions with $SU(2)$ matrices gives a two-dimensional representation $E_3$. There are two further inequivalent two-dimensional representations, $E_1$ and $E_2$. Finally, there is a three-dimensional representation $F$ given by $R: SU(2) \to SO(3)$. It is straightforward to check that the action of the binary tetrahedral group on the span of $e_1, e_2, e_3 \in \mathcal{H}_{\vec{k}_+}$ is isomorphic to the representation $F$. The action on the four-dimensional subspace of $\mathcal{H}_0$ consisting of functions of the form $\Psi(\vec{x}, q) = \mathrm{Tr}(\psi q)$ is isomorphic to $E_1 \oplus E_2$.
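The quaternionic description of the binary tetrahedral group is easy to verify numerically. The sketch below is illustrative and not from the paper; the matrix convention chosen for the quaternion units is one standard choice. It generates the closure of $\{\rho, \tau\}$ under multiplication and confirms the expected group order of 24, together with $\tau^2 = -1$ and $\rho^3 = 1$.

```python
import numpy as np

# Quaternion units as SU(2) matrices: 1 -> I, i -> -i*sx, j -> -i*sy, k -> -i*sz.
I2 = np.eye(2, dtype=complex)
qi = np.array([[0, -1j], [-1j, 0]])               # -i * sigma_x
qj = np.array([[0, -1], [1, 0]], dtype=complex)   # -i * sigma_y
qk = np.array([[-1j, 0], [0, 1j]])                # -i * sigma_z

rho = -0.5 * (I2 + qi + qj + qk)                  # rho = -(1 + i + j + k)/2
tau = qi                                          # tau = i

def key(m):
    """Hashable fingerprint of a matrix (entries are exact binary fractions)."""
    return tuple(np.round(m.flatten(), 8))

# Generate the closure of {rho, tau} under matrix multiplication.
group = {key(I2): I2}
frontier = [I2]
while frontier:
    new = []
    for g in frontier:
        for h in (rho, tau):
            gh = g @ h
            if key(gh) not in group:
                group[key(gh)] = gh
                new.append(gh)
    frontier = new

print(len(group))                                          # 24 elements
print(np.allclose(np.linalg.matrix_power(tau, 2), -I2))    # tau^2 = -1
print(np.allclose(np.linalg.matrix_power(rho, 3), I2))     # rho^3 = 1
```

The 24 elements are the eight unit quaternions $\pm1, \pm i, \pm j, \pm k$ together with the sixteen of the form $\tfrac12(\pm1\pm i\pm j\pm k)$, which is the usual description of this group.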
This can be seen as follows: the induced action on the $2\times2$ matrix $\psi$ is by matrix multiplication on the left and on the right. The matrices acting on the left correspond to the representation $A_1 \oplus A_2$, and those acting on the right correspond to the representation $E_3$, so the representation is $(A_1 \oplus A_2) \otimes E_3 \cong E_1 \oplus E_2$. The action on our 12-dimensional subspace is therefore $F \otimes (E_1 \oplus E_2)$. This, it turns out, is isomorphic to $2E_3 \oplus 2E_1 \oplus 2E_2$. To fully describe the decomposition, we introduce basis vectors $f_{ia}$, with $i = 1, \ldots, 6$ and $a = 1, 2$. It can be checked that the $f_{1a}$ span an irreducible subrepresentation isomorphic to $E_3$, the $f_{2a}$ span a second copy of $E_3$, the $f_{3a}$ and $f_{4a}$ span two copies of $E_1$, and the $f_{5a}$ and $f_{6a}$ span two copies of $E_2$.

Hamiltonian matrix and the ground state

Next we need the matrix elements (5.14) for the hamiltonian acting on our truncated Hilbert space. The non-trivial part is the potential. After rotation, the potential given by equations (3.8) and (3.10) takes a rotated form, where $U_\alpha = W_\alpha e^{i\theta_\alpha}$ ($\alpha = 0, 1$) as before. It acts on wavefunctions from our 12-dimensional space by multiplication, and, in order to have a well-defined action, the resulting functions need to be projected back onto the 12-dimensional space. Consider first the action of the functions $e^{i\vec{a}_j\cdot\vec{x}}$ with $j = 1, 2, 3$. In the case $j = 1$ we find three terms, of which the first and third are orthogonal to $e_1, e_2, e_3$, so only the second survives projection onto the span of $e_1, e_2, e_3$. By performing similar computations we find that the actions of $e^{i\vec{a}_j\cdot\vec{x}}$ are

$e^{i\vec{a}_j\cdot\vec{x}}\, e_k = \delta_{j+1,k}\, e_{k+1}$ (no sum over $k$). (5.44)

In this expression, indices $i, j, k$ are to be understood modulo 3. The effect of multiplying a wavefunction by $R_{ij}(q)$ and projecting back to the 12-dimensional space is described by the identity (4.17). Therefore the action of the functions $\mathrm{Tr}(A_\alpha(x) R(q))$ that appear in the potential on the 12-dimensional subspace of the Hilbert space can be computed using equations (5.44) and (4.17), and turns out to be

$\mathrm{Tr}(A_\alpha(x) R(q)) \cdot \psi_{ib} = B_{\alpha; ji}\, \psi_{jb}$, (5.45)

where the $B_\alpha$ are $6\times6$ block diagonal matrices. The action of the potential function is therefore described by a $6\times6$ block diagonal matrix; its lowest eigenvector determines the ground states $\psi_{0a}$ of eq. (5.48) as combinations of $f_{3a}$ and $f_{4a}$ with real coefficients $\mu$ and $\nu$, and its lowest eigenvalue gives the energy of the state with crystal wavevector $\vec{k}_+$. This is to be compared with the energy of the state with $\vec{k} = 0$, which was computed in the previous section using perturbation theory. The value of $W_0^2 + 2W_1^2$ is approximately 1.06. Since we are only interested in energy differences we ignore the term $3/8\Lambda$ which occurs in both expressions. Since we have been assuming that $\Lambda$ is small, the other $\Lambda$-dependent term in brackets can be ignored. Thus the state with crystal wavevector $\vec{k}_+$ will have lower energy if a certain inequality between $M$ and the potential parameters holds; numerically, this inequality holds for $M$ in the range $4.04 < M < 10.11$. Thus for $M$ close to zero (equivalent to small potentials) the state with $\vec{k} = 0$ is preferred, but as $M$ increases past the value 4.04 the state with $\vec{k} = \vec{k}_+$ is preferred. Now we assess the reliability of the approximation that we made by truncating in momentum space. The largest eigenvalue of the $6\times6$ block diagonal matrix that describes the potential is 0.37, which sets the largest energy involved in our calculation. In our truncation of the Hilbert space we neglected states whose energy is bounded below by (5.24); we are justified in neglecting these provided that this potential energy scale is small compared with (5.24). This means that our approximation is valid for the values of $M$ around 4.04 where the transition between the $\vec{k} = 0$ and $\vec{k} = \vec{k}_+$ states occurs.

Expectation values for spin and momentum

Now we turn our attention to the expectation value of spin and momentum in the ground state.
We need to compute matrices describing the action of $\vec{P}_0$ and $\vec{S}$ on the 12-dimensional subspace of the Hilbert space. The action of $\vec{S}$ is given by left multiplication of the matrices $\psi$ by $\tfrac12\vec{\sigma}$. It follows that $S_3 f_{ia} = (-1)^{i+1}\tfrac12 f_{ia}$ for $i = 1, \ldots, 6$ and $a = 1, 2$. In particular, for the ground states $\psi_{0a}$ given by (5.48) we have $S_3 \psi_{0a} = \tfrac12(-\mu f_{3a} - \nu f_{4a})$, and the expectation value of $S_3$ is $\tfrac12(\mu^2 - \nu^2) < 0$. The action of $S^+ = S_1 + iS_2$ is described by a $6\times6$ matrix whose diagonal blocks are zero, and the action of $S^-$ is given by the conjugate transpose of this matrix. As the blocks on the diagonal are zero, the expectation values of $S_1$ and $S_2$ in the states $\psi_{0a}$ are zero, so the expected spin points vertically down into the half-filled lattice of Skyrmions, as was previously claimed. The action of $\vec{P}_0 = -i\nabla$ on the functions $e_1, e_2, e_3$ is simply multiplication by their respective wavevectors. It follows that the action of $P_0^+ = P_0^1 + iP_0^2$ is described by a $6\times6$ block diagonal matrix, and the action of $P_0^-$ is given by the hermitian conjugate of this matrix. It follows that the expectation value of $\vec{P}$ in the ground state is zero, as claimed. Using these formulae it is straightforward to verify equations (5.19) and (5.20). The block diagonal structure of the matrix representing $H_0$ means that the inner products on the left-hand side of (5.20) vanish, as required. Using these identities and our particular values for $U_0, U_1$, we can evaluate the second-order matrix elements $T^{ij}_{ab}$ appearing in (5.14). Note that by construction $T^{ij}_{ab} = T^{ji}_{ab}$. It is straightforward to show, using the matrix given earlier for $P_0^+$, that $\langle\Psi_{0a}| P_0^+ (H_0 - E_0)^{-1} P_0^+ |\Psi_{0b}\rangle = 0$ and $\langle\Psi_{0a}| P_0^- (H_0 - E_0)^{-1} P_0^- |\Psi_{0b}\rangle = 0$. These two identities imply that $T^{11}_{ab} = T^{22}_{ab}$ and $T^{12}_{ab} = -T^{21}_{ab}$. Altogether, this means that $T^{ij}_{ab}$ is proportional to $\delta_{ij}$. The coefficient can be determined by explicit evaluation; the outcome is that for $M < 4.03$ the expectation of momentum points in the opposite direction to $\delta\vec{k}$, while for $M > 4.03$, the region of most interest, they point in the same direction. Notice that the transition occurs at almost exactly the same value of $M$ as where the energy for $\vec{k} = \vec{k}_+$ drops below that for $\vec{k} = 0$. Finally we consider the subleading corrections to the eigenvalue $E$ of $H$ implied by eq. (5.14), which can be rewritten in terms of $T^{ij}_{ab}$. This concludes our verification of spin-momentum coupling based on the crystal wavevector $\vec{k}_+$. If $M > 4.04$ then the crystal wavevector $\vec{k}_+$ is preferred over $\vec{k} = 0$, and the expectation values of spin and momentum are correlated in the manner predicted by the spin-momentum coupling. Our calculation is reliable as long as $M \lesssim 8.9$.

Symmetry arguments

To conclude, we would like to point out that our results in the previous section are robust and insensitive to the details of the choice of potential function. Many of them can be derived using symmetry alone, as we now explain. We begin by analysing the symmetry properties of the operators $\vec{P}$ and $\vec{S}$. Their commutation relations with $\rho$ are as follows:

$\rho S_3 = S_3 \rho$, $\quad \rho S^+ = \omega^2 S^+ \rho$, $\quad \rho P^+ = \omega^2 P^+ \rho$. (6.1)

Since the hamiltonian commutes with the action of the binary tetrahedral group, the eigenspace corresponding to the lowest eigenvalue $E_0$ forms a representation $K$ of this group. Generically this representation will be irreducible, as was the case in the above calculation. Since the group element $-1$ acts non-trivially on the Hilbert space, $K$ must be isomorphic to one of the three representations $E_3$, $E_1$ and $E_2$ introduced above, because $-1$ acts trivially in all other irreducible representations of the binary tetrahedral group.
The commutation relations above show that the images of $K$ under $S^+$ and $P^+$ are isomorphic to $K \otimes A_2$. Since tensoring with $A_2$ cyclically permutes the representations $E_3$, $E_1$ and $E_2$, these image representations are not isomorphic to $K$. It follows that they are orthogonal to $K$. This means that

$\langle\Psi_{0a}| S^+ |\Psi_{0b}\rangle = 0$ and $\langle\Psi_{0a}| P_0^+ |\Psi_{0b}\rangle = 0$, (6.2)

and in particular that $S^+$ and $P_0^+$ have zero expectation value in the ground state. The identity (5.20) can be proved similarly. The operators $S^+ (H_0 - E_0)^{-1} P^+$ and $P^+ (H_0 - E_0)^{-1} S^+$ map $K$ onto a representation isomorphic to $K \otimes A_1$, which is again not isomorphic to $K$, so the inner products in (5.20) have to vanish. To analyse the identity (5.19) we need the symmetry $\sigma$. As has already been noted, $\sigma$ maps the Hilbert space $\mathcal{H}_{\vec{k}_+}$ onto $\mathcal{H}_{\vec{k}_-}$. There is another transformation which swaps $\vec{k}_+$ and $\vec{k}_-$, namely time reversal $T$, whose action is given in eq. (6.3). The composition $\sigma T$ maps $\mathcal{H}_{\vec{k}_+}$ onto $\mathcal{H}_{\vec{k}_+}$. Its commutation relations with $\vec{S}$ and $\vec{P}$ are

$\sigma T P_0^1 = -P_0^1 \sigma T$, $\quad \sigma T P_0^2 = P_0^2 \sigma T$, $\quad \sigma T S_1 = S_1 \sigma T$, $\quad \sigma T S_2 = -S_2 \sigma T$. (6.4)

Since multiplication by $i$ anticommutes with $\sigma T$, the transformation $\sigma T$ anticommutes with $P^\pm$ and commutes with $S^\pm$. The operator that appears in (5.19), when composed with projection onto the eigenspace $K$, defines a linear map $K \to K$. This map commutes with the action of $\rho$ and $\tau$, so by Schur's lemma it acts as multiplication by a scalar. Since it anticommutes with the action of $\sigma T$, this scalar must be pure imaginary (a one-line sketch of this step is given below). Thus symmetry arguments show that an identity similar to (5.19) must hold, with $\lambda \in \mathbb{R}$. However, symmetry arguments alone cannot determine the sign of $\lambda$. This is because replacing the potential $V$ with its negative $-V$ changes the sign of $\lambda$ without altering the symmetry properties. Nevertheless, the sign of $\lambda$ does seem to be fixed by a few coarse features of the above calculation. Consider again the basis vectors $\psi_{0a}$ for the lowest-energy eigenspace. Each of these can be written as a sum of three terms, one involving each of $e_1$, $e_2$ and $e_3$. Each summand is an eigenvector of $\vec{P}_0$, so has a definite momentum vector. Each summand also determines a unique spin vector $\vec{v}$, such that it is an eigenstate of $\vec{v}\cdot\vec{S}$ acting from the left with eigenvalue $\tfrac12$. For each summand, the momentum vector points in the opposite direction to the cross product of $\vec{n}$ with the spin vector. The expectation values for momentum and spin are weighted averages of these vectors. In the case $\delta\vec{k} = 0$ the three summands contribute equally to the wavefunction, and the weighted averages are ordinary averages. Since the momentum vectors sum to zero and the unweighted average of the spin vectors is $\tfrac12(\nu^2 - \mu^2)\vec{n}$, we recover the results derived earlier. When $\delta\vec{k} \neq 0$ the momentum eigenvalues get shifted by $\delta\vec{k}$ and the dominant contribution to the wavefunction is from the summand with the shortest wavevector; for that dominant summand, $\vec{n}\times\langle\vec{S}\rangle$ points in the direction of $\delta\vec{k}$. There are two effects contributing to the expectation value of $\vec{P}$: the shift in momentum vectors and the change of weights. For strong potentials the former dominates, and the expectation value of $\vec{P}$ points in the same direction as the naive momentum $\delta\vec{k}$ (see the discussion around eq. (5.16)). Thus $\vec{n}\times\langle\vec{S}\rangle$ and $\langle\vec{P}\rangle$ point in the same direction, consistent with the spin-momentum coupling.
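The step from Schur's lemma to a purely imaginary scalar can be spelled out in one line. Writing $A$ for the operator in (5.19) restricted to $K$ (so $A = c\,\mathrm{id}_K$ by Schur's lemma), and using that $\sigma T$ is antiunitary, preserves $K$, and anticommutes with $A$, one has for any unit vector $\psi \in K$ (a sketch under these stated assumptions):

```latex
c \;=\; \langle \sigma T\psi,\; A\,\sigma T\psi\rangle
  \;=\; -\,\langle \sigma T\psi,\; \sigma T(A\psi)\rangle
  \;=\; -\,\overline{\langle \psi,\; A\psi\rangle}
  \;=\; -\,\overline{c}
  \;\;\Longrightarrow\;\; c \in i\,\mathbb{R}.
```

Hence the identity (5.19) holds with a coefficient $i\lambda$ for some $\lambda \in \mathbb{R}$, which is the statement in the text.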
Note that all of this follows from the correlation between the spin and momentum vectors of the three summands making up $\psi_{0a}$, and any vector similar to $\psi_{0a}$ with $\mu\nu > 0$ would produce the same effect. Thus we expect a similar correlation between spin and momentum for all values of $U_0, U_1$ close to those used in our calculation.

Conclusions and further work

In a classical picture, the experimentally observed nuclear spin-orbit coupling arises from a rolling motion of a nucleon over the surface of a larger nucleus. However, understanding why such a rolling motion is energetically preferred remains something of a mystery. We have shown here that for a Skyrmion close to the planar surface of a half-filled lattice of Skyrmions, a rolling motion is energetically favoured by the orientational part of the potential energy. To describe this planar rolling motion, it is convenient to introduce the notion of spin-momentum coupling. We have next investigated the quantum mechanics of the Skyrmion, first by analysing the hamiltonian describing the Skyrmion interacting with the half-filled lattice of Skyrmions using perturbation theory. A spin-momentum coupling term appears at second order in perturbation theory, but has the wrong sign, at least for the parameter set obtained from the lightly bound Skyrme model. We then calculated spin-momentum coupling at the level of expectation values, and found that the correct sign is recovered non-perturbatively at stronger potential strengths. The change of sign is correlated with a jump in the crystal momentum of the lowest energy state. Our results were based on a half-filled FCC lattice that has been sliced in the plane $x + y + z = 0$. There is another natural way to slice the FCC lattice, in a plane parallel to one of the coordinate planes ($x = 0$, $y = 0$ or $z = 0$). It would be interesting to investigate the spin-momentum coupling in that situation.
2018-08-30T08:32:56.000Z
2018-06-19T00:00:00.000
{ "year": 2018, "sha1": "a27ce78ce249d2a0a289dbd7a54e93467a4f84c1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.nuclphysb.2018.08.006", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a27ce78ce249d2a0a289dbd7a54e93467a4f84c1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
34697928
pes2o/s2orc
v3-fos-license
Investigations of the Plum pox virus in Chile in the past 20 years

Sharka disease, which is caused by Plum pox virus (PPV), is one of the most serious diseases affecting stone fruit trees around the world. Identified in Bulgaria in 1931, it was restricted to the European continent until 1992, when the virus was identified in Chile. It was subsequently verified in the USA, Canada, and Argentina. Twenty years after first detecting PPV in Chile, it seems clear that the disease cannot be eradicated in spite of various measures. Considering the seriousness of this problem for the domestic industry, a series of studies have been conducted to determine the distribution and degree of transmission of the disease, its biological and molecular characterization, epidemiological aspects, etc. The available information has allowed national phytosanitary control agencies to take steps to decrease the effects of the virus. However, there is a lack of data with respect to epidemiological factors for a more accurate understanding of the performance of the virus under Chilean conditions.

INTRODUCTION

The first symptoms of Sharka or Pox were observed by farmers in southwest Bulgaria after the First World War, and the first scientist to describe the viral nature of the disease was Dimitar Atanasov in 1933 (Dzhuvinov et al., 2007), calling it Sharka disease or Plum pox virus (PPV). Since then, PPV has become one of the most serious problems for the stone fruit industry in Europe (Németh, 1986). Since the appearance of the first symptoms in apricots (Prunus armeniaca L.) in 1926, studies in different countries affected by this serious disease have revealed its causal agent, its characteristics, different isolates, means of transmission, and relevant information with regard to the behavior of different stone fruit cultivars. The most important landmarks include transmission by grafting buds in 1931, demonstrating the viral nature of the disease (Atanosoff, 1932). In 1937, the causal agent of Sharka symptoms was identified, and the plum (Prunus salicina L.) cv. Myrobalan was determined to be the best indicator of the presence of the virus (Christov, 1944). In the mid-1960s, the range of herbaceous and woody hosts was determined (Trifonov, 1965). Diagnosis was significantly advanced in the 1970s and 1980s with serological techniques like microprecipitation and later the ELISA test. At the end of this period and the beginning of the 1990s, PPV was detected by ELISA in apricot, peach (Prunus persica L.), and cherry (Prunus avium L.) (Topchiika, 1992). Later, it was possible to use much more precise diagnostic techniques like the Polymerase Chain Reaction (PCR) (Wetzel et al., 1991; Hadidi and Levy, 1994), resulting in greater knowledge about the range of hosts and viral strains. As well, biotechnological methods associated with genetic transformation generated plant varieties with characteristics of immunity to the virus (Malinowski et al., 2006). The presence of the virus in the Americas was verified for the first time in Chile in 1992 (Herrera, 1994; Herrera et al., 1997) and it was found to have spread to all areas growing stone fruit (Herrera and Madariaga, 2002; Muñoz and Collao, 2006). The virus was detected in 1999 in Adams County, Pennsylvania, USA. Quarantine measures were quickly taken to prevent the spread of the disease (Dunkle, 1999). However, there were subsequent reports of positive samples in New York and Michigan (Barba et al., 2011).
Subsequently, PPV was detected in Canada in 2000 (Thompson et al., 2001) and in Argentina in 2006 (Dal Zoto et al., 2006). While the disease had been limited to Europe for many years, in addition to the Americas it has been detected in Asian countries such as China, Pakistan, India, and Japan (Barba et al., 2011). PPV belongs to the genus Potyvirus, of the Potyviridae family. The genome consists of one positive-sense, single-stranded RNA molecule encapsulated in filamentous viral particles, about 750 nm long and 15 nm wide. The virus is transmitted from infected trees by grafting and other vegetative propagation techniques, or non-persistently by aphid vectors such as Aphis spiraecola and Myzus persicae. The numerous PPV isolates differ in biological and epidemiological properties such as aggressiveness, aphid transmissibility, and symptomatology. These differences have been serologically and molecularly documented, leading to the clustering of PPV into six types or strains: PPV-D, PPV-M, PPV-EA, PPV-C, PPV-W and PPV-rec (Candresse and Cambra, 2006). The costs associated with the disease involve not only direct losses in stone fruit production, eradication, compensatory measures, and lost revenue, but also indirect costs including those from preventive measures such as quarantine, surveys, inspections, control nurseries, diagnostics, and the impact on foreign and domestic trade. The main causes of PPV spreading throughout the world are illegal trafficking and inefficient virus control in propagation material exchanged among countries. Once the virus is established in an area, the aphid vector species spread it within the same area, and later winged forms of these vectors transmit it to neighboring Prunus species. The presence of the disease in a country or specific geographic area has serious agronomic, economic, and policy consequences. Productivity is mainly affected by reduced fruit quality and fruit falling prematurely. In addition, its characteristic of being transmitted by aphid vectors and propagation material makes it difficult to control. The PPV does not kill the plant, but if infected plants are not removed, they will serve as inocula to spread the virus to healthy plants. Technical and policy decisions for eradication programs must be supported by quantitative data, hence the importance of knowing the epidemiology of the virus, the behavior of host species, and the most affected geographic areas. It is necessary, then, to establish special regulations and develop specific control strategies to maintain disease-free nurseries. In this sense, the relationship between the state agencies responsible for regulations and the nursery operators is essential for virus control. This work, 20 years since the virus was detected in Chile, aims to summarize existing knowledge and envision future research tasks in order to contribute specific criteria for PPV control in our stone fruit industry.

Detection and strain characterization

During the 1991-1992 growing season, symptoms similar to those caused by PPV were observed in an old stone fruit collection located at Buin (Metropolitan Region), Chile (Herrera, 1994). Apricot plants cv. Bergeron showed chlorotic rings on their leaves and malformation of fruit with the typical rings produced by PPV. Seeds from the affected plants showed typical yellow rings on the surface, while peach plants cv. Springcrest showed chlorosis in new leaves around secondary veins.
The presence of characteristic PPV symptoms in apricot and peach, a positive reaction to poly- and monoclonal antisera, and the observation of potyvirus-type particles with virus-specific antisera under the electron microscope confirmed the presence of the virus with a high degree of confidence (Herrera, 1994; 1997; 2000a). These data, published in 1994, were the first reference to the presence of this virus in the Americas (Herrera, 1994). Until then PPV had been restricted to Europe. The identification of the virus was reported to the international community at the Conference on Plum pox virus in Bordeaux, France, in 1993 (Acuña, 1993). At the same time, national authorities, through the Agricultural and Livestock Service (SAG), established compulsory testing for the virus throughout the country (SAG, 1994). The identification of PPV in Chile was later corroborated by PCR (Rosales et al., 1996), from the viral RNA present in crude extracts of affected plants. The authors prepared cDNA from the extreme 3' terminal region of the virus genome, which served as a template in PCR, in which two pairs of primers were used, one amplifying a 243 bp (base pairs) fragment of the carboxyl-terminal region of the viral coat protein gene (Wetzel et al., 1991), and the other a 220 bp fragment in the 3' non-coding region of the PPV genome (Hadidi and Levy, 1994). These amplified DNA fragments allowed the specific identification of the virus in 24 of 28 analyzed samples. Later, in order to determine the PPV strain of the isolate detected in Chile, Rosales et al. (1996) cloned and sequenced 243 bp fragments (Wetzel et al., 1991) using extracts from plant leaves with PPV symptoms collected in different localities in Chile. The nucleotide sequence of the cloned fragments showed the presence of restriction sites of the enzyme Rsa1 (GTAC), which is characteristic of the PPV-D strain. The same fragments contained conserved recognition sites of the Alu1 enzyme, found in the majority of PPV isolates. Comparing the nucleotide sequences of the amplified fragments to other PPV sequences revealed a homology very similar to PPV-D. The identity varied from approximately 92.6% to 99.2%. The highest percentages of homology were found with those known as typical of PPV-D (PPV-Ranković, PPV-D, PPV-NAT) and the lowest (92.6%) with PPV-El Amar (Cervera et al., 1993). Reyes and others (Reyes et al., 2001) characterized eight Chilean isolates based on biological and molecular methods. The isolates were transmitted by grafting to Prunus tomentosa Thunb. and by mechanical inoculation to Nicotiana benthamiana Domin. The Chilean isolates did not show symptomatological differences from PPV-D in those species. On the other hand, from the molecular point of view, the authors concluded that the eight isolates from different geographic areas of the country included in routine PPV checks correspond to the PPV-D strain, based on comparing the nucleotide sequences of the isolates to those belonging to the D, M, C, and El Amar PPV strains. However, when they compared the sequences of the Chilean isolates within the branch of the virus D strain, they determined that three were closely related to isolates described in central Germany (Deborré et al., 1995), another three showed high homology to isolates described in Poland (Malinowski et al., 2006), and one to a French isolate (Ravelonandro et al., 1988). This suggests that the Chilean PPV isolates originate from more than one place.
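As an illustration of the restriction-site screening logic behind the strain typing described above (this is a hypothetical sketch, not the authors' protocol or code, and the input sequence is an invented placeholder rather than a real PPV sequence), one can scan an amplified fragment for the RsaI recognition site GTAC and report the resulting fragment sizes:

```python
# Hypothetical example of RsaI (GT^AC, a blunt cutter between T and A)
# site screening on an amplified fragment; PPV-D-type isolates are
# expected to carry the site, so digestion splits the amplicon.
def rsa1_fragments(seq):
    """Return fragment lengths after cutting at every GT^AC site."""
    site, cut_offset = "GTAC", 2
    cuts, start = [], 0
    while True:
        i = seq.find(site, start)
        if i == -1:
            break
        cuts.append(i + cut_offset)
        start = i + 1
    edges = [0] + cuts + [len(seq)]
    return [edges[j + 1] - edges[j] for j in range(len(edges) - 1)]

amplicon = "ATGCCGGTACTTAGCGCATTGACCGGTTAACGTACGGATTC"  # placeholder, not PPV
print(rsa1_fragments(amplicon))  # [8, 25, 8] -> site present (PPV-D-like)
```

An undigested (single-fragment) result on a gel would instead suggest an isolate lacking the diagnostic site.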
Based on the available information and the view that the studied isolates are representative of the virus in Chile, the authors concluded that the only strain of the virus present in the country is PPV-D. Fiore et al. (2010) concluded the same after analyzing 14 isolates from different localities and comparing them to the PPV-D type by PCR. This information has allowed SAG to officially report that the only strain of PPV in the country is the D type (Muñoz and Collao, 2006). However, specific isolates have been identified that do not necessarily respond in their molecular behavior to the PPV-D type. Numerous cases have been found under field conditions with no obvious symptoms of PPV, but samples from these plants are positive to specific commercial PPV antisera, and positive to primers specific to the 3' end region of the viral genome (Reyes et al., 2001). Reyes et al. (2001), working with asymptomatic isolates from peach trees transmitted to Nicotiana clevelandii A. Gray and N. benthamiana, indicated that, unlike samples from symptomatic isolates, these isolates have a weak reaction to the commercial BIOREBA antiserum (Reinach, Switzerland). As well, when using different forward primers complementary to those used to amplify the region corresponding to the coat protein (CP) of the viral genome, the forward primers PPV9115 and PPV9207 give a faint signal. This contrasts with the results of symptomatic peach samples, where the amplification of cDNA yields the expected fragments in all cases. These authors concluded that the results suggest important differences between the potyvirus isolates (symptomatic and asymptomatic), partially explaining the different reactions to the commercial BIOREBA antiserum. For years PPV was considered to have a highly conserved nucleotide sequence, since the first isolates had a high level of similarity. However, the sequencing and characterization of other isolates in recent years reveal a wide range as a product of viral recombination (Cervera et al., 1993).

Distribution and dissemination

There are 8 545 ha of almond (Prunus dulcis (Mill.) D.A. Webb), 13 174 ha of cherry, 21 001 ha of plum, 1 405 ha of apricot and 13 885 ha of peach in Chile, representing 20.8% of the total area cultivated with fruit trees (ODEPA, 2012). Chile holds a strong position among exporters of stone fruits in the southern hemisphere, making it necessary for the industry to pay special attention to all factors that cause losses in fruit quality and production. Among these factors are diseases caused by viruses, for which there are no methods of removal from infected plants. Numerous viruses have been reported globally that affect these fruit tree species (Németh, 1986). The following viruses affecting stone fruit have been identified in Chile: Prunus necrotic ringspot virus (PNRSV) (Ascui and Alvarez, 1988), Prune dwarf virus (PDV) (Herrera, 1996; Herrera and Madariaga, 2002), Tomato ringspot virus (TomRSV) (Auger, 1989) and Plum pox virus (Herrera, 1994). A first approximation of the incidence of these viruses in Chile was reported in 2002, finding a predominance of PNRSV and PDV (20% to 30%) over TomRSV (5%) (Herrera and Madariaga, 2002). In 1998, Herrera et al. (1998) carried out the first study of the incidence of PPV in commercial stone fruit orchards in Chile. They employed the ELISA method to detect the virus and collected samples at random from orchards (50 samples per orchard) over three growing seasons (1994-1995, 1995-1996, and 1996-1997).
The authors established the distribution and incidence of the virus in the country. Prior to this work, PPV had only been determined in specific areas near Santiago (Herrera, 1994). It was concluded that the virus was not affecting a particular area but rather was widely distributed wherever stone fruit production was taking place in Chile. From a total of 10 051 samples collected in the three growing seasons, 15.2% were positive for PPV. Infection rates averaged over the three years were 13.1% in the Atacama Region, 18.5% in the Coquimbo Region, 11.9% in the Valparaíso Region, 15.7% in the Metropolitan Region, and 16.4% in the O'Higgins Region. The most affected species were peach (15.3%) and nectarine (17.2%), followed by plum (8.3%), with minor infection detected in apricot (1.9%). In 2007, a PPV survey of commercial orchards based on 1 396 collected samples (three samples per orchard), analyzed by immuno-impression in the areas where the fruit trees were cultivated, reported that 3.2% of the samples were positive for PPV-D (Fiore et al., 2010). In this extensive work, some factors seem relevant from the epidemiological point of view, and it is necessary to highlight that the methodology included sufficient random samples from every orchard and that the total number of samples was more than 10 000. First, the disease was found distributed everywhere that stone fruit is grown in Chile. This suggests that there are sufficient sources of inoculum for the virus to continue spreading from orchard to orchard using aphids as a vector. Second, obvious PPV symptoms in leaves and fruits were observed only in the Metropolitan and Libertador General Bernardo O'Higgins Regions. In other regions PPV-positive plants were asymptomatic. This contrasts with the European experience, where the virus, once established in an area, spreads quickly in the following four seasons and between 60% and 90% of plants show symptoms. The difference may be associated with differences in spring temperatures: higher temperatures decrease viral concentrations in affected plants and prevent severe symptoms. It also cannot be ruled out that Chilean plant varieties, being mainly American in origin, could have different reactions to viral infection from those of plants commonly used in Europe. Third, the highest percentages of infection were found in peach and nectarine rather than plum and apricot. This is consistent with descriptions of PPV-D behavior in Europe, where aphid vectors easily transmit the virus to peach and nectarine, while it is more difficult for it to move from these species to plum and apricot (Barba et al., 2011). Infection rates in nurseries are lower than those in commercial orchards. Thus, the work carried out in 2000 (Herrera, 2000a; 2000b; Herrera and Madariaga, 2002) on a total of 13 609 nursery plants analyzed by ELISA showed an average PPV-D infection rate across six species (peach, nectarine, plum, apricot, almond, and cherry) of 4.2%. The virus was found in almond (2.6%), plum (7.7%), apricot (35.3%), peach (2.6%) and nectarine (7.7%). In 2006 (Muñoz and Collao, 2006), SAG reported the results of mandatory disease control in stone fruit tree nurseries, which considered a total of 158 403 samples tested by DASI-ELISA and PCR over 10 yr and found 0.11% of plants positive for PPV-D. In general, comparing the results of studies in the 1990s to those of more recent studies, the percentages of PPV in stone fruit (from orchards and nurseries) have been decreasing.
This could be partly attributed to the compulsory SAG measures to prevent the virus from spreading by monitoring for PPV in nursery and orchard stone fruit.

Epidemiology

Since PPV was identified in Chile in 1994 (Herrera, 1994), studies have been conducted on its impact on commercial stone fruit orchards (Fiore et al., 2010) and nurseries (Muñoz, 2001; Herrera and Madariaga, 2002), as well as on prevalent strains (Rosales et al., 1998) and molecular characteristics (Reyes et al., 2001). However, there is not enough information on epidemiological aspects to allow for making decisions or establishing criteria with respect to the evolution of the disease in specific areas of Chile. In 2003, Herrera and Madariaga (2003) evaluated the degree of virus spread in a commercial apricot orchard with three cultivars: 'Dina', 'Castelbrite', and 'Katy'. Results showed that the number of 'Dina' plants with PPV symptoms in fruits increased 26.7% in a period of 4 yr, but in the last season a further 24.2% of the plants were ELISA-positive although they showed no symptoms. In this case, plants with symptoms plus those in the latency period of the virus increased by 50.9% in a period of 4 yr. The spread of the virus was significantly less in 'Castelbrite' and 'Katy' than in 'Dina'. The speed at which the virus spread in this study was less than in cases in Spain (Llacer et al., 1992), where it required between 2 and 5 yr to reach 100%. A study in France described periods of 8 to 9 yr to reach 100% infection (Adamolle et al., 1994). The results of this work suggest that all the factors for virus spread in the field are present (a rough quantitative comparison of these spread rates is sketched below). In fact, the aphid species Myzus persicae, Aphis craccivora, and A. gossyppi were identified as the most abundant among stone fruit and as transmitters of PPV (Muñoz and Collao, 2006). The speed of viral spread depends on the transmission efficiency of the most abundant aphid species in a given place, the production of winged forms, winds in the area, and the presence of inoculum sources (weeds and susceptible cultivars). In epidemiological terms, not only do the population dynamics of vectors play an important role, but also the degree of susceptibility of commercial varieties. Cultivars that are more susceptible to the virus, or those that show more severe symptoms or have a shorter incubation period, have higher concentrations of the virus. Consequently, varietal behavior will be a determining factor affecting the greater or lesser spread of the virus through an orchard and/or area. There have been no specific studies in Chile of the response of different commercial varieties to viral infection under field conditions. However, studies of peach and apricot have shown that under our conditions the virus has distinct effects on different varieties. For example, the peach cultivar 'Mary Gene Tree' showed higher rates of infection than 'Suncrest' (Millán, 1995), while in apricot, 'Dina' is much more susceptible than 'Katy' and 'Castelbrite' (Herrera and Madariaga, 2003). From this perspective, it is necessary to carry out much more research on the reaction of Chilean stone fruit cultivars to PPV in order to recommend the best material to producers.

Control

PPV is the most important disease affecting the stone fruit sector, mainly because it can severely reduce fruit quality and consequently reduce exports. It also has a strong capacity to spread under field conditions, covering extensive crop areas.
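As a rough quantitative companion to the spread data in the Epidemiology subsection above (a back-of-envelope illustration, not an analysis from the paper; the assumed starting incidence is invented), one can compare the apparent annual spread rates implied by endpoint incidences under a simple exponential-growth assumption:

```python
import math

# Illustrative: apparent exponential rate r from incidence I(t) = I0 * e^(r*t).
# The 'Dina' endpoint (50.9% infected or latent after 4 yr) is from the text;
# the assumed initial fraction I0 is a hypothetical guess for illustration.
def annual_rate(i0, i1, years):
    return math.log(i1 / i0) / years

i0_assumed = 0.05   # hypothetical initial fraction infected
i1_dina = 0.509     # infected + latent after 4 years ('Dina', from the text)
print(f"'Dina' apparent rate: {annual_rate(i0_assumed, i1_dina, 4):.2f} per year")

# Under the same assumption, an orchard reaching ~100% in 2 years
# (the faster Spanish reports cited) implies a much higher rate:
print(f"2 yr to ~100%: {annual_rate(i0_assumed, 0.99, 2):.2f} per year")
```

The comparison only illustrates orders of magnitude; real PPV progress curves are better described by logistic or spatially explicit models.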
Consequently, this pathogen is considered in Europe and the USA as one of the greatest threats to the stone fruit industry (Barba et al., 2011). To prevent the introduction of the virus, countries have implemented regulatory systems for the exchange of plant material. However, when the disease is detected for the first time in a given area, it is necessary to take drastic measures to apply control through eradication schemes. If these are not effective, authorities must establish measures to contain the disease, to reduce damage and prevent its spread to areas free of the virus. From this perspective, it is highly recommended to adopt a holistic approach to control. To do this, PPV control could be a co-responsibility of the public and private sectors, with the public sector implementing mandatory control measures, as SAG did in Chile (SAG, 1994), and the private sector collaborating via the use of certified genetic material, virus-resistant varieties, vector control, and the elimination of affected plants. The eradication of PPV has proven impossible, particularly in Chile, which is why preventive measures have been required to prevent its spread. In this respect, Muñoz (2001) suggested that the mandatory control measures of SAG have been successful in controlling the disease and preventing serious economic damage and the spread of the virus. In the last 10 years the percentage of positive samples from nursery mother plants taken by SAG has decreased from 1.63% in 1995 to 0.008% in 1999 (Muñoz, 2001). However, 20 years since PPV was detected, it seems that extensive surveys are still necessary to monitor the movement of the virus, sampling large numbers of all stone fruit species from a large number of commercial orchards. Although regulations require nurseries to analyze mother plants for PPV, there is no systematic program for plant certification in nurseries (Müller and Mártiz, 2011). As well, there is no detailed information on the susceptibility or resistance to PPV of the varieties used by producers. European countries have made significant efforts to test their own plant species against PPV to define their behavior and recommend the best stone fruit cultivars to producers (Barba et al., 2011). Extensive screening of germplasm has failed to identify sources of resistance among peach species. In contrast, resistant strains of apricot and plum have been identified. In Europe, where the spread of PPV is no longer under control, the cultivation of less susceptible or partially tolerant cultivars allows the continuation of stone fruit production. However, this practice may contribute further to PPV spreading. Under such circumstances, mineral oil treatment to control aphid vectors, which reduces but does not totally prevent PPV from spreading to young plants in nurseries, might be considered as an additional measure. The only study in Chile related to cultivar reaction to PPV was done by Wong et al. (2010), who demonstrated the effective resistance of transgenic C5 plum to four Chilean isolates of the PPV-D strain.

CONCLUSIONS

Twenty years after the detection of PPV in Chile, it now seems clear that the disease cannot be eradicated despite the measures undertaken to do so. This necessarily means that in the future the fruit industry must contain and manage the disease, which requires complementary efforts of the public and private sectors to prevent the virus from spreading.
This review reflects important efforts in the detection, identification, distribution, and incidence of the virus in different species of stone fruit in Chile. However, there is a lack of data on epidemiological factors that would allow a better understanding of the performance of the virus under Chilean conditions. Consequently, it is necessary to develop research to accurately determine the aphid vector species and their efficiency of transmission to each stone fruit species. There is some evidence of the susceptibility or resistance of our varieties, but more information is needed to make recommendations for their use by producers. As well, there is an evident need to map the areas with greater or lesser amounts of viral inoculum in order to concentrate long-term eradication efforts in some areas. Finally, the identification of asymptomatic plants under field conditions is necessary as part of studying the epidemiology of the isolates present in Chile.
2017-09-07T17:27:53.702Z
2013-03-01T00:00:00.000
{ "year": 2013, "sha1": "f7592e710a88f58741baa12e5122f37c7e7029e9", "oa_license": "CCBYNC", "oa_url": "https://scielo.conicyt.cl/pdf/chiljar/v73n1/at09.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b482e079d8a716b5a09bcf6a1696834c9efa9620", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Political Science" ], "extfieldsofstudy": [ "Biology" ] }
238942479
pes2o/s2orc
v3-fos-license
Significant enhancement in quantum-dot light emitting device stability via a ZnO:polyethylenimine mixture in the electron transport layer

The effect of adding polyethylenimine (PEI) into the ZnO electron transport layer (ETL) of inverted quantum dot (QD) light emitting devices (QDLEDs) to form a blended ZnO:PEI ETL, instead of using it in a separate layer in a bilayer ZnO/PEI ETL, is investigated. Results show that while both the ZnO/PEI bilayer ETL and the ZnO:PEI blended ETL can improve device efficiency by more than 50% compared to QDLEDs with only ZnO, the ZnO:PEI ETL significantly improves device stability, leading to more than 10 times longer device lifetime. Investigations using devices with marking luminescent layers, electron-only devices and delayed electroluminescence measurements show that the ZnO:PEI ETL leads to a deeper penetration of electrons into the hole transport layer (HTL) of the QDLEDs. The results suggest that the stability enhancement may be due to a consequent reduction in hole accumulation at the QD/HTL interface. The findings show that ZnO:PEI ETLs can be used for enhancing both the efficiency and stability of QDLEDs. They also provide new insights into the importance of managing charge distribution in the charge transport layers for realizing high-stability QDLEDs, and new approaches to achieve that.

Introduction

Owing to their unique luminescence properties, which include high quantum yield, narrow emission spectra and size-dependent colour tunability, colloidal quantum dot (QD) materials are attracting significant attention of the scientific community for utilization in future optoelectronic and energy harvesting devices. [1][2][3][4][5][6][7] Electroluminescent quantum dot light emitting devices (QDLEDs) are emerging as forefront players for next-generation flat panel displays. With high electroluminescence (EL) efficiency and stability, in some cases reaching an external quantum efficiency (EQE) of 30.9% 8 and an LT95 of 11 000 hours from an initial luminance (L0) of 1000 cd m−2, 9 the performance of these devices is quickly approaching that of organic light emitting devices (OLEDs) used in commercial products. Despite this remarkable progress, the EL stability of many QDLEDs remains relatively low, and the root causes of this behaviour are still not well understood. [9][10][11][12][13][14] While early work has focused primarily on the QD light emission layer (EML) for understanding the root causes of this phenomenon, 17 recent findings show that the charge transport layers also play a major role in their EL stability. [15][16][17] This is especially true for the hole transport layers (HTLs), which comprise organic materials and thus have a high propensity to degradation by excitons, thermal and environmental stress factors. 11,[18][19][20] In these devices the electron transport layers (ETLs) are commonly made of ZnO due to its chemical stability, high electron mobility and the energy of its conduction band, which well matches that of the QDs and thus facilitates electron injection into the QD EML. Solution-coated ZnO ETLs, however, have structural and stoichiometric defects that can act as exciton quenching sites and thereby reduce device EQE. 21,22 Therefore, to passivate the ZnO surface and/or prevent excitons from reaching it, a thin layer of a wide bandgap material, such as Al2O3 (ref. 13) or a polymeric material, [22][23][24][25][26] is often introduced in between the ZnO ETL and the EML.
Polymers containing aliphatic amine groups, such as polyethylenimine (PEI) and its ethoxylated derivative (PEIE), are often used for this purpose. 27,28 These polymers have been used as electrode interfacial layers in OLEDs and organic solar cells (OSCs) in recent years for facilitating electron injection. The efficiency enhancement observed upon their use as ZnO passivation layers in QDLEDs is therefore sometimes also attributed to a similar effect. However, since electron injection into the QDs is already asymmetrically higher than hole injection, an improvement in electron injection should worsen rather than improve charge balance in the EML and thus reduce EQE. Alternative explanations for the efficiency enhancement have therefore been provided, focusing on the role of PEI(E) in passivating the ZnO surface, 28 shifting the electron-hole recombination zone away from the QD/HTL interface 21 or impeding (rather than facilitating) electron supply to the QD EML due to its insulating nature. 27 Conversely, while the influence of PEI passivation layers on QDLED efficiency has been well studied, their impact on stability has received much less attention, with only one recent study suggesting that they had a limited effect. 21 They also need to be used in the form of ultrathin (<10 nm), pin-hole-free layers, which is challenging for solution coating. Instead, mixing the PEI into the ZnO in the form of a ZnO:PEI blended ETL, rather than using it in a separate layer, has therefore been recently proposed, 29 similar to what was done in OLEDs 29-31 and OSCs. [32][33][34][35][36] In the only report applying a ZnO:PEI(E) ETL to QDLEDs, Shi et al. showed that using a ZnO:PEIE blended ETL can improve the efficiency of inverted blue QDLEDs, but its effect on stability was not addressed. 37 In this work, we investigate and compare the effects of using a ZnO:PEI blended ETL versus a bilayer ZnO/PEI ETL structure on device performance in inverted red QDLEDs. Results indicate that while both the ZnO/PEI bilayer ETL and the ZnO:PEI blended ETL can improve device efficiency by more than 50% compared to QDLEDs with the ZnO ETL, the ZnO:PEI ETL has a significant advantage in terms of improving device stability, leading to more than 10 times longer LT50, defined as the time elapsed before the luminance decreases to 50% of its initial value, in the case of the ZnO:PEI ETL with 0.3 wt% PEI. Investigations show that the ETL enables a deeper penetration of electrons into the HTL, suggesting that the stability enhancement may be the result of a consequent reduction in hole accumulation in the HTL at the QD/HTL interface, the latter being a known cause of the deterioration in EQE over time. 38

Device fabrication

Inverted QDLEDs of the structure ITO/ETL/QD/CBP/MoO3/Al are used in this work. Indium tin oxide (ITO) patterned glass substrates from Kintec are first sonicated with Micro 90 and DI water, and then sequentially rinsed with acetone and isopropanol. The washed substrates are then treated by an oxygen plasma to improve the wettability of the ITO surface. For sol-gel ZnO, zinc acetate (197 mg, Sigma-Aldrich) and ethanolamine (54 µL, Sigma-Aldrich) are mixed in ethanol (6 mL, Sigma-Aldrich) on a 50 °C hotplate with vigorous stirring at 700 rpm for 40 minutes in a N2-filled glove box. PEI solution is prepared by stirring branched PEI (Sigma-Aldrich) in ethanol or 1-propanol (Sigma-Aldrich) at 700 rpm overnight in a N2-filled glove box.
1 mL of PEI solution in ethanol, at different concentrations, is then mixed with 1 mL of ZnO solution at 700 rpm for 1 hour to make solutions of the different ZnO:PEI ratios. ZnO solutions are spin coated at 1000 rpm on the cleaned ITO substrates, followed by 150 °C baking on a hotplate for 30 minutes. ZnO:PEI blended solution is deposited by solution coating at 2000 rpm, followed by 120 °C baking on a hotplate for 30 minutes. For the ZnO/PEI bilayer ETL, 0.5 mg mL−1 of PEI dissolved in 1-propanol is spin coated at 5000 rpm on top of a ZnO film, followed by 120 °C baking on a hotplate for 20 minutes. 4 mg mL−1 CdZnSe/ZnSe/CdZnS/ZnS QDs (Mesolight Inc.) dispersed in octane (Sigma-Aldrich) is deposited on the ETL at 500 rpm and baked on a 50 °C hotplate for 30 minutes. 4,4′-Bis(N-carbazolyl)-1,1′-biphenyl (CBP, Angstrom Engineering), molybdenum trioxide (MoO3, Angstrom Engineering) and aluminium (Al, Angstrom Engineering) are then deposited using a thermal evaporator (Angstrom Engineering) at 5 × 10−6 torr for the HTL, hole injection layer (HIL) and anode. Film thicknesses are measured using a Dektak-150 profilometer.

Device characterization

Current-voltage-luminance (J-V-L) measurements are conducted using an Agilent 4155C semiconductor parameter analyzer with a silicon photodiode and a Minolta CS-100 Chromameter. EL and PL spectra are collected using an Ocean Optics QE65000 spectrometer, using a Newport 67005 200 W HgXe arc lamp with a monochromator as a PL excitation source. Device EL stability measurements are conducted under a constant current of 20 mA cm−2 using an M6000PLUS OLED lifetime test system. Surface topography measurements are conducted using a Veeco Nanoscope atomic force microscope (AFM). Time-resolved photoluminescence (TRPL) is measured using an Edinburgh Instruments FL920 spectrometer. Delayed electroluminescence is measured with an R928 photomultiplier tube and the signal is amplified using a Keithley 428 current amplifier. The prompt EL signal is blocked with the help of a ThorLabs MC1000A optical chopper. Forward and reverse bias pulse signals are applied using a Stanford Research Systems DG535 digital delay/pulse generator. A Tektronix TDS5054 digital phosphor oscilloscope then records the delayed EL signals. The QDLEDs are kept in a nitrogen atmosphere at all times.

Results and discussion

We first compare the effect of using a ZnO:PEI blended ETL vs. a ZnO/PEI bilayer ETL vs. a ZnO-only ETL on the EL characteristics of the QDLEDs. For this purpose we fabricate and test four groups of QDLEDs of the general structure ITO/ETL (~32 nm)/QD (30 nm)/CBP (50 nm)/MoO3 (5 nm)/Al, with the ETL made of ZnO:PEI with 0.1% or 0.3% PEI by weight (we will denote the ZnO:PEI ETLs with 0.1 wt% and 0.3 wt% PEI as ZnO:PEI0.1 and ZnO:PEI0.3, respectively), ZnO/PEI, or only ZnO, the last one to serve as control. The thickness of the ETL in all devices was ~32 (±3) nm, which included the additional thickness of the neat PEI layer (~8 nm) in the case of the bilayer ETL. Fig. 1(a) depicts the general QDLED structure, whereas Fig. 1(b, c and d) present the J-V-L, EQE and EL spectra of the devices, respectively. As can be seen from the J-V characteristics, the devices with the ZnO/PEI and ZnO:PEI ETLs all have a higher threshold voltage (Vth) and a lower current density at any given voltage in comparison to the control device. One can also see that increasing the PEI concentration from 0.1 wt% to 0.3 wt% in the ZnO:PEI ETL reduces the current at any given voltage.
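As a small bookkeeping aid for the blending recipe above (a hypothetical helper, not from the paper: the weight basis of the 0.1 wt% and 0.3 wt% figures is not stated, so the solids basis and the PEI stock concentrations below are assumptions), the nominal PEI weight fraction of an equal-volume blend can be estimated as follows:

```python
# Hypothetical helper: nominal PEI weight fraction of a ZnO:PEI blend
# prepared by mixing equal volumes of the two stock solutions.
# Assumes (idealisation) that all non-volatile solids end up in the film
# and uses the zinc acetate mass as the ZnO solids basis; real sol-gel
# conversion would change the absolute numbers.

def blend_wt_percent_pei(c_pei_mg_per_ml, v_pei_ml, c_zno_mg_per_ml, v_zno_ml):
    m_pei = c_pei_mg_per_ml * v_pei_ml
    m_zno = c_zno_mg_per_ml * v_zno_ml
    return 100.0 * m_pei / (m_pei + m_zno)

c_zno = 197 / 6.0                 # mg/mL zinc acetate in the sol (from the recipe)
for c_pei in (0.033, 0.1, 0.2):   # illustrative PEI stock concentrations, mg/mL
    pct = blend_wt_percent_pei(c_pei, 1.0, c_zno, 1.0)
    print(f"{c_pei} mg/mL PEI stock -> ~{pct:.2f} wt% PEI in blend")
```

Under these assumptions, stock concentrations near 0.03 and 0.1 mg mL−1 would land close to the 0.1 wt% and 0.3 wt% blends studied here, but this is an inference, not a value from the paper.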
Results and discussion

We first compare the effect of using a ZnO:PEI blended ETL, a ZnO/PEI bilayer ETL and a ZnO-only ETL on the EL characteristics of the QDLEDs. For this purpose, we fabricate and test four groups of QDLEDs of the general structure ITO/ETL (∼32 nm)/QD (30 nm)/CBP (50 nm)/MoO3 (5 nm)/Al, with the ETL made of ZnO:PEI with 0.1% or 0.3% PEI by weight (denoted ZnO:PEI0.1 and ZnO:PEI0.3, respectively), ZnO/PEI, or ZnO only, the last serving as the control. The thickness of the ETL in all devices was ∼32 (±3) nm, which included the additional thickness of the neat PEI layer (∼8 nm) in the case of the bilayer ETL. Fig. 1(a) depicts the general QDLED structure, whereas Fig. 1(b, c and d) present the J-V-L characteristics, EQE and EL spectra of the devices, respectively. As can be seen from the J-V characteristics, the devices with the ZnO/PEI and ZnO:PEI ETLs all have a higher threshold voltage (Vth) and a lower current density at any given voltage in comparison to the control device. One can also see that increasing the PEI concentration from 0.1 wt% to 0.3 wt% in the ZnO:PEI ETL reduces the current at any given voltage. The higher Vth and lower current density suggest that the PEI makes electron injection and transport more difficult, an effect that can be attributed to its low conductivity. An examination of the EQE versus current density characteristics in Fig. 1(c) shows that using the PEI brings about a significant increase in EQE, raising the maximum value from 7.7% (in the case of the ZnO control device) to 11.2%, 11.0% and 12.1% for the ZnO/PEI, ZnO:PEI0.1 and ZnO:PEI0.3 devices, respectively. With the J-V characteristics in perspective, the EQE enhancement can be attributed, at least in part, to the role of the PEI in reducing the charge imbalance in the QD EML produced by the asymmetric carrier injection barriers. The passivation of ZnO surface defects by the PEI may also contribute to this efficiency enhancement. While this passivation effect has only been studied in devices with ZnO/PEI in the past, 24 a similar effect may be expected in the case of the ZnO:PEI blends. The EL spectra (Fig. 1(d)) show a single emission band peaked at 632 nm, indicating that the majority of radiative recombination happens in the QD EML in all devices.

To test the passivation of ZnO surface states by the PEI, we use TRPL measurements to probe changes in the QD exciton lifetime on the various ETLs. Because ZnO surface defects act as efficient quenching sites for QD excitons, their passivation would extend the exciton lifetime and lead to a slower decay in the TRPL characteristics of the QDs in contact with the ETL. Fig. 2 depicts the TRPL characteristics of the QD layers on the four ETL configurations, collected at a wavelength of 630 nm (i.e. the QD emission peak). Clearly, the TRPL decay rates depend on the ETL, with QDs coated on ZnO/PEI exhibiting the slowest decay and those coated on ZnO the fastest. The slow decay in the case of the ZnO/PEI points to the effectiveness of the PEI layer in passivating the ZnO surface defects, consistent with previous reports. 21,24,37 The slower decay of the ZnO:PEI samples relative to the ZnO control indicates that mixing PEI into the ZnO also confers some surface passivation, although to a lesser extent than when the ZnO surface is covered completely by a PEI layer. Notably, increasing the PEI concentration in the ZnO:PEI layer from 0.1% to 0.6% is found to have a negligible effect on the TRPL decay rate (see Fig. S1†), suggesting that the passivation effect of PEI in the ZnO:PEI blends quickly reaches saturation. These results therefore verify that a reduction in exciton quenching at the ZnO/QD interface contributes to the higher EQE of the ZnO/PEI and ZnO:PEI devices. The similar EQE enhancement with both types of ETLs (i.e. ZnO/PEI and ZnO:PEI), despite the different extents of passivation by the PEI in the two cases, however suggests that surface passivation is not the leading factor behind the EQE enhancement.
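TRPL traces such as those in Fig. 2 are commonly compared through a bi-exponential fit; the sketch below, which uses synthetic decay data rather than the measured curves, illustrates how an amplitude-weighted average lifetime can be extracted.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential decay model commonly used for QD TRPL traces."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay standing in for a measured TRPL trace at 630 nm
t = np.linspace(0, 200, 400)                   # ns
y = biexp(t, 0.7, 8.0, 0.3, 35.0)
y += np.random.default_rng(0).normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, y, p0=(0.5, 5.0, 0.5, 30.0))
a1, tau1, a2, tau2 = popt
tau_avg = (a1 * tau1 + a2 * tau2) / (a1 + a2)  # amplitude-weighted lifetime
print(f"tau1 = {tau1:.1f} ns, tau2 = {tau2:.1f} ns, <tau> = {tau_avg:.1f} ns")
```

A longer amplitude-weighted lifetime on a given ETL corresponds to the slower decay, and hence stronger surface passivation, discussed above.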
Next, we test the EL stability of the devices under constant current driving at 20 mA cm⁻². Fig. 3(a and b) show the normalized luminance (relative to the initial luminance, L0) and the change in driving voltage (driving voltage at time t minus the initial driving voltage) versus time, respectively. The LT50 of the ZnO ETL and ZnO/PEI devices is 46 hours and 62 hours, respectively (from an L0 of 2500 and 3030 cd m⁻², respectively). In contrast, the LT50 of the ZnO:PEI devices is markedly longer, amounting to 140 hours in the case of the ZnO:PEI0.1 device and 292 hours in the case of the ZnO:PEI0.3 device (from an L0 of 3000 and 3250 cd m⁻², respectively). Introducing the PEI into the ZnO layer, instead of having it in a separate layer, therefore leads to a significant enhancement in the EL stability of QDLEDs, corresponding to a ∼10 times longer LT50 when extrapolated to an L0 of 100 cd m⁻². Table S1† summarizes the LT50 values of the devices and their EQEs.

There is a distinct difference between the driving voltage trends of the control device and the PEI-containing devices in Fig. 3(b), with the latter experiencing an initial decrease in the driving voltage before beginning to rise over time. In general, an increase in driving voltage during electrical driving can be attributed to the formation of space charges within the device layers; these create internal electric fields opposite in direction to the field produced by the external bias, which must therefore be increased to offset their effect and maintain the same current flow. As the difficulty of injecting holes into the QD EML, arising from the large energy difference between the HOMO of the HTL and the valence band of the QD, is a bottleneck for current flow in QDLEDs in general, one can expect the increase in voltage to be associated with hole accumulation and the build-up of hole space charges in the HTL near the HTL/QD interface. 10,40 The fact that the presence of the PEI initially alters the trend of the driving voltage therefore suggests that it may be slowing down the formation of these hole space charges. One also notes the different curvatures (i.e. trajectories) of the driving voltage trends of the PEI devices, where the increase in voltage seems to accelerate in the longer term. Surprisingly, this effect seems to be most significant in the case of ZnO:PEI0.3, which exhibits the fastest increase in driving voltage despite having the highest EL stability. That this increase in driving voltage does not seem to negatively affect device efficiency (as inferred from the stable EL) suggests that it may arise from space charges formed far away from the QD EML.

In order to investigate the root causes of the stability enhancement, we first examine the surface topography of the ETLs. Since in inverted devices the QD EML is coated on the ETL, differences in ETL surface topography or roughness may influence the morphological uniformity of the EML or subsequent layers and thus affect device stability. AFM scans were therefore conducted on 32 nm thick films of ZnO, ZnO/PEI or ZnO:PEI coated on ITO glass substrates. The images are shown in Fig. 4(a-d). The surface topographies of the ETLs are very similar, and roughness measurements indicate that all films have very smooth surfaces. The root-mean-square surface roughness (Rq) of the ZnO and ZnO/PEI films was 1.360 nm and 1.098 nm, respectively. This is consistent with previous studies showing that coating PEI on ZnO brings about some surface planarization. 28 The Rq of the ZnO:PEI0.1 and ZnO:PEI0.3 films was slightly lower than that of the ZnO control, at 1.106 nm and 1.271 nm, respectively. Their homogeneous morphology and similarity to the ZnO control suggest that the PEI is well dispersed in the ZnO matrix. This is consistent with the TRPL results, which show that introducing even a small amount of PEI affects the decay rates, indicating that the PEI passivates a significant fraction of ZnO surface defects; this points to strong interactions between the two materials and hence good dispersion. The very similar morphology and surface roughness of all ETLs, however, indicate that surface roughness modification by PEI cannot be the main source of the device stability enhancement in the case of the ZnO:PEI devices.
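The Rq values quoted above can be reproduced directly from raw AFM height maps; the snippet below shows the standard root-mean-square calculation on a synthetic height map. The first-order plane flatten and the data are illustrative, not the scans reported here.

```python
import numpy as np

def rms_roughness(height_nm: np.ndarray) -> float:
    """Root-mean-square roughness Rq (nm) of an AFM height map,
    after removing the mean plane by a first-order flatten."""
    ny, nx = height_nm.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # Least-squares plane fit (simple first-order flatten)
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)]).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, height_nm.ravel(), rcond=None)
    flattened = height_nm - (A @ coeffs).reshape(ny, nx)
    return float(np.sqrt(np.mean(flattened ** 2)))

# Synthetic 256 x 256 height map with ~1.3 nm roughness and a slight tilt
rng = np.random.default_rng(1)
z = rng.normal(0.0, 1.3, (256, 256)) + np.linspace(0, 5, 256)[None, :]
print(f"Rq = {rms_roughness(z):.3f} nm")
```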
Finding that morphological factors are unlikely to cause the EL stability enhancement of the ZnO:PEI ETL devices, and that the use of PEI in the ETLs significantly affects both the J-V characteristics (Fig. 1(b)) and the increase in driving voltage over time (Fig. 3(b)), we investigate whether the ETLs affect the electron-hole recombination zone or otherwise alter the distribution of electrons and holes in the HTL. We therefore fabricate devices that contain a thin luminescent marking layer in the HTL, which will emit light if excitons are created nearby. The 10 nm marking layer consisted of 10% bis[2-(4,6-difluorophenyl)pyridinato-C2,N](picolinato)iridium(III) (FIrpic) by volume doped into the CBP HTL. FIrpic is selected because its energy band structure is comparable to that of CBP, which minimizes perturbation of the charge distribution relative to the original devices. In addition, its high quantum yield and its luminescence in the 450-550 nm range, far from the QD emission band (at 632 nm), make its EL relatively easy to distinguish. The marking layer was placed 10 nm away from the QD/HTL interface in order to avoid quenching of the FIrpic via energy transfer to the QD layer. 41 The general structure of these devices therefore is: ITO/ETL/QD/CBP (10 nm)/CBP:FIrpic (10 nm)/CBP (30 nm)/MoO3 (5 nm)/Al (100 nm). Fig. 5(a) shows the general device structure, whereas Fig. 5(b) shows the EL spectra measured from QDLEDs incorporating the different ETLs while driven at a current density of 20 mA cm⁻². The spectrum of a ZnO device without a marking layer is also included for comparison. All spectra are normalized to the peak intensity of the QD emission band to facilitate comparison. The ZnO:PEI devices show significant emission from the FIrpic marking layer, indicating that a significant number of electrons can penetrate into the HTL and reach the marking layer, where they recombine with holes to produce EL. In stark contrast, the spectrum of the ZnO/PEI device shows only very weak (but discernible) FIrpic emission, indicating that the penetration of electrons into the HTL is much smaller in this case. The ZnO device shows no detectable FIrpic emission, its background over the 450-550 nm range being comparable to that of the control device without the marking layer. At any given total current density, a higher electron current across a given cross-sectional plane of the device requires a proportionally lower hole current across that plane; the hole current in the HTL near the QD interface must therefore be somewhat lower in the case of the ZnO:PEI devices. A deeper penetration of electrons into the HTL also points to a lower concentration of accumulated holes in the HTL at the QD/HTL interface, as the electrons would otherwise be annihilated (i.e. neutralized) by recombination with these holes. The fact that this effect is strongest in the case of the ZnO:PEI devices, and that these devices also exhibit a significantly higher EL stability, suggests that there is a correlation between the two phenomena.
In this regard, the higher stability is possibly associated with a lower concentration of holes in the HTL at the QD/HTL interface; at high concentrations, these holes would otherwise reduce the luminescence of the QD EML by Auger quenching, or degrade the HTL in the vicinity of the HTL/QD interface through exciton formation. 38 This effect is schematically illustrated in Fig. S4.† While the deeper penetration of electrons into the HTL in the case of the ZnO:PEI devices may at first glance seem inconsistent with the shifts in the J-V characteristics, which suggest that these ETLs make electron injection and transport more difficult, it is possible that restricting the electron supply leads to higher internal electric fields within the device that facilitate hole injection from the HTL into the QD layer and/or the penetration of electrons into the HTL, either of which would reduce hole accumulation in the HTL at the QD/HTL interface. For example, reducing the number of electrons in the QD layer would be expected to lead to a higher electric field across it, which may assist energy band bending at the QD/HTL interface and facilitate hole injection. The increased hole injection may, in turn, reduce the hole space charge at the QD/HTL interface, leading to a higher electric field across the HTL that helps electrons penetrate into it. Indeed, increasing the driving voltage has been found to alter the relative height of the FIrpic band, and not always in the same direction (for example, the height of the FIrpic band first increases with increasing driving voltage, but the trend then reverses at higher voltages, as shown in Fig. S2 and S3†), pointing to changes in the electric field distribution within the device and to the strong dependence of the extent of electron penetration into the HTL on them. In this regard, the ZnO:PEI0.3 device is more stable than its ZnO:PEI0.1 counterpart even though the results in Fig. 5(b) point to a deeper penetration of electrons in the latter; this may be due to differences in the internal electric field distribution in the two cases that facilitate hole injection from the HTL to the QD in the former, thus leading to a lower hole space charge in the vicinity of the QD/HTL interface. The higher EQE of the ZnO:PEI0.3 device relative to its ZnO:PEI0.1 counterpart at 20 mA cm⁻² (reflected in their L0 values of 3250 cd m⁻² versus only 3000 cd m⁻² at this current) supports this notion, as it points to better charge balance in the ZnO:PEI0.3 device, indicating that hole injection from the HTL to the QD may indeed be greater in this device. In a previous study, we found that using ZnO:PEI layers leads to an energy level shift of around 0.5 eV, 36 similar to that observed upon using a neat PEI layer. 42 The similar vacuum level shift in the two cases suggests that another factor must be behind the deeper penetration of electrons into the HTL upon using the ZnO:PEI versus the ZnO/PEI ETL, and behind the subsequent significant differences in their stability. We therefore attribute this behavior to differences in charge distribution and in the electric fields across the HTL and QD layers, evident from the observations in Fig. 5, S2 and S3.†
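To give a feel for the field magnitudes involved, the back-of-envelope sketch below divides the driving voltages quoted in the delayed-EL section below (for 20 mA cm⁻² operation) by the nominal stack thickness. This ignores built-in potentials, band offsets and the actual non-uniform charge distribution, so the numbers are order-of-magnitude only.

```python
# Order-of-magnitude average electric field across the ETL/QD/organic stack,
# using the nominal layer thicknesses and the 20 mA/cm^2 driving voltages
# quoted in the delayed-EL discussion. Not a device-physics simulation.

stack_nm = 32 + 30 + 50 + 5        # ETL + QD + CBP + MoO3, nominal thicknesses
drive_v = {"ZnO": 4.0, "ZnO/PEI": 5.5, "ZnO:PEI(0.3)": 8.0}

for name, v in drive_v.items():
    field_mv_cm = v / (stack_nm * 1e-7) / 1e6   # V/cm -> MV/cm
    print(f"{name:>12}: <E> ~ {field_mv_cm:.2f} MV/cm")
```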
To further verify that the use of PEI indeed reduces the supply of electrons, and thereby the conclusion that the deeper penetration of electrons into the HTL in the case of the ZnO:PEI devices must be the result of a higher electric field within the QDLED structure, we test the ETLs in unipolar electron-only devices (EODs). The structure of the EODs was similar to that of the QDLEDs except that the MoO3 layer was replaced by a 10 nm LiF layer. The general structure of these EODs therefore is: ITO/ETL/QD/CBP (30 nm)/LiF (10 nm)/Al (100 nm), as illustrated in Fig. 6(a). Under forward bias, i.e. when the ITO is at a more negative potential relative to the Al contact, the injection of holes from the Al contact is blocked by the LiF layer. The flow of current therefore proceeds only by electrons, which are injected at the ITO contact and collected at the Al contact. Fig. 6(b) shows the J-V characteristics of these EODs, each comprising one of the four ETL configurations. As can be seen, the current at any given voltage decreases in the order (from highest to lowest) ZnO > ZnO/PEI > ZnO:PEI0.1 > ZnO:PEI0.3, indicating that electron supply by the ETLs becomes harder in the same order, in line with what was inferred from the changes in the J-V characteristics of the QDLEDs in Fig. 1(b). The deeper penetration of electrons into the HTL in the case of the ZnO:PEI ETLs must therefore be the result of a higher internal electric field in these devices, induced by the more restricted supply of electrons. The almost parallel J-V traces and their linearity over the voltage range suggest that electrons can be injected into the CBP HTL from the QD layer and travel across it relatively easily.

Seeing that the ZnO:PEI ETLs lead to a greater EL stability as well as a deeper penetration of electrons into the HTL, we also carry out comparative delayed EL measurements on QDLEDs with ZnO:PEI0.3, ZnO/PEI and ZnO ETLs to glean additional insights into the influence of the various ETLs on the charge distribution within the devices. The delayed EL measurements are performed using the experimental setup described in previous work, 38 for which a schematic is provided in Fig. S5 of the ESI.† In the delayed EL technique, the QDLEDs are driven with a 500 ms forward bias square pulse of magnitude equal to the driving voltage required to achieve a current density of 20 mA cm⁻² and to allow the prompt EL to reach steady state (i.e. 4 V, 5.5 V and 8 V for the ZnO, ZnO/PEI and ZnO:PEI0.3 QDLEDs, respectively). Adjusting the forward bias voltage to obtain the same current density ensures that the number of charges injected during the forward bias pulse is similar in all devices. It also allows the delayed EL behaviour to be studied under the same electrical driving conditions as the EL stability tests. An optical chopper system is activated to record the EL starting 50 ms after the end of the forward bias pulse. This delay is sufficiently long for all allowed luminescent exciton relaxation processes to occur and is much larger than a typical QDLED electrical time constant, rendering electrical transient effects negligible. Any measured EL signal therefore arises from the radiative decay of excitons that are formed after the termination of the forward bias pulse.
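The drive-and-detect timing described above can be summarized in a few lines of code; the waveform sketch below uses the quoted pulse width, delay and drive voltages, while the time base and sampling rate are purely illustrative.

```python
import numpy as np

# Timing skeleton of the delayed-EL experiment: a 500 ms forward-bias pulse,
# then a 50 ms blanking interval before the chopper is fully open (which
# defines t = 0 of the delayed-EL traces). Voltages are those quoted for
# 20 mA/cm^2 driving of each device.
dt = 0.5                                  # ms per sample (illustrative)
t = np.arange(0, 700, dt)                 # ms, absolute experiment time

def drive_waveform(v_forward: float) -> np.ndarray:
    v = np.zeros_like(t)
    v[t < 500] = v_forward                # forward-bias square pulse
    return v

chopper_open = t >= 550                   # detection starts 50 ms after pulse end
for name, vf in {"ZnO": 4.0, "ZnO/PEI": 5.5, "ZnO:PEI(0.3)": 8.0}.items():
    v = drive_waveform(vf)
    print(f"{name:>12}: pulse {vf} V for 500 ms; "
          f"detection covers {chopper_open.sum() * dt:.0f} ms of the record")
```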
Fig. 7(a) depicts the delayed EL intensity versus time collected from the QDLEDs. In this figure, time = 0 on the x-axis corresponds to 50 ms after the end of the forward bias, the time at which the optical chopper is completely open. The data are normalized to the intensity at time = 0 to facilitate comparison. The delayed EL signal has the same decay rate in all devices, suggesting that the mechanistic process behind the delayed EL is the same in all of them. In general, the formation of excitons after the termination of the forward bias pulse in QDLEDs can arise from two processes: (i) recombination of residual (trapped/accumulated) charges in the various device layers, including the HTL, that become mobile and capable of recombination, producing luminescence after the forward bias pulse has ended; and/or (ii) triplet excitons created within the HTL that diffuse slowly and eventually reach and excite the QDs by energy transfer, either directly from those triplet states (by a Dexter process) or by a Förster process from singlet intermediates produced by triplet-triplet annihilation (TTA). To identify the main process behind the delayed EL, we investigate the effect on the delayed EL characteristics of applying a 200 ms long reverse bias pulse 650 ms after the opening of the optical chopper. It is known that in devices where process (i) is the dominant mechanism behind the delayed EL, the application of a reverse bias will lead to a permanent reduction in the delayed EL intensity, sometimes accompanied by EL spikes at the beginning and end of the reverse bias pulse due to the redistribution of charges, which provides opportunities for electron-hole recombination. 21 On the other hand, in devices where process (ii) is dominant, the reverse bias will result in only a temporary decrease in the delayed EL signal, due to electric-field-induced dissociation of excitons, which recovers completely after the reverse bias ends. Fig. 7(b-d) show the effect of applying a reverse bias pulse of two different magnitudes (5 V and 7.5 V) on the delayed EL signal from the same set of devices. The data are normalized to the delayed EL intensity at t = 0 in order to facilitate comparison. As can be seen, in addition to the temporary decrease in EL intensity during the pulse, the reverse bias leads to a permanent reduction in the intensity (observed over the 0.4-0.6 ms range in the figures) as well as a sharp delayed EL spike at the end of the pulse (observed at 0.4 ms in the figures), indicating that the delayed EL arises primarily from the recombination of residual charges (i.e. process (i)). In this regard, the reverse bias sweeps residual electrons and holes in the device layers towards the cathode and anode, respectively, away from the QD EML. When the reverse bias ends, some of these charges therefore move back towards each other, driven by diffusion and Coulombic forces, producing new electron-hole recombination events and hence the EL spike at the end of the reverse bias. The subsequent permanent reduction in the delayed EL intensity, on the other hand, is due to the permanent removal of residual charges by the reverse bias. As can be seen, this permanent reduction is larger in the case of the ZnO and ZnO/PEI devices relative to their ZnO:PEI0.3 counterpart (and is larger, although only marginally, in the ZnO device relative to the ZnO/PEI device). The larger reduction suggests that the residual charges in these devices are generally more mobile and can thus be swept out more easily by the reverse bias. By contrast, the smaller reduction in the case of the ZnO:PEI0.3 device points to the presence of a significant number of less mobile (i.e. strongly trapped) charges.
One can also see that increasing the magnitude of the reverse bias (from 5 V to 7.5 V) does not appreciably affect the magnitude of this reduction in the case of the ZnO/PEI or ZnO devices, again pointing to the more mobile nature of the residual charges in them, which makes it possible for even the lower reverse bias to sweep them out effectively. This is in contrast to what is observed in the case of the ZnO:PEI0.3 device, where the higher reverse voltage leads to a larger reduction in the delayed EL, reflecting the role that the reverse bias plays in detrapping the less mobile (i.e. strongly trapped) charges present in this case. Although it is not possible to determine the polarity or location of these trapped charges from the delayed EL characteristics, correlating these results with those from the FIrpic marking layer devices (Fig. 5) suggests that they may indeed be electrons in the CBP HTL. This is also in view of the fact that electrons have a much lower mobility than holes in CBP (the electron and hole mobilities are 3 × 10⁻⁴ cm² V⁻¹ s⁻¹ and 2 × 10⁻³ cm² V⁻¹ s⁻¹, respectively 43) and hence need higher reverse voltages to be detrapped from their sites in the HTL bulk. This would suggest that some of the electrons that penetrate into the HTL in the case of the ZnO:PEI devices remain deeply trapped there, which may explain the different trajectories of the driving voltage versus time trends in Fig. 3(b), where the ETLs that lead to a deeper penetration of electrons into the HTL (i.e. ZnO:PEI0.3 and ZnO:PEI0.1) eventually lead to a faster voltage rise relative to the ZnO/PEI ETL, which produces only limited electron penetration. Because in the case of the ZnO:PEI devices this electron space charge is located deep inside the HTL, away from the QD interface, it does not appreciably quench the luminescence of the QDs and therefore does not affect the EQE. By contrast, the more mobile charges in the ZnO and ZnO/PEI devices are likely holes in the CBP HTL. Regardless of the specific polarity or location of the charges, the delayed EL results clearly show that the ZnO:PEI0.3 ETL significantly alters the charge distribution in the device (much more so than the ZnO/PEI ETL), changing the nature of the residual charges that remain unrecombined from ones that are more mobile to ones that are more strongly trapped. It is also important to point out that the ZnO/PEI device shows an additional delayed EL spike at the beginning of the reverse bias pulse. That only this device shows this spike suggests that residual charges in the PEI layer may be involved in its appearance (for example, holes that reach the PEI layer and get trapped in it during the forward bias pulse may then be detrapped and pulled back towards the QD layer upon applying the reverse bias pulse, where they recombine with residual electrons).

Conclusions

In conclusion, we investigated the effect of adding PEI to ZnO to form a blended ZnO:PEI ETL, instead of using it as a separate layer, on the performance of inverted QDLEDs. Results show that the ZnO:PEI blended ETL improves device efficiency by more than 50% compared with the QDLED with only the ZnO ETL. The efficiency improvement is on par with that produced by the ZnO/PEI ETL. More remarkably, however, the ZnO:PEI ETL has a significant advantage in terms of improving device stability.
A device with a ZnO:PEI ETL containing 0.3 wt% PEI exhibits an LT50 of 153 735 hours (for an L0 of 100 cd m⁻²), almost 5× longer than a device with a ZnO/PEI ETL and 10× longer than the ZnO ETL control device. Tests on devices containing a luminescent marking layer reveal that the ZnO:PEI ETL results in a deeper penetration of electrons into the HTL in comparison to the ZnO/PEI or ZnO ETLs, likely due to changes in the electric field distribution that also facilitate hole injection from the HTL to the QD and reduce hole accumulation at the QD/HTL interface. Results from electron-only devices and delayed EL measurements show that the ZnO:PEI ETL alters the charge distribution in the HTL, changing the nature of the residual charges that remain unrecombined in the device from ones that are more mobile to ones that are more strongly trapped, corroborating the conclusion that the stability enhancement is associated with reduced charge accumulation at the QD/HTL interface. The findings show that ZnO:PEI ETLs can be used to enhance both the efficiency and the stability of QDLEDs. They also provide new insights into the importance of managing charge distribution in the charge transport layers for realizing highly stable QDLEDs, and new approaches to achieve that.

Author contributions

D. S. C. designed and conducted the experiments, and D. S. C. and H. A. wrote the paper. All authors contributed to the data analysis and scientific discussion.

Conflicts of interest

The authors declare no competing financial interest.
Association of fructose consumption with prevalence of functional gastrointestinal disorders manifestations: results from Hellenic National Nutrition and Health Survey (HNNHS)

The study aimed to assess the total prevalence of functional gastrointestinal disorders (FGID), and separately, irritable bowel syndrome (IBS) among adults and to determine their potential association with fructose consumption. Data from the Hellenic National Nutrition and Health Survey were included (3798 adults; 58·9 % females). Information regarding FGID symptomatology was assessed using self-reported physician-diagnosis questionnaires, the reliability of which was screened using the ROME III questionnaire in a sample of the population. Fructose intake was estimated from 24 h recalls, and the MedDiet score was used to assess adherence to the Mediterranean diet. The prevalence of FGID symptomatology was 20·2 %, while 8·2 % had IBS (representing 40·2 % of total FGID). The likelihood of FGID was 28 % higher (95 % CI: 1·03-1·6) and of IBS 49 % higher (95 % CI: 1·08-2·05) in individuals with higher fructose intake than with lower intake (3rd tertile compared with 1st). When area of residence was accounted for, individuals residing in the Greek islands had a significantly lower probability of FGID and IBS compared with those residing in the mainland and the main metropolitan areas, with islanders also achieving a higher MedDiet score and a lower added-sugar intake compared with inhabitants of the main metropolitan areas. FGID and IBS symptomatology was most prominent among individuals with higher fructose consumption, and this was most conspicuous in areas with lower Mediterranean diet adherence, suggesting that the dietary source of fructose, rather than total fructose, should be examined in relation to FGID.

Gastrointestinal symptoms (1) are quite common, often qualifying as functional gastrointestinal disorders (FGID) due to the frequent recurrence and chronic nature of the complaints, which are mainly attributed to the pharynx, oesophagus, stomach, biliary tract, intestines or anorectal area (2). These health conditions include various symptoms, such as heartburn, gastroesophageal reflux disease, dyspepsia/indigestion, nausea and vomiting, gas, bloating and irritable bowel syndrome (IBS) (3), all of which cause major discomfort and frequently result in work absenteeism (4). FGID is a serious issue for health providers and a major burden on health services across the globe, since reports have shown that an average of 40 % of the total human population is affected (5). At the same time, in West European countries, 20-50 % of FGID symptomatology is attributed to IBS (2), leading to an intensive search for the main risk factors contributing to total FGID symptomatology and to IBS specifically, with diet having been implicated in FGID symptomatology. Gut dysbiosis, the condition of reduced bacterial diversity and imbalance of the gut microbiota (6), stands among the dietary, genetic, lifestyle, psychological and environmental factors linked to FGID occurrence, as a potential cause or consequence of the symptomatology, with a clear distinction between the two yet to be established (7,8). Consequently, gut dysbiosis, which may lead to gut dysmotility and visceral hypersensitivity, is often believed to be part of the pathophysiological mechanism of FGID occurrence and is targeted by pharmaceutical and dietary interventions (9).
Overall, fructose consumption has been associated with many chronic diseases, such as non-alcoholic fatty liver disease, CVD and diabetes (10,11), as well as FGID, with great attention being given to the latter (12) due to its high prevalence. Recent studies have focused on fermentable oligosaccharides, disaccharides, monosaccharides and polyols (FODMAP) in relation to FGID onset or relief (4,13-15), with results being highly controversial with respect to disease onset. Fructose is widespread in fresh and processed foods; it is found in small quantities in fruits, vegetables and pulses, in conjunction with fibre, and in large quantities in processed foods, such as sugar-sweetened beverages. Some studies have correlated fructose intake with IBS (16,17), mostly through fructose malabsorption, whereas other studies failed to show such an association (18,19). A case-control study that assessed differences in habitual diet between individuals with IBS and healthy controls found that cases had higher fat and lower fructose and fibre intakes than controls (20), although it was not clear whether the findings were due to reverse causality, meaning that cases may have removed fructose- and fibre-containing foods from their diet because of their symptoms. Another case-control study failed to find any differences in the consumption of various food groups in patients with total FGID compared with healthy controls (21). Further adding to this controversy, a recent study showed that higher adherence to the Mediterranean diet, which is relatively high in fructose from fruits, vegetables and pulses, resulted in a decrease in FGID prevalence (18). The Western diet is also relatively high in fructose, but it differs significantly from the Mediterranean diet, since it contains a large percentage of highly processed foods, which are the main sources of fructose in this dietary pattern (22). Specifically, the Western diet is characterised by beverages sweetened with high-fructose corn syrup and by fruit drinks (often sweetened with apple juice, which, at 2·2:1, has a fructose-to-glucose ratio higher than that of high-fructose corn syrup), juices or nectars, all of which provide large proportions of free and added fructose (23), whereas fructose in the Mediterranean diet is derived from whole fruit, vegetables and legumes (24,25). At this point, it is important to mention that high-fructose corn syrup (characterised as isoglucose or glucose-fructose syrup in the EU) is commonly used in the USA with a fructose-to-glucose ratio exceeding 1:1, sometimes exceeding the 55 % fructose level generally recognised as safe (26,27), whereas in the EU glucose-fructose syrups contain significantly lower amounts of fructose, ranging from 5 % to 50 % (28-30). However, the EU has recently relaxed prior restrictions on the import of high-fructose corn syrup, and hence the higher ratio may now be the one used in food manufacturing (31). Western diet foods have also been implicated as pro-inflammatory and may increase the risk of IBS (32). These results raise the question of whether total fructose intake induces and/or enhances FGID symptoms, or whether the foods that are rich sources of it also play a role.

Therefore, the aim of the present study was primarily to examine FGID prevalence and its association with fructose consumption, using data from a national nutrition and health study of the Greek population. The prevalence of IBS symptomatology, a specific FGID, was further evaluated in relation to fructose intake.
Study design

Data from the Hellenic National Nutrition and Health Survey (HNNHS) were used to define the prevalence, and the associated dietary and socio-economic factors, of gastrointestinal disorders in general, with an additional focus on IBS symptomatology. The HNNHS followed a multistage stratified design, based on age, sex and area of residence as provided by the Hellenic Statistical Authority (2011 Census) (33). It took place from 1 September 2013 to 31 May 2015 and collected health and dietary data of non-institutionalised individuals aged ≥ 6 months living in Greece. Non-Greek-speaking citizens, pregnant and lactating women, members of the armed forces, institutionalised individuals and people unable to provide informed consent (unless assisted by a first-degree relative) were excluded from the study. Details regarding sampling and design have been described elsewhere (33). Briefly, multistage sampling stratification was performed by region, sex and age group. The final sample was representative by geographical area as follows: mainland (21·8 % of the sample), islands (9·6 % of the sample) and the two major municipal centres (Attica and Central Macedonia; 68·6 % of the sample). Data collection was performed via standardised in-home interviews by trained personnel. Sampling details and distributions have been previously published (33). For the current study, data from a total of 3798 (40·6 % males) Greek adults were used.

The Ethics Committee of the Department of Food Science and Human Nutrition of the Agricultural University of Athens and the Hellenic Data Protection Authority approved the study; in addition, all staff members signed confidentiality agreements and all adult participants signed a consent form to participate.

Dietary assessment

The methods used in the HNNHS for the dietary assessment are in accordance with the European Food Safety Authority's recommendations for the harmonisation of data across member states of the European Union (34). In summary, the aim was to collect two 24 h recalls on non-consecutive days from each participant, using the USA Department of Agriculture's (35) Automated Multiple Pass Method (36). The first recall was obtained through a Computer Assisted Personal Interview and the second by telephone, with the help of validated food atlases and photographs of standardised household measures (cups, grids and plates). These were used to accurately determine portion sizes, since the pictures corresponded to specific gram amounts of the foods reported and selected during the first interview, and a copy was given to the participants to use during the phone interview that followed; details are described elsewhere (33). The nutritional value of all foods and drinks consumed was calculated using the Nutrition Data System for Research, a Windows-based dietary analysis program designed by the Nutrition Coordinating Center at the University of Minnesota for the collection and analysis of 24-hour dietary recalls, food records, menus and recipes (37), and the mean intake of the two days was then calculated for the estimation of total energy and macronutrient intakes. Extreme over- and under-reporters were excluded (n 102; < 600 kcal per day and > 6000 kcal per day, respectively).
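As a concrete illustration of the recall-processing step just described, the sketch below averages two 24 h recalls per participant and removes extreme reporters using the stated cut-offs. The column names and the tiny table are hypothetical placeholders, not HNNHS variables.

```python
import pandas as pd

# Hypothetical long-format recall table: one row per participant per recall day
recalls = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3],
    "energy_kcal":    [1850, 2100, 540, 610, 2400],
    "fructose_g":     [18.0, 22.0, 5.0, 6.5, 30.0],
})

# Mean of the two non-consecutive 24 h recalls per participant
mean_intake = recalls.groupby("participant_id").mean()

# Exclude extreme over- and under-reporters (<600 or >6000 kcal/day),
# mirroring the exclusion rule described in the text
valid = mean_intake[(mean_intake["energy_kcal"] >= 600) &
                    (mean_intake["energy_kcal"] <= 6000)]
print(valid)
```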
Total sugars were differentiated from mean carbohydrate intake. Total sugars are defined as the sum of all free mono- and disaccharides, including glucose, fructose, galactose and lactose, as well as sucrose and maltose. For the purpose of this paper, total fructose was subtracted from total sugar intake in order to assess the effect of fructose intake on FGID (and IBS specifically) while adjusting for other sugar intake (21). Added sugar, defined as 'all sugars and syrups added to foods during processing or preparation, excluding those naturally found in food', was also computed. All macronutrients were computed in relation to individual mean energy consumption (% of total energy). Finally, the MedDiet score was calculated to address the influence of the Mediterranean diet on FGID. The MedDiet score comprises eleven food components that describe the composition of the Mediterranean diet well. The final score ranges from 0 to 55, with 0 being no adherence and 55 perfect adherence to the Mediterranean diet pattern (38). Details regarding the MedDiet score calculation have been provided elsewhere (39).

Gastrointestinal symptoms

Data were collected through a valid self-reported Medical History Questionnaire (33,40). Specifically, individuals were asked whether they had previously been diagnosed with any FGID condition by a physician, such as gastroesophageal reflux disease, IBS or any other abdominal discomfort. The official Greek translation of the ROME III questionnaire for adults, designed for clinical practice and research, was used in a random sample of the study population that consented, in order to assess the reliability of self-response. ROME III is a validated method used in clinical settings to determine the presence of functional gastrointestinal disorders (41). ROME III, rather than ROME IV, was used because the latter had not yet been developed when the HNNHS was conducted (42). ROME III nevertheless remains valid, since a recent meta-analysis found that ROME IV may be less suitable for epidemiological surveys due to the more restrictive criteria it employs (43). The Greek version of the ROME III questionnaire was developed following the ROME Foundation's official guidelines and can be accessed through the Foundation's official website (https://theromefoundation.org/). Results from ROME III were categorised as the percentage of individuals with any FGID, and the percentage with IBS specifically.

To assess the validity of self-reported FGID status, a sensitivity analysis was conducted (Fig. 1) comparing the results obtained from the two instruments: the ROME III questionnaire and the questions pertaining to having been diagnosed (detected versus reported FGID condition). All individuals detected by the ROME III questionnaire as having at least one FGID had also reported having been diagnosed by their physician, while a remaining 8·3 % of the total population reported having been diagnosed by a physician but were not detected by ROME III (Fig. 1). Based on the high agreement between physician diagnosis and ROME III categorisation, data from the self-reported condition were used in the analysis to estimate FGID prevalence and its association with fructose intake, accounting for other socio-demographic, lifestyle and dietary parameters (reported below). The prevalence of IBS specifically was evaluated separately due to the high prevalence of this bowel disorder (21,43).
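The agreement check between the two instruments described above amounts to a simple cross-tabulation; the sketch below shows the idea on hypothetical 0/1 flags (the variable names are ours).

```python
import pandas as pd

# Hypothetical per-participant flags: 1 = FGID indicated, 0 = not indicated
df = pd.DataFrame({
    "rome_iii_detected": [1, 1, 0, 0, 0, 1, 0, 0],
    "self_reported_dx":  [1, 1, 1, 0, 0, 1, 0, 1],
})

# Cross-tabulate ROME III detection against self-reported physician diagnosis
table = pd.crosstab(df["rome_iii_detected"], df["self_reported_dx"],
                    rownames=["ROME III"], colnames=["Self-report"])
print(table)

# Share reporting a diagnosis without being detected by ROME III
reported_not_detected = ((df["self_reported_dx"] == 1) &
                         (df["rome_iii_detected"] == 0)).mean()
print(f"Reported but not detected: {100 * reported_not_detected:.1f} % of sample")
```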
Socio-demographic and lifestyle data

Various socio-demographic and lifestyle characteristics were collected through Computer Assisted Personal Interview, including sex, age, marital status and employment status. Lifestyle data, including sleeping habits (mean hours per day), smoking habits (current, ex- or never-smoker) and coffee and alcohol consumption frequency, were also reported. Mental health was assessed by evaluating the presence of depressive symptoms using the Patient Health Questionnaire-9; further details have been provided elsewhere (44). Reported weight (in kg) and height (in metres) were used to calculate BMI as weight/height² (kg/m²), and the participants' weight status was categorised according to the WHO classification (45). Due to the borderline low (none < 17 kg/m²) and very small percentage of underweight individuals, these were grouped with those of normal weight when describing the sampled population. For further analysis, overweight and obese individuals were also grouped, creating a binary variable for weight status (adults with healthy weight versus those with overweight or obesity) (46). The International Physical Activity Questionnaire, adapted for adults and for the elderly (47), was used to estimate the physical activity (PA) level of the participants. According to the questionnaire's results, all participants were categorised into four categories (sedentary, light, moderate or high PA), with the sedentary category comprising individuals not meeting the light PA criteria.

Statistical methodology

Survey design analysis was used to present socio-demographic and lifestyle characteristics, weighted by area (according to the sampling structure and the 2011 Hellenic population census), and categorical variables are presented as relative frequencies. Continuous variables were tested for normality of their distribution using P-P and kernel-density plots and are presented as mean ± standard deviation when normally distributed and as median (25th-75th percentile) when skewed. Between-group differences were tested using the Wilcoxon rank-sum test (for skewed variables), the two-sample t test (for normally distributed variables) or Pearson's χ² test (for categorical variables). Pearson's χ² tests were used to determine between-category distribution differences, and the adjusted Wald test was performed post hoc to determine within-group differences. Although multiple tests were performed, raw P values are reported. Multiple logistic regression models were used to estimate the odds of at least one FGID, and of IBS specifically, by tertile of fructose consumption. Specifically, two models were used: one minimally adjusted for age and sex, and another fully adjusted for all a priori known variables associated with FGID and for those found to differ significantly between the presence and absence of an FGID condition during preliminary analysis. The final model included fructose consumption, sex, age (per 5-year increase), marital status, saturated fat consumption, depression and smoking status derived from the preliminary analysis, with MedDiet score categories and energy intake as important factors related to the outcome. Post-estimations of IBS manifestation were performed by area (main metropolitan areas, Islands & Crete and remaining mainland), due to significant geographical variation. Multicollinearity was checked for all predictors in all models via the variance inflation factor (VIF) and Spearman's rank correlation.
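The multicollinearity screen described above can be reproduced with statsmodels; the sketch below computes a VIF per predictor on hypothetical covariate data (the < 10 rule of thumb is the one the authors cite).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical covariate matrix standing in for the model predictors
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "fructose_tertile": rng.integers(1, 4, 500),
    "age_5yr":          rng.normal(10, 2, 500),
    "sat_fat_pct":      rng.normal(11, 3, 500),
    "meddiet_score":    rng.normal(30, 6, 500),
})

Xc = sm.add_constant(X)
vifs = {col: variance_inflation_factor(Xc.values, i)
        for i, col in enumerate(Xc.columns) if col != "const"}
for name, v in vifs.items():
    print(f"VIF({name}) = {v:.2f} -> {'ok' if v < 10 else 'collinear?'}")
```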
The absence of multicollinearity between the predictors can be accepted when the VIF is < 10 and no moderate or strong correlation is found between covariates (48-50); here, all r ≤ 0·39. The predicted probability of IBS was depicted in relation to the mean MedDiet score and added sugar intake for each area (descriptive analysis, secondary to the aims). Significance was set at alpha = 5 %. Database cleaning and statistical analysis were performed using Stata 17.0 (StataCorp, Texas).

Results

Of the 3798 participants, 765 (20·1 %) experienced an FGID condition and 8·2 % were diagnosed with IBS specifically (40·1 % of those with FGID) (Table 1). Self-reports of physician diagnosis regarding the presence of FGID were used in conjunction with the results based on the ROME III questionnaire, as the two methods were found to be highly correlated, as explained in the Methods (Fig. 1). A higher prevalence of FGID and/or IBS was detected in females, in adults aged 50 years or more, and among divorced/separated or widowed individuals (P for all < 0·001). Also, individuals living in the two main metropolitan areas of Greece (Attica and Central Macedonia) had the highest proportion of FGID and IBS conditions, whereas individuals living on the islands had the lowest. Lastly, a significant difference in FGID (and IBS specifically) was found in relation to total family salary status, with individuals reporting at least two salaries having a significantly lower prevalence.

Table 2 presents the weight status and lifestyle characteristics of the study population by total FGID symptoms and IBS status. A significantly higher prevalence of any FGID symptom, and of IBS specifically, was found in individuals (i) with sleeping problems, (ii) with chronic stress, (iii) with depression and (iv) who were current or ex-smokers (P for all < 0·001). When FGID prevalence was assessed for each of the aforementioned statuses, it was found to be two to three times higher among those experiencing the respective conditions (35·9 % for individuals with sleeping problems compared with 15·6 % for those without; 43·1 % for individuals with chronic stress compared with 14·5 % for those without; and 30·9 % for individuals with depression compared with 16·3 % for those without). On the other hand, individuals with high PA levels had fewer FGID symptoms than those with sedentary or low PA levels.

Macronutrient intake by FGID status, and by IBS specifically, is presented in Table 3. Mean fructose intake was higher among individuals with both FGID and IBS, with no observed difference within each tertile in terms of FGID symptomatology. Added sugar intake was higher among those with IBS than those without, within the 1st tertile of added sugar consumption. Mean total PUFA and total sugar intake (excluding fructose) were also higher in individuals with FGID symptomatology than in those without. Sugars specifically were also higher within the 2nd tertile of their consumption for individuals with IBS than for those without (P for all < 0·050). No other 'raw' significant differences were observed between groups.
Finally, Fig. 2 depicts the results of the minimally and fully adjusted logistic regression models for those with FGID compared with those without, and for those with IBS specifically, in relation to tertile of fructose intake. The covariates used in the minimally adjusted model were age and sex, and in the fully adjusted models age (per 5-year increase), sex, marital status, saturated fat consumption, depression, smoking status, MedDiet score categories and energy intake, along with fructose consumption tertiles. These covariates were all found to be weakly (0·2 ≤ r ≤ 0·39) or very weakly (0 ≤ r ≤ 0·19) correlated with each other, and the mean VIF value was lower than 5·06 for all models used in the analyses, indicating the absence of multicollinearity between variables (48-50). Overall, the likelihood of FGID was 1·28 times higher for individuals in the highest fructose consumption tertile (Q3) compared with the lowest (Q1) in the fully adjusted model (OR 1·28; 95 % CI: 1·03, 1·60); no association was found in the minimally adjusted model. For IBS, the likelihood was higher in both models, increasing from 1·38 times in the minimally adjusted to 1·49 times in the fully adjusted model among those with higher fructose intakes (OR 1·38; 95 % CI: 1·03, 1·85 and OR 1·49; 95 % CI: 1·08, 2·05, respectively). Other factors that were independently associated with the outcome included sex, depression, marital status and smoking status. Being male, having no reported depressive symptoms, being married and never having smoked were associated with a lower likelihood of any FGID manifestation (protective). The same was observed for IBS, except that age had no significant effect. Furthermore, increased MedDiet score adherence decreased the likelihood of FGID symptoms and of IBS. The odds ratios for the likelihood of any FGID or IBS manifestation from the minimally and fully adjusted models are presented in Fig. 2.
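For readers wanting to reproduce this type of model, the sketch below fits a fully adjusted logistic regression with statsmodels and converts coefficients to odds ratios with 95 % CI. The variable names and the simulated data are placeholders, not HNNHS data, so the printed ORs will hover around 1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (the real analysis used weighted HNNHS variables)
rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "fgid":             rng.binomial(1, 0.2, n),
    "fructose_tertile": rng.integers(1, 4, n),
    "age_5yr":          rng.normal(10, 2, n),
    "male":             rng.integers(0, 2, n),
    "meddiet_score":    rng.normal(30, 6, n),
    "energy_kcal":      rng.normal(2100, 450, n),
})

model = smf.logit(
    "fgid ~ C(fructose_tertile) + age_5yr + male + meddiet_score + energy_kcal",
    data=df,
).fit(disp=False)

# Odds ratios with 95 % confidence intervals
or_table = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
or_table.columns = ["OR", "2.5 %", "97.5 %"]
print(or_table.round(2))
```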
Discussion

The main finding of the present study is that dietary fructose intake was associated with an increased likelihood of any FGID, and of IBS specifically, irrespective of sex, depressive symptomatology and lifestyle habits, among adults with higher intakes relative to total energy consumption. In addition, one in five adults had experienced at least one FGID condition previously diagnosed by a physician, representing approximately 1·8 million Greek adults, 40 % of whom had IBS. FGID affected mostly females, residents of the main metropolitan areas, separated or divorced adults and those of higher education. These findings are of importance since, worldwide, FGID affects an average of 30 % of the adult population residing in Western countries, with 20-50 % attributed to IBS (2). Moreover, 40 % of the global population has been found to have suffered from an FGID at some point in their lives, imposing a great burden on national economies and health systems and having a great impact on individuals' quality of life (5,51). It is noteworthy that healthcare costs for patients with IBS are estimated at $2 billion per year in China and £45·6-£200 million per year in the UK (52). This is an area that needs to be addressed, since in the present study a high prevalence of FGID was observed, especially among individuals with higher fructose intake.

It has been proposed that high fructose intakes, or high fructose concentrations, can disrupt metabolic processes and trigger organ malfunction, and can thereby contribute to FGID symptoms (14,53). It is noteworthy that only 50 % of healthy individuals can fully absorb 25 g of fructose given as a 10 % solution, causing large variation in how high fructose consumption translates at the individual level (54,55). Unabsorbed free fructose can react with incompletely digested proteins and may form advanced glycation end products (dAGEs) in the intestine (56-58). It has been hypothesised that these dAGEs are associated with inflammatory diseases and gastrointestinal, respiratory and tissue distress (59,60), and it has been proposed that these products may trigger the mechanism behind these malfunctions. In addition, recent studies have pointed to potential genetic factors in fructose malabsorption. For example, the carbohydrate response element-binding protein might play a role in fructose metabolism and tolerance, further adding to the variation in potential malabsorption (55,61). The complexity of the mechanisms involved, and the inherent uncertainty, have led to the suggestion that fructose restriction may be a dietary solution for IBS symptomatology and for relief from FGID symptoms (16,62-64), although the literature remains scarce. Given these reactions and the potential fructose-malabsorption mechanism, the fructose concentration of widely consumed processed foods on the market should also be considered. Fructose content varies among sweetened beverages and processed foods in general, with a 100 % apple juice beverage having an average fructose-to-glucose ratio of 2:1 (56), but it commonly remains higher per portion than the amounts found in fruits and vegetables (35). Natural foods are characterised by small amounts of fructose, on average about 5-8 % of their weight (e.g. 4·05 % fructose content in pineapples and 8·65 % in green grapes), sometimes resulting in an excess of free fructose over glucose (65). For example, distinctive fruits providing unpaired
fructose (excess free fructose; EFF) are apples (∼4·3 g EFF per medium-sized apple), pears (∼5·9 g EFF per medium-sized pear), mangoes (∼4·4 g EFF per medium-sized mango), watermelon (∼2·8 g EFF per diced 8-oz cup) and green grapes (∼1 g EFF per 100 g) (35). In any case, when raw natural foods containing fructose are consumed, the fructose is delivered along with water, fibre, antioxidants and various other whole-food constituents, which combined result in slow gastrointestinal absorption and only a minor increase in circulating fructose (35,66).

Expanding on the effects that the fructose source may have, the potential preventive effect of the Mediterranean diet on FGID should also be considered. The present study showed that although the likelihood of any FGID was associated with higher fructose intake, the predicted probability decreased with higher adherence to the Mediterranean diet. In particular, islanders had a lower probability of IBS and a higher adherence to the Mediterranean diet compared with individuals residing in the main metropolitan areas and the mainland. The association of fructose with increased FGID symptomatology may therefore be related to fructose ingested mainly from processed food. Although this cannot be directly revealed from the present analysis, it is recommended that future studies separately address natural versus processed fructose sources, based on this study's observations. It is noteworthy that, over the past several years, the traditional Mediterranean diet in Greece has transitioned towards a more Westernised one (21,46), and this may partly explain the association observed between FGID and higher fructose intake.

Another explanation of the association between fructose consumption and FGID could be the potential malabsorption observed with high dietary fructose intakes, mostly from processed foods, as other studies have reported (16,67). Specifically, higher free fructose intake could lead to a lower abundance of microbes beneficial for carbohydrate metabolism (68) in the gut microbiota and may trigger general gastrointestinal discomfort, including that of IBS, due to its slow absorption leading to an increased osmotic load and fermentation, especially in the presence of visceral hypersensitivity (12,69,70). The specific mechanisms, however, were not addressed by this study.

Other factors in our study that were found to be correlated with FGID include age, sex, smoking, PA, sleep disturbances, stress and other psychological factors (2). Our results are in accordance with those of another study, conducted in Mexico, which observed that women were at a 50 % higher risk of having an FGID (71). The correlation of sex with FGID symptomatology was observed, along with other factors, in further studies too (52,72). A large study of 27 949 French adults from the general population reported that IBS patients were more likely to be current smokers, younger, single, with low income and following a healthier diet (73). Regarding stress, depression and sleeping quality, a recent study by Hwang S-K and co-workers found significant associations between depression, anxiety, stress and poor sleeping quality and the severity and occurrence of FGID symptoms (74).
Limitations and strengths

Results must be interpreted with caution since the ROME III questionnaire, although used as a screening tool in epidemiological studies, was not designed to detect structural disorders and abnormalities and is hence, on its own, an insufficient tool for diagnosis. Another limitation is the potential for Type I error due to the multiple tests performed, despite the associations found in the adjusted analysis. Results must therefore be interpreted carefully; given the Bonferroni-corrected threshold, a P value just below 0·001 is not precise enough to be conclusive. Also, this study is of retrospective design, with both exposure and outcome assessed simultaneously, and therefore true causal effects on the outcome (total FGID and IBS symptomatology) cannot be extracted. Longitudinal data would strengthen the current results, as would examination of the type of fructose (unpaired, paired free and paired in sucrose) in relation to FGID/IBS. In addition, even though there was a significant difference in FGID and IBS prevalence between females and males, the data analysis was not stratified by sex, due to the limited ROME III response rate, the long structure of the questionnaire being the main reason for the low response. The limited number of individuals in the subgroup analysis led to wide confidence intervals, and although a difference was detected, the true value may vary. This, however, does not compromise the strength of the study, which combined ROME III results with reported information. The study has several strengths, as it employed a validated questionnaire to assess the presence of FGID symptoms. All individuals identified were aware of their status, and some who were not so classified reported having been previously diagnosed; this underlines the importance of using multiple assessment methods to increase the sensitivity of identifying individuals with FGID symptoms. Another strength is that a nationally representative study was used to examine not only fructose but also its relation to the Mediterranean diet, a well-established dietary pattern. This is essential for differentiating whether overall high fructose intake enhances FGID symptoms (and IBS specifically), or whether fructose from processed sources, in the context of a more Western type of diet, is responsible.

Conclusion

This study adds to the knowledge regarding the association between FGID symptomatology and fructose consumption, showing that the main determinants are largely modifiable, including high fructose intake in areas with lower Mediterranean diet adherence. Population-specific programmes examining the specific food sources contributing to fructose consumption, relative to other dietary patterns, may help decrease the prevalence of FGID, and of IBS specifically.

Fig. 1. Hierarchy graph of FGID distribution by ROME III and/or reported diagnosis. FGID by ROME III: percentage of individuals found to have at least one functional gastrointestinal disorder (FGID) based on the ROME Association criteria. FGID by reported diagnosis: percentage of individuals who reported having been diagnosed with at least one type of FGID by a physician. % Reported but not detected by ROME III: percentage of individuals who reported having been diagnosed by a physician but were not detected by ROME III.

Fig. 3. Predicted probability of IBS by area of residence in males and females, with information on mean MedDiet score by area. IBS, irritable bowel syndrome.
Socio-demographic characteristics in the study population by gastrointestinal (GI) and IBS status Values are weighted for population distribution and living area with primary sampling unit (PSU): household (family or another bond under the same household). Weighted percentages (%) are depicted for the total population and percentages by variable of interest. P values are based on Pearson χ2 tests for within-group total difference. Total n of participants for each category is not shown due to the use of the svy: Stata command to obtain the percentages shown. Significance set at alpha = 5 %. Raw P values are provided. The Type I error threshold following Bonferroni correction at the 5 % level is 0•05/n, which is approximately 0•0007; a P value < 0•001, although significant, is not precise enough to conclude. * Existence of at least one FGID or IBS symptom, based on Rome III diagnostic questionnaire (n 168) and self-reports in HNNHS (n 3630) between each specific category (e.g., % Table 2 . Anthropometric and lifestyle characteristics of the study population by gastrointestinal (GI) status
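As a quick check of the Bonferroni arithmetic in the table notes (a sketch; the exact number of tests is an assumption chosen so that 0•05/n ≈ 0•0007):

```python
# Bonferroni-corrected significance threshold: alpha / n_tests.
# n_tests = 70 is an assumed value consistent with 0.05/70 ~ 0.0007.
alpha = 0.05
n_tests = 70
threshold = alpha / n_tests
print(f"corrected threshold = {threshold:.5f}")        # ~0.00071
print("is P = 0.001 conclusive?", 0.001 < threshold)   # False
```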
2023-05-19T06:17:40.949Z
2023-05-18T00:00:00.000
{ "year": 2023, "sha1": "f1cf8f4fbda9bb368363752c91f4499d2ce03b37", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/1CB1CDFD3BB2185BED81C166A5B738A2/S0007114523001198a.pdf/div-class-title-association-of-fructose-consumption-with-prevalence-of-functional-gastrointestinal-disorders-manifestations-results-from-hellenic-national-nutrition-and-health-survey-hnnhs-div.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8d269aa84912f682b513f236052f1b3b3470ca6a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
46760563
pes2o/s2orc
v3-fos-license
Multimode optical feedback dynamics in InAs/GaAs quantum dot lasers emitting exclusively on ground or excited states: transition from short- to long-delay regimes The optical feedback dynamics of two multimode InAs/GaAs quantum dot lasers emitting exclusively on either the ground or the excited lasing state is investigated. The transition from long- to short-delay regimes is analyzed, while the boundaries associated with the birth of periodic and chaotic oscillations are unveiled to be a function of the external cavity length. The results show that, depending on the initial lasing state, different routes to chaos are observed. These results are of importance for the development of isolator-free transmitters in short-reach networks. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (140.5960) Semiconductor lasers; (250.0250) Optoelectronics; (190.3100) Instabilities and chaos. References and links 1. C. F. Lam, H. Liu, and R. Urata, “What devices do data centers need?” in Optical Fiber Communications Conference and Exhibition (OFC) of 2014, OSA Technical Digest Series (Optical Society of America, 2014), paper M2K.5. 2. Cisco white paper, “The Zettabyte Era: Trends and Analysis” (Cisco, 2016). 3. D. Bimberg, “Quantum dot based nanophotonics and nanoelectronics,” Electron. Lett. 44, 390 (2008). 4. G. Eisenstein and D. Bimberg, eds., Green Photonics and Electronics (Springer, 2017). 5. M. T. Crowley, N. A. Naderi, H. Su, F. Grillot, and L. F. Lester, “GaAs-based quantum dot lasers,” in Advances in Semiconductor Lasers, J. J. Coleman, A. Bryce, and C. Jagadish, eds. (Academic Press, 2012), pp. 371–417. 6. M. Grundmann, ed., Nano-Optoelectronics, NanoScience and Technology (Springer, 2002). 7. K. Nishi, K. Takemasa, M. Sugawara, and Y. Arakawa, “Development of quantum dot lasers for data-com and silicon photonics applications,” IEEE J. Sel. Topics Quantum Electron. 23, 1–7 (2017). 8. A. Y. Liu, S. Srinivasan, J. Norman, A. C. Gossard, and J. E. Bowers, “Quantum dot lasers for silicon photonics,” Photonics Res. 3, B1 (2015). 9. S. Chen, W. Li, J. Wu, Q. Jiang, M. Tang, S. Shutts, S. N. Elliott, A. Sobiesierski, A. J. Seeds, I. Ross, P. M. Smowton, and H. Liu, “Electrically pumped continuous-wave III-V quantum dot lasers on silicon,” Nat. Photonics 10, 307–311 (2016). 10. Ranovus Inc., “Ranovus announces availability of world’s first quantum dot multi-wavelength laser and silicon photonics platform technologies to create a new cost and power consumption paradigm for DCI market,” (Ranovus, 2016), http://ranovus.com/worlds-first-quantum-dot-multi-wavelength-laser-and-silicon-photonics-platformtechnologies-for-dci-market/. 11. Y. Urino, N. Hatori, K. Mizutani, T. Usuki, J. Fujikata, K. Yamada, T. Horikawa, T. Nakamura, and Y. Arakawa, “First demonstration of athermal silicon optical interposers with quantum dot lasers operating up to 125 °C,” J. Lightw. Technol. 33, 1223–1229 (2015). 12. N. Zhuo, J.-C. Zhang, F.-J. Wang, Y.-H. Liu, S.-Q. Zhai, Y. Zhao, D.-B. Wang, Z.-W. Jia, Y.-H. Zhou, L.-J. Wang, J.-Q. Liu, S.-M. Liu, F.-Q. Liu, Z.-G. Wang, J. B. Khurgin, and G. Sun, “Room temperature continuous wave quantum dot cascade laser emitting at 7.2 μm,” Opt. Express 25, 13807–13815 (2017). 13. A. Spott, E. J. Stanton, N. Volet, J. D. Peters, J. R. Meyer, and J. E.
Bowers, “Heterogeneous integration for mid-infrared silicon photonics,” IEEE J. Sel. Top. Quantum Electron. 23, 1–10 (2017). 14. D. O’Brien, S. Hegarty, G. Huyet, J. McInerney, T. Kettler, M. Laemmlin, D. Bimberg, V. Ustinov, A. Zhukov, S. Mikhrin, and A. Kovsh, “Feedback sensitivity of 1.3 μm InAs/GaAs quantum dot lasers,” Electron. Lett. 39, 1819 (2003). 15. K. Mizutani, K. Yashiki, M. Kurihara, Y. Suzuki, Y. Hagihara, N. Hatori, T. Shimizu, Y. Urino, T. Nakamura, K. Kurata, and Y. Arakawa, “Optical I/O core transmitter with high tolerance to optical feedback using quantum dot laser,” in 2015 European Conference on Optical Communication (ECOC) (2015), paper 0263. 16. D. Arsenijević and D. Bimberg, “Quantum-dot lasers for 35 Gbit/s pulse-amplitude modulation and 160 Gbit/s differential quadrature phase-shift keying,” Proc. SPIE 9892, 9892 (2016). 17. C. Wang, B. Lingnau, K. Lüdge, J. Even, and F. Grillot, “Enhanced dynamic performance of quantum dot semiconductor lasers operating on the excited state,” IEEE J. Quantum Electron. 50, 723–731 (2014). 18. Z.-R. Lv, H.-M. Ji, X.-G. Yang, S. Luo, F. Gao, F. Xu, and T. Yang, “Large signal modulation characteristics in the transition regime for two-state lasing quantum dot lasers,” Chinese Phys. Lett. 33, 124204 (2016). 19. B. J. Stevens, D. T. D. Childs, H. Shahid, and R. A. Hogg, “Direct modulation of excited state quantum dot lasers,” Appl. Phys. Lett. 95, 061101 (2009). 20. D. Arsenijević, A. Schliwa, H. Schmeckebier, M. Stubenrauch, M. Spiegelberg, D. Bimberg, V. Mikhelashvili, and G. Eisenstein, “Comparison of dynamic properties of ground- and excited-state emission in p-doped InAs/GaAs quantum-dot lasers,” Appl. Phys. Lett. 104, 181101 (2014). 21. F. Grillot, B. Dagens, J.-G. Provost, H. Su, and L. F. Lester, “Gain compression and above-threshold linewidth enhancement factor in 1.3-μm InAs-GaAs quantum-dot lasers,” IEEE J. Quantum Electron. 44, 946–951 (2008). 22. F. Zubov, M. Maximov, E. Moiseev, A. Savelyev, Y. Shernyakov, D. Livshits, N. Kryzhanovskaya, and A. Zhukov, “Observation of zero linewidth enhancement factor at excited state band in quantum dot laser,” Electron. Lett. 51, 1686–1688 (2015). 23. C. Mesaritakis, C. Simos, H. Simos, S. Mikroulis, I. Krestnikov, E. Roditi, and D. Syvridis, “Effect of optical feedback to the ground and excited state emission of a passively mode locked quantum dot laser,” Appl. Phys. Lett. 97, 061114 (2010). 24. A. Röhm, B. Lingnau, and K. Lüdge, “Ground-state modulation-enhancement by two-state lasing in quantum-dot laser devices,” Appl. Phys. Lett. 106, 1–6 (2015). 25. F. Grillot, N. A. Naderi, J. B. Wright, R. Raghunathan, M. T. Crowley, and L. F. Lester, “A dual-mode quantum dot laser operating in the excited state,” Appl. Phys. Lett. 99, 1110–1113 (2011). 26. J. D. Walker, D. M. Kuchta, and J. S. Smith, “Wafer-scale uniformity of vertical-cavity lasers grown by modified phase-locked epitaxy technique,” Electron. Lett. 29, 239–240 (1993). 27. H. Huang, D. Arsenijević, K. Schires, T. Sadeev, D. Bimberg, and F. Grillot, “Multimode optical feedback dynamics of InAs/GaAs quantum-dot lasers emitting on different lasing states,” AIP Adv. 6, 125114 (2016). 28. A. Kovsh, N. Maleev, A. Zhukov, S. Mikhrin, A. Vasil’ev, E. Semenova, Y. Shernyakov, M. Maximov, D. Livshits, V. Ustinov, N. Ledentsov, D. Bimberg, and Z. Alferov, “InAs/InGaAs/GaAs quantum dot lasers of 1.3 μm range with enhanced optical gain,” J. Cryst. Growth 251, 729–736 (2003). 29. O. Stier, M. Grundmann, and D.
Bimberg, “Electronic and optical properties of strained quantum dots modeled by 8-band k·p theory,” Phys. Rev. B 59, 5688 (1999). 30. A. Schliwa, M. Winkelnkemper, and D. Bimberg, “Few-particle energies versus geometry and composition of InxGa1−xAs/GaAs self-organized quantum dots,” Phys. Rev. B 79, 075443 (2009). 31. N. Schunk and K. Petermann, “Stability analysis for laser diodes with short external cavities,” IEEE Photon. Technol. Lett. 1, 49–51 (1989). 32. J. Ohtsubo, Semiconductor Lasers: Stability, Instability and Chaos, Springer Series in Optical Sciences (Springer, 2010). 33. J. P. Toomey, D. M. Kane, C. McMahon, A. Argyris, and D. Syvridis, “Integrated semiconductor laser with optical feedback: transition from short to long cavity regime,” Opt. Express 23, 18754 (2015). 34. N. Gavra and M. Rosenbluh, “Behavior of the relaxation oscillation frequency in vertical cavity surface-emitting laser with external feedback,” J. Opt. Soc. Am. B 27, 2482–2487 (2010). 35. M. Stubenrauch, G. Stracke, D. Arsenijević, A. Strittmatter, and D. Bimberg, “15 Gb/s index-coupled distributed feedback lasers based on 1.3 μm InGaAs quantum dots,” Appl. Phys. Lett. 105, 011103 (2014). Introduction The transfer of massive amounts of information is no longer limited to optical long-distance transoceanic links or backbone networks. Today, data throughput in shorter-reach networks is even larger. Metropolitan and access networks and, finally, fiber-to-the-home systems show huge growth rates. In data centers and supercomputers, most of the information is exchanged between servers. Intra-chip and inter-chip interconnects are coming next [1,2]. New requirements, in particular on energy consumption, which exhibits trade-offs with data rate, must now be carefully considered in the design and operation of new generations of photonic devices [3,4]. Owing to their truly discrete energy states, InAs/GaAs quantum dot (QD) lasers offer superior continuous-wave properties as compared to their quantum well (QW) counterparts [3][4][5][6][7]. The lower threshold current and the higher temperature stability make QD lasers much better candidates for reducing power consumption [3], which is vital for silicon photonic integration [8][9][10][11]. Let us stress that recent works have also reported the possibility of extending heterogeneous silicon platforms to the mid-infrared window, hence paving the way for novel types of sensor applications [12,13]. Commonly, QD lasers are engineered to operate on the ground-state (GS) transition because of its lower threshold current density. Owing to the strong damping of the relaxation oscillations, GS lasing emission commonly exhibits a higher resistance to external optical feedback, which is desired for laser stability and isolator-free applications [14,15]. However, it is known that this strong damping of GS QD lasers limits their modulation capabilities at room temperature [16][17][18][19][20]. In order to increase the speed, prior studies have proposed taking advantage of the stimulated emission originating from the first excited state (ES) transition [18][19][20]. Owing to faster carrier capture as well as a twice larger saturated gain, ES QD lasers are more promising for high-speed applications. For instance, the twice larger degeneracy of the ES translates into a larger maximum gain and differential gain and a smaller nonlinear gain compression.
For instance, it was shown that ES lasers exhibit a much smaller K-factor as compared to GS ones, which is of prime importance for maximizing the achievable bandwidth of high-speed transmitters [16]. The first experimental demonstration performed at the link level was achieved with 1.3 µm InAs/GaAs QD lasers emitting on the first ES transition, for which modulation capabilities up to 25 Gbps (OOK) and 35 Gbps (PAM) have been successfully reported [16,20]. In addition, it was shown that ES QD lasers can exhibit a near-zero linewidth enhancement factor (LEF), which is crucial for a multitude of applications [21,22]. This work reports on comparative experiments dealing with the multimode optical feedback dynamics [14] of two InAs/GaAs QD Fabry-Perot (FP) lasers having identical active regions but emitting from different energy states. The present QD lasers do not exhibit two-state lasing dynamics, where ES and GS lasing can take place simultaneously [23,24]; instead, they emit exclusively on either the GS or the ES. In practice, the ES emission can be selected in multiple ways, e.g., by shortening the cavity length, using proper facet coatings, or directly through a dichroic mirror [17,20]. An alternative approach uses a DFB laser whose grating pitch is matched to the ES transition [25]. Here, the ES selection is simply obtained by exploiting the natural wavelength dispersion of the photoluminescence (PL) peak across the entire wafer [26]. The fabrication yield is relatively high, and the wafer uniformity is very good, with variations between material parameters of the order of 1%. In order to avoid problems which might arise from GS-ES interplay dynamics, independent devices for ES and GS emission were processed for this work, rather than a single-section device emitting on both energy states [23,25]. As opposed to our previous studies [27], which concentrated only on long-delay feedback, this paper goes an important step beyond by analyzing the full transition from short- to long-delay regimes. Boundaries associated with the birth of both periodic and chaotic oscillations are unveiled and are shown to depend on the external cavity length. The experiments show richer feedback dynamics in the ES QD laser as compared to the GS laser, in which no chaotic pulsations are observed. The present study provides a detailed understanding of the nonlinear dynamics of multimode QD lasers and is of paramount importance for the development of feedback-resistant transmitters. Experimental configuration The active region of both devices, one emitting solely on the GS and the other solely on the ES, is based on a dot-in-well structure, including 10 InAs dot sheets grown by molecular beam epitaxy (MBE) and embedded in InGaAs quantum wells [28]. The dot lateral extension is around 30 nm, with a dot density of 3–5 × 10¹⁰ cm⁻². Lasers are left as-cleaved, both cavities are 1 mm long, and the ridge waveguide etched through the active region is 2 µm wide [28]. Figure 1 displays the light-current (LI) curves of both lasers. For the GS laser, the threshold current I_th is 16.5 mA, the external efficiency is 21%, and the gain peak wavelength is ∼1300 nm. For the ES one, the threshold current I_th is 88.5 mA, the external efficiency is 11%, and the gain peak is at ∼1220 nm. The insets of Fig. 1 display the optical spectra taken at 1.75 × I_th; the insets (II) highlight the center of the emission marked by the red rectangle boxes in insets (I).
Most interestingly, the ES QD laser exhibits a modulated optical spectral envelope, in contrast to the GS QD laser [27]. Qualitatively, these peculiar properties can be attributed to a competing number of allowed excitonic transitions from the ES, based both on the ES electron doublet splitting and on the variety of energetically close-lying hole states [29,30]. The experiments described hereinafter are performed at room temperature (298 K). As shown in Fig. 1(b), to avoid power roll-over, the bias current was fixed at 160 mA (∼1.75 × I_th) to maximize the ES QD laser output power. The same bias-to-threshold ratio was kept for the whole investigation of the GS laser dynamics. The operation points are indicated in Fig. 1 with black dots. Figure 2 depicts the free-space optical feedback setup. The QD lasers are mounted on a suspended optical table to minimize environmental perturbations. The free-space external cavity is located on the left side of the laser; the lasing emission from the rear facet is coupled by an AR-coated lens and reflected by a movable mirror, which allows us to adjust the external cavity length L_ext. The latter is varied from 2 cm up to 50 cm, which corresponds to a ratio f_RO/f_ext ranging from 0.2 to 10, with f_RO the relaxation oscillation frequency of the solitary laser and f_ext = c/2L_ext the frequency of the external cavity. Such a tuning allows continuous probing of the laser dynamics within both the short- and long-delay regimes, corresponding to cavity lengths where the ratio f_RO/f_ext is <1 or >1, respectively [31]. For each L_ext, the focus of the lens is readjusted in order to safely collimate the coupled light onto the mirror. The feedback strength r_ext, defined as the ratio of the returning power to the laser output power, is controlled by a free-space variable optical attenuator (VOA). The feedback strength r_ext, which takes into account the coupling loss between the facet and the external cavity, is calculated with an accuracy better than 0.01%. Due to the different output beam divergences, the range of feedback strength r_ext is not exactly the same for both devices. For the GS laser, r_ext ranges from 0.04% to ∼75%, while it ranges from 0.04% to ∼55% for the ES one. Emission from the front facet is then coupled by an AR-coated lens-end fiber and isolated for further analysis. On the detection path, an optical spectrum analyzer (OSA) and an electrical spectrum analyzer (ESA) are connected to monitor the dynamics simultaneously. Figures 3(a) and 3(b) depict the radio-frequency (RF) spectra of both QD lasers operating under free-running conditions and short-delay feedback with L_ext = 3 cm (f_ext = 5 GHz). Without feedback, the free-running RF spectra remain flat, with a level of RF power comparable to that of the noise floor. In order to get a more complete overview of the dynamics, Figs. 3(c) and 3(d) also display the RF spectral mapping with respect to the feedback ratio, assuming the same experimental conditions. The color bar represents the RF power measured by the photo-detector. The green dashed lines correspond to the frequency of the external cavity (f_ext = 5 GHz), while the orange ones indicate the level of the relaxation oscillation frequencies f_RO of the free-running lasers. Let us stress that the relaxation oscillation frequencies f_RO are not directly extracted from the setup presented in Fig. 2.
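As a side note on the bookkeeping used throughout this section, the sketch below tabulates f_ext = c/2L_ext and the short-/long-delay classification via f_RO/f_ext over the 2-50 cm range. The f_RO values (2.4 GHz for the GS laser, 1.6 GHz for the ES one) are the free-running values quoted in the following paragraphs; the script itself is purely illustrative.

```python
# Cavity-length bookkeeping: f_ext = c / (2 * L_ext) and the regime ratio.
c = 3e8  # speed of light (m/s)

def f_ext_ghz(L_ext_cm: float) -> float:
    """External-cavity frequency in GHz for a cavity length given in cm."""
    return c / (2 * L_ext_cm * 1e-2) / 1e9

for L in (2, 3, 6.25, 25, 50):                       # cm, within the 2-50 cm span
    fext = f_ext_ghz(L)
    for label, f_ro in (("GS", 2.4), ("ES", 1.6)):   # free-running f_RO (GHz)
        ratio = f_ro / fext
        regime = "short-delay" if ratio < 1 else "long-delay"
        print(f"L_ext={L:5.2f} cm  f_ext={fext:5.2f} GHz  "
              f"{label}: f_RO/f_ext={ratio:5.2f} ({regime})")
```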
Instead, in order to reduce the loss, the output of the laser was directly sent to the photodiode and the ESA, allowing better accuracy of the measured values. At 1.75 × I_th, f_RO of the GS QD laser is about 2.4 GHz, while the ES QD laser exhibits a smaller value of 1.6 GHz. The difference between these values can be attributed to larger thermal effects in the ES QD laser, which is driven at a five times larger current, as shown in Fig. 1(b). In addition, the inhomogeneous broadening due to the much larger energy level dispersion (see inset II of Fig. 1(b)) must be taken into account as well. Overall, both lasers remain perfectly stable for low values of r_ext (a few percent); then, a Hopf bifurcation, with the undamping of the relaxation oscillations into period-one oscillations, arises above ∼40% for the GS laser and ∼20% for the ES one. Under optical feedback, characteristic frequencies are observed which differ depending on the nature of the lasing transition. For instance, Fig. 3(a) shows that for the largest feedback ratio (i.e., r_ext = 75.9%), the GS QD laser is driven by periodic dynamics. In this regime, the dominant contribution, peaking at about 2.5 GHz, results from the relaxation oscillations, as seen in Fig. 3(c). Interestingly, this device emitting exclusively on the GS transition does not exhibit a clear route to chaos. Thus, the dynamics evolves from a stable solution to periodic oscillations without any chaotic pulsations, whatever the feedback level. For the QD laser emitting on the ES transition, Fig. 3(b) unveils a more regular route to chaos, with periodic oscillations (blue) at 2.5 GHz (r_ext = 22%) followed by a chaotic regime (red) characterized by a high noise pedestal (r_ext = 54.5%). It is true that the periodic oscillation frequency observed in the ES laser here is similar to that of the GS laser; however, as the corresponding r_ext differs, the excited periodic oscillations do not behave the same way. As seen from Fig. 3(d), the dominant frequency slightly differs from the relaxation oscillation frequency of the ES laser, which can be attributed to the increase of the refractive index due to thermal effects at this higher bias level. Results and discussion Although the two lasers are based on the same active medium, our results show non-symmetrical responses to optical perturbations. Indeed, because the stimulated emission of the first QD laser exclusively originates from the GS transition, the carrier dynamics involves transport, capture and relaxation, unlike for the ES. In other words, the ES QD laser alone exhibits richer optical feedback dynamics. The extra carrier transport required for emission on the GS leads to a larger damping rate γ_D, preventing chaotic oscillations even at the highest feedback ratios. According to the data from our previous work, the damping factor γ_D of this ES QD laser was estimated to be much smaller, about 0.6 GHz, as compared to that of the GS QD one, which is above 18 GHz [27]. Besides, it has to be noted that the shapes of the optical spectra may also affect the sensitivity to optical feedback. As shown in the insets (II) of Fig. 1(a) and 1(b), the ES QD laser suffers from a stronger modal competition as compared to the GS laser [27,29,30]. In what follows, the RF spectral mappings are used to extract the boundaries associated with the periodic and chaotic states in both the short- and long-delay regimes. To do so, the following criteria are used.
First, the threshold of periodic oscillation is defined as the excited peak being 5 dB above the free-running noise level. Second, the threshold of chaotic oscillation is defined as the noise level of the RF spectrum being more than 10 dB above the free-running noise spectrum. Based on these two criteria, Fig. 4 depicts the boundaries extracted at 1.75 × I_th as a function of the external cavity length L_ext. The separation between the short- and long-delay regimes is marked by the vertical line (orange) corresponding to the condition f_RO/f_ext = 1. For improved precision, measurements are performed by varying the external cavity length every centimeter close to the transition, while larger steps are taken above this area. Due to the absence of chaos in the GS QD laser, only the boundaries associated with the transition between fixed points and periodic oscillations (blue) are reported in Fig. 4(a), while for the ES QD laser those from periodic to chaotic oscillations (red) are also present in Fig. 4(b). Because the GS QD laser is strongly overdamped, the lower limit of the periodic boundaries is always found at larger feedback levels. In addition, within the short-delay regime (f_RO/f_ext < 1), the boundaries show some residual undulations. This effect directly results from interferences between internal and external cavity modes [32], which means that the extrema correspond to situations where the laser is either stable or unstable. In the long-delay regime (f_RO/f_ext > 1), the feedback ratio delimiting the boundaries keeps decreasing, until it progressively becomes rather independent of the external cavity length. Hence, no undulations with the feedback phase are observed. This smooth transition between the short- and long-delay regimes is different from what typically occurs for single-frequency lasers, for which a sharper transition is usually observed [33]. In order to further understand the impact of the optical feedback, our results are now described by a standard rate equation model. To do so, let A, φ, and N be the field amplitude, the phase, and the carrier density, respectively, such that [32]:

dA(t)/dt = (1/2)[Γa(N(t) − N_t) − 1/τ_p]A(t) + κA(t − τ_ext) cos θ(t)   (1)
dφ(t)/dt = (α_H/2)[Γa(N(t) − N_t) − 1/τ_p] − κ[A(t − τ_ext)/A(t)] sin θ(t)   (2)
dN(t)/dt = I/(qV) − N(t)/τ_c − Γa(N(t) − N_t)A²(t)   (3)

with τ_ext = f_ext⁻¹ and θ(t) = ω_0τ_ext + φ(t) − φ(t − τ_ext) the respective round-trip time and feedback phase in the external cavity, Γ the confinement factor, a the linear gain coefficient, N_t the carrier density at transparency, τ_p the photon lifetime, α_H the LEF, I the pump current, q the electron charge, V the active region volume, and τ_c the carrier lifetime. The coefficient κ in the above equations is linked to the feedback ratio r_ext through the expression:

κ = [(1 − R)/τ_in]√(r_ext/R)   (4)

with τ_in and R the internal photon round-trip time and the facet reflectivity subjected to the optical feedback. Assuming small perturbations of the steady states A_s, φ_s and N_s, such that A_s + δAe^(γt), φ_s + δφe^(γt) and N_s + δNe^(γt), the Jacobian matrix can be extracted, and the characteristic equation is then given by its determinant D(γ) [32], in which f_RO = ω_RO/2π is the relaxation oscillation frequency of the free-running laser. Taking γ = i2πf and assuming a weak feedback configuration (i.e., (2πf_RO)² ≫ κ²), the excited periodic frequency associated with the stability boundaries can be expressed analytically (Eq. (5)) [32]. Equation (5) shows that, under weak feedback, the excited periodic frequencies f_P oscillate on either side of the free-running relaxation oscillation frequency f_RO as the external cavity length varies. It is possible to find some positions of the external cavity for which the laser stability is further enhanced.
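As a rough numerical illustration of the rate equation model above, the sketch below integrates the three delay equations with a crude fixed-step Euler scheme, using the stored amplitude and phase arrays as the delay history. All parameter values are generic textbook numbers chosen only to land in the GHz range; they are not the measured parameters of the two devices, and the feedback phase term ω_0τ_ext is arbitrarily set to zero.

```python
# Minimal Lang-Kobayashi-type integration sketch; illustrative parameters only.
import numpy as np

Gamma = 0.3            # confinement factor
a = 1e-12              # linear gain coefficient (m^3/s)
N_t = 1.0e24           # transparency carrier density (m^-3)
tau_p = 2e-12          # photon lifetime (s)
tau_c = 1e-9           # carrier lifetime (s)
alpha_H = 3.0          # linewidth enhancement factor
V = 1e-16              # active region volume (m^3)
q = 1.602e-19          # electron charge (C)
N_th = N_t + 1.0 / (Gamma * a * tau_p)   # threshold carrier density
I = 2.0 * q * V * N_th / tau_c           # pump at ~2x threshold
kappa = 5e9            # feedback rate (s^-1), well above undamping
L_ext = 0.03           # 3 cm external cavity, as in Fig. 3
tau_ext = 2 * L_ext / 3e8                # external round-trip delay (s)
omega0_tau = 0.0       # feedback phase omega_0 * tau_ext, set to zero

dt = 1e-13
n_delay = int(round(tau_ext / dt))
steps = 200_000

A = np.full(steps, 1e9)     # field amplitude history, sqrt(photon density)
phi = np.zeros(steps)
N = N_th

for k in range(n_delay, steps - 1):
    gain = Gamma * a * (N - N_t) - 1.0 / tau_p
    theta = omega0_tau + phi[k] - phi[k - n_delay]
    A[k + 1] = A[k] + dt * (0.5 * gain * A[k]
                            + kappa * A[k - n_delay] * np.cos(theta))
    phi[k + 1] = phi[k] + dt * (0.5 * alpha_H * gain
                                - kappa * (A[k - n_delay] / A[k]) * np.sin(theta))
    N += dt * (I / (q * V) - N / tau_c - Gamma * a * (N - N_t) * A[k] ** 2)
    if A[k + 1] < 1e6:      # guard: keep the amplitude positive (Euler artifact)
        A[k + 1] = 1e6

# RF spectrum of the intensity, the analogue of the ESA traces in Fig. 3
P = A[steps // 2:] ** 2
spec = np.abs(np.fft.rfft(P - P.mean())) ** 2
freqs = np.fft.rfftfreq(P.size, dt)
print(f"dominant RF component near {freqs[np.argmax(spec[1:]) + 1] / 1e9:.2f} GHz")
```

With κ this far above the undamping threshold, the intensity spectrum develops a broadband pedestal around the relaxation oscillation frequency, qualitatively mirroring the chaotic regions of the ES maps in Fig. 3(d).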
Figure 5 shows the extracted f_P (blue points) associated with the stability boundary between fixed points and periodic oscillations for both QD lasers. As previously mentioned, in order to reach better precision, each point is obtained by considering a fine tuning step of the external cavity length, especially close to the transition. The vertical line (orange) corresponding to f_RO/f_ext = 1 once again indicates the separation of the regimes. The lines in red give the frequency branches of the first three external cavity modes (ECM), f_n = n × f_ext with n = 1, 2, 3. In the short-delay regime, and for some positions slightly above f_RO/f_ext = 1, the excited frequency continuously decreases with the external cavity length, with some oscillations located on either side of f_RO. This effect is notably enhanced when the external frequencies are larger than the relaxation frequency, in qualitative agreement with Eq. (5) and with the evolution of the boundaries shown in Fig. 4. However, it turns out that the observed oscillations are not perfectly symmetric on both sides of f_RO, whatever the lasing transition. This discrepancy is attributed to the stronger feedback conditions (r_ext > 2% and κ > 6.4 GHz > f_RO) used in the experiments; hence, Eq. (5) is at the edge of its domain of validity. This experimental limitation, which is enhanced in the short-cavity regime, results from the nature of the QD lasers, for which the κ coefficient is always larger than the relaxation oscillation frequency. Interestingly, from Fig. 5, for some positions of the external cavity length, the maxima tend to coincide with the external cavity mode frequencies. In other words, when the maxima perfectly match the external branches, the relaxation oscillations are undamped because the damping factor decreases as the length of the external cavity increases [34]. As such, it becomes more favorable for the system to switch to the next frequency branch with a higher damping. Then, when the QD lasers enter the long-delay regime, the sensitivity to the feedback phase is lost, and the excited frequency converges towards f_RO. However, the convergence is not perfect for the ES lasing transition, probably because of the very low damping rate observed in this laser. Lastly, with increasing optical feedback at a fixed external cavity length, the laser oscillation evolves to a periodic state after crossing the boundary. However, when the external cavity length satisfies the condition L_ext = mc/(2f_RO), i.e., when f_RO is an integer multiple m of f_ext, the laser can constructively couple with the external cavity, meaning that a larger fraction of feedback light is required to destabilize it. The stable area is relatively larger at this location, but the laser can become unstable once the feedback exceeds the critical point corresponding to the birth of chaotic oscillations (see Fig. 4). Conclusion This work provides fundamental insight into the multimode optical feedback dynamics of InAs/GaAs QD lasers emitting on different lasing states. Although the two lasers are made from the same active medium, their responses to the external perturbation are found to be markedly different. The GS laser displays a strong resistance to optical perturbations, without any chaotic pulsations for all measured external cavity lengths and feedback strengths. In contrast, the ES laser exhibits richer nonlinear dynamics with both periodic and chaotic oscillations.
Such a difference is attributed to the very large damping factor of the GS laser, which prevents any chaotic oscillations even at the largest feedback ratios. Lastly, the evolution of the extracted boundaries and excited periodic frequency unveils, in the short-delay regime, a clear oscillatory dependence on the external cavity length, while in the long-delay regime the system becomes rather independent of the feedback phase. However, in this case, the transition from short- to long-delay regimes is not as sharp as that usually observed in single-mode lasers. In conclusion, these results provide useful guidelines for designing quantum-dot-laser-based system solutions with low energy consumption, which is of prime importance for short-reach networks. Since quantum dots are touted to be very promising for silicon photonics [9], this work provides important information for the realization of on-chip isolator-free active components. For instance, a recent work has reported a 25 Gbps QD silicon transmitter operating without an optical isolator [15]. Our future work will extend these investigations to single-mode InAs/GaAs distributed feedback lasers [35]. Modeling taking into account the fundamental features of quantum dots, in both single-mode and multimode configurations, will be performed.
2018-04-03T00:49:46.889Z
2018-01-22T00:00:00.000
{ "year": 2018, "sha1": "3805310a3668d05d07f653eb60ea912f3098cad3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.26.001743", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f69a75f638abb3959ea820b158ac81aab34abdc3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
238742273
pes2o/s2orc
v3-fos-license
Melatonin as a Therapeutic Agent for the Inhibition of Hypoxia-Induced Tumor Progression: A Description of Possible Mechanisms Involved Hypoxia has an important role in tumor progression via the up-regulation of growth factors and cellular adaptation genes. These changes promote cell survival, proliferation, invasion, metastasis, angiogenesis, and energy metabolism in favor of cancer development. Hypoxia also plays a central role in determining the resistance of tumors to chemotherapy. Hypoxia of the tumor microenvironment provides an opportunity to develop new therapeutic strategies that may selectively induce apoptosis of the hypoxic cancer cells. Melatonin is well known for its role in the regulation of circadian rhythms and seasonal reproduction. Numerous studies have also documented the anti-cancer properties of melatonin, including anti-proliferation, anti-angiogenesis, and apoptosis promotion. In this paper, we hypothesized that melatonin exerts anti-cancer effects by inhibiting hypoxia-induced pathways. Considering this action, co-administration of melatonin in combination with other therapeutic medications might increase the effectiveness of anti-cancer drugs. In this review, we discussed the possible signaling pathways by which melatonin inhibits hypoxia-induced cancer cell survival, invasion, migration, and metabolism, as well as tumor angiogenesis. Introduction Cancer is a major cause of morbidity and mortality worldwide [1]. Although genetic mutations have a decisive role in cancer development, many cancers are a consequence of environmental risk factors such as diet, smoking, pollutants, stress, inflammation, etc. [2]. Several features of cancer cells pave the way for tumor development, including persistent proliferation and insensitivity to growth suppressors, constant DNA replication, evasion of both apoptosis and immune surveillance, impaired energy metabolism, sustained angiogenesis, invasion, and metastasis [3]. Metastasis is the most common event that makes the treatment of cancer challenging. During tumorigenesis, some cancer cells readily undergo metastasis; this process begins with the dissociation of cells from the tumor mass and their invasion into the tumor microenvironment [3]. These invasive cells pass across the endothelial wall and enter the blood and/or lymphatic circulatory systems, a process known as intravasation. Some of these circulating cells may escape the circulation (extravasation) and initiate growth at a distant site to produce subsets of the original tumor. If this new colony continues to proliferate, it can form a secondary metastatic tumor [4]. In some cases, continued chemotherapy leads to treatment resistance. Chemoresistance often occurs with recurrent cancers. The recurrence of cancer is a result of surviving cancer stem cells; these cells play a central role in tumor regrowth [5]. Neoangiogenesis is a notable feature of tumors, in which new vessels sprout from pre-existing blood vessel networks to provide vital nutrients and oxygen for cancer cell growth and proliferation [3]. It has been shown that the disruption of pro-angiogenic and anti-angiogenic regulators could lead to uncontrolled angiogenesis [5]. Hypoxia (oxygen tension less than 7 mmHg), which is sensed by hypoxia-inducible factors (HIFs), induces overexpression of growth factors and cellular adaptation genes, which subsequently promote angiogenesis, cancer cell survival, proliferation, and energy metabolism [6].
The newly created vessels are immature and leaky, and therefore oxygenation and drug delivery are sometimes diminished in these vessels; accordingly, hypoxic tumors are usually resistant to chemotherapy. The hypoxic state in the tumor microenvironment may provide new therapeutic approaches to selectively destroy the hypoxic cells. In this regard, two distinct approaches have been proposed, including "bioreductive prodrugs" and "molecular target inhibitors" [7]. Moreover, targeting the pro-angiogenic factors or their receptors is considered a valuable strategy for limiting the growth and metastasis of tumors [8]. Melatonin (N-acetyl-5-methoxytryptamine), a multifunctional molecule, is produced in and released from the pineal gland and likely synthesized in the mitochondria of all other cells, where it is used locally and not released into the blood [9]. Many functions have been reported for melatonin, including the regulation of circadian rhythms and annual cycles of reproduction, antioxidant actions, and immune system regulation [10,11]. Additionally, melatonin has multiple anti-cancer properties such as anti-proliferative, anti-angiogenic, immune-modulating, and apoptotic activities [12][13][14][15]. More interestingly, studies have demonstrated that melatonin modulates hypoxia-induced tumorigenesis [16][17][18], and that co-administration of melatonin in combination with other therapeutic compounds increases the effectiveness of those treatments [19][20][21]. This review aims to describe the pathways involved in hypoxia-induced cancer development and, more importantly, to explain how melatonin may inhibit hypoxia-mediated tumor progression, together with the possible mechanisms involved. Hypoxia and Cancer (Tumor) Progression Hypoxia occurs in many solid tumors and acts as a selective agent throughout metastatic transformation and progression [22]. Although hypoxia negatively affects tumor proliferation in some conditions, it mainly allows tumor cells to adapt to insufficient oxygen and nutrients and consequently enhances the activity and aggressiveness of cancer cells. Moreover, genomic changes occurring in the tumor cells under low oxygen conditions can make it feasible for them to survive. In turn, the excessive proliferation of cancer cells exacerbates the hypoxic state. As a result, a vicious circle of hypoxia and tumor progression develops [23]. Hypoxia is also associated with genomic instability and induces malignant phenotypes such as apoptosis resistance [24]. Furthermore, poor vascularity reduces tumor cell exposure to drugs during chemotherapy and oxidative damage during radiotherapy; thus, it is common for tumors to develop resistance to chemotherapy and radiotherapy under hypoxic conditions [23]. Hypoxia Induces Cancer Cell Survival The oxygen state determines whether a cell will or will not undergo apoptosis [26]. Moreover, based on the duration of exposure to hypoxia, the response of cancer cells can vary from death to survival. The high production of reactive oxygen species (ROS) induced by cycling hypoxia is associated with tumor cell survival and progression [27]. However, the hypoxia-induced HIF pathway sometimes acts atypically with regard to cancer cell survival. For example, HIF-1 can either prevent cell death or induce apoptosis [28].
It is also reported that HIF-1 regulates insulin-like growth factor 2 (IGF-2), a crucial survival factor, in hypoxic tumor cells [29]. Hypoxia-related pathways including PI3K/AKT/mTOR, ERK, and NF-κB are also involved in cancer cell proliferation and survival [30]. Hypoxia can lead to autophagy via HIF-1α and NF-κB. It is well established that autophagy is a pro-survival process that generates nutrients and biomolecules required by rapidly growing cells, and it also protects the cells from apoptosis via Bcl-2 subfamily members such as BNIP3 (Bcl-2/adenovirus E1B 19 kDa interacting protein 3) and BNIP3L (Bcl-2/adenovirus E1B 19 kDa interacting protein 3-like) [31]. Hypoxia can also downregulate caveolin-1 (Cav-1), and studies have demonstrated that loss of Cav-1 up-regulates TIGAR (TP53-induced glycolysis and apoptosis regulator), which protects cells against oxidative stress and apoptosis [31]. In summary, it can be postulated that hypoxia, at least in the short term, induces cancer cell survival by activating autophagy, suppressing apoptosis, and inducing metabolic adaptation [32]. Hypoxia Induces Tumor Angiogenesis One of the most significant effects of hypoxia is the induction of neoangiogenesis in the tumor [33]. Angiogenesis is a critical step in cancer progression that provides nutrients and oxygen [34]. For this purpose, the tumor forms a prerequisite vascular network not only by recruiting the host vessels, but also by forming new microvessels. The newly formed vasculature displays various irregularities in structure and function, which result in abnormal blood flow and inefficient oxygen delivery to the tumor cells and, consequently, the development of the hypoxic status [23]. Additionally, the enlargement rate of the tumor exceeds the growth of new blood vessels, which also creates a relatively hypoxic area, especially near the center of the tumor [35]. In a growing tumor, oxygen demand is increased but its availability decreased, which may reinforce the hypoxia-angiogenesis cycle. Hypoxia induces a cascade of proangiogenic factors, including VEGF, angiopoietin 2 (Ang-2), platelet-derived growth factor (PDGF), and basic fibroblast growth factor (bFGF), while also reducing angiogenic inhibitors such as thrombospondin through HIF-1 [36]. There is some evidence that HIF-2α plays a role in the up-regulation of VEGF and its receptor [37]. VEGF and Ang-2 are the most prominent regulators of angiogenesis induced by hypoxia [38]. In this regard, Olaso et al. [39] have demonstrated that hepatic stellate cells existing in hypoxic conditions release VEGF during the formation of micrometastases. The development of macrometastases becomes possible once the endothelial cells accumulate and form a sustainable, stable vasculature. Moreover, hypoxia up-regulates extracellular matrix (ECM) proteins such as lysyl oxidase and matrix metalloproteinases (MMPs), which have a role in angiogenesis [40]. MMP inducers such as the ECM metalloproteinase inducer (EMMPRIN/CD147) promote angiogenesis not only through protease activity, but also by increasing levels of the soluble VEGF isoforms [41]. Furthermore, membrane-type 1 matrix metalloproteinase (MT1-MMP) is present in some cancer cells and has a central role in the release of Sema4D, an angiogenesis-inducing factor in tumors, under hypoxic conditions [42]. Additionally, hypoxia down-regulates the soluble receptor of VEGF (known as sFlt-1, a VEGF antagonist), and thus increases VEGF activity [43,44].
Hypoxia-induced HIF-1α can also up-regulate the Notch signaling pathway which, along with Wnt signaling, determines the vascular density [45]. Finally, hypoxia promotes angiogenesis by stimulating the proangiogenic factor IL-8 via activation of NF-κB [46]. The above-mentioned findings clearly show the role of hypoxia in tumor angiogenesis. However, further studies are required to define the underlying mechanisms and mediators that are involved in these processes. Hypoxia Induces Invasion and Migration of Cancer Cells The first step in metastasis is the invasion of cancer cells between endothelial cells, which allows them to enter the lymphatic or cardiovascular system for further spreading. Generally, cancer cell invasion begins with the degradation of the extracellular matrix by MMPs and the destruction of integrin adhesion [47]. The potential of cancer cells to alter extracellular matrix remodeling and digestion of the basement membrane also contributes to tumor progression and invasion [38,47]. Hypoxia leads to the detachment of tumor cells by downregulating cell adhesion molecules, and by up-regulating the molecules involved in the degradation of integrin and cell attachment components, such as MMP-9 and urokinase-type plasminogen activator receptor (uPAR) [48,49]. Hypoxia, by stabilizing microtubules and facilitating integrin localization in the cell membrane, also stimulates the cell motility needed for invasion and migration [47]. Hypoxia-induced NF-κB also up-regulates cyclooxygenase-2 (COX-2) and consequently the expression of some essential cell surface and cytoskeletal proteins required for tumor invasion, including matrix metalloproteinase-2 (MMP-2) and urokinase-type plasminogen activator (uPA) [23,50]. Moreover, the Rho family member A (RhoA), which is required for the activation of MT1-MMP, is increased in the hypoxic microenvironment [47]. Furthermore, hypoxic macrophages can indirectly stimulate the secretion of MMPs [51]. Hypoxia Regulates the Metabolism of Cancer Cells The ATP source in normal cells is mitochondrial oxidative phosphorylation, whereas in tumor cells it is cytosolic glycolysis under both normoxic (Warburg effect) and hypoxic (Pasteur effect) conditions [23]. Tumor cells shift toward the glycolytic pathway to reduce oxygen consumption, increasing the rate of glucose uptake and lactic acid fermentation [52]. It has been reported that there is a correlation between lactate production and the metastatic spread of tumors [53,54]. This glycolytic processing is likely regulated by the hypoxia-inducible factor HIF-1, which increases the transcription of genes encoding glucose transporters (GLUT1 and GLUT3), VEGF, and glycolytic enzymes (lactate dehydrogenase A, LDHA) [55]. For example, LDHA, a target of HIF-1, catalyzes the conversion of pyruvate to lactate, which is crucial for tumor initiation, maintenance, and progression [56]. Moreover, HIF-1 increases pyruvate dehydrogenase kinases (PDK) 1 and 3, which reduce mitochondrial uptake of pyruvate and divert it for conversion into lactate by LDH [57]. Hypoxia can also increase glycogen synthesis as a survival strategy under harsh conditions; this process is carried out by HIF-1 and HIF-2 via up-regulation of glycogenesis enzymes, including phosphoglucomutase 1 (PGM1), glycogen synthase 1 (GYS1), glucose-1-phosphate uridylyltransferase (UTP), and 1,4-α-glucan branching enzyme (GBE1) [52]. These collective data show that hypoxia induces several metabolic changes in favor of providing high energy for cancer cells.
Melatonin Definition and Physiological Roles Melatonin (N-acetyl-5-methoxytryptamine) has attracted a great deal of attention in various medical contexts. Although this molecule is produced and secreted by the pineal gland, especially at night, all cells likely produce melatonin for local use [58,59]. In vertebrate cells, melatonin synthesis happens in mitochondria, which contain much higher concentrations of this molecule relative to other organelles. Importantly, these high levels of melatonin are maintained even after pinealectomy [60]. Mitochondria as a source of melatonin are also supported by the observation that isolated mitochondria from oocytes could synthesize melatonin [61]. The roles of this molecule in the regulation of the sleep-wake cycle, circadian and circannual rhythms, seasonal adaptations, reproduction, and immune response have been well documented. Over the last four decades, numerous reports have confirmed that melatonin acts as an endogenous oncostatic agent for many cancer types [59,[62][63][64]. The anti-cancer effects of melatonin are often mediated by both receptor-dependent and receptor-independent mechanisms [65,66]. The receptor-dependent mechanisms involve the G-protein-coupled family of melatonin receptors, MT1 (Mel1a) and MT2 (Mel1b), which inhibit the MAPK and PI3K signaling pathways. The receptor-independent mechanisms are mediated via direct inhibition of calmodulin and cAMP-related pathways by melatonin [67,68], and are related to its ability to modulate oxidative homeostasis [69]. Melatonin acts as an anti-tumor factor by interfering with different properties of cancer cells such as growth, proliferation, metastasis, angiogenesis, immune evasion, and cellular metabolism [70]. Many of these data have been elegantly summarized by Hill and colleagues [70]. Melatonin as a Proposed Therapeutic Factor for the Inhibition of Hypoxia-Induced Tumor Progression As discussed in previous sections, hypoxia is an important factor in tumor progression that positively affects survival, angiogenesis, invasion, migration, and the metabolic status of cancer cells (see Section 2. Hypoxia and cancer (tumor) progression). Hypoxia also contributes to the radioresistance and chemoresistance of the tumor. Melatonin is a potent anti-tumor agent that likely inhibits various hypoxia-induced signaling pathways in cancer cells. Thus, we propose that one way by which melatonin inhibits cancer growth and progression, and also improves therapeutic efficacy, is the inhibition of hypoxia-induced survival, angiogenesis, migration, and invasion (Figure 2). The following section describes how melatonin prevents hypoxia-induced properties of cancer cells. Melatonin Inhibits the Hypoxia-Induced Survival of Cancer Cells Accumulating evidence has confirmed that hypoxia down-regulates apoptotic elements, including caspase-3, -8 and -9, cytochrome c (Cyt c), Fas/FasL, and Bax in cancer cells and therefore supports these cells' survival [14,71]. On the contrary, melatonin inhibits the survival of cancer cells by up-regulating/activating apoptotic components. Furthermore, melatonin down-regulates/inactivates Bcl-2 and Bcl-xL in hypoxic cancer cells [72,73]. Melatonin also blocks the cell cycle and up-regulates p21/WAF1 and p53, which subsequently inhibit the proliferation of hypoxic tumor cells [74]. Melatonin also decreases the expression of cyclin A and cyclin D in hypoxic cells, thereby regulating the cell cycle.
Moreover, it has been shown that melatonin could reduce the proliferation of hypoxic pancreatic stellate cells [14]. Different hypoxia-induced signaling pathways may be targets for melatonin to inhibit cancer cell survival. For instance, hypoxia stimulates the adenylyl cyclase (AC)/cAMP/protein kinase A (PKA) signaling pathway to provide a suitable microenvironmental pH for cancer cell survival [75]. In addition, hypoxia mediates overexpression of the carbonic anhydrase IX (CA IX) gene, which acts as a pH regulator in the tumor, in an HIF-1α-dependent manner [76]. Conversely, melatonin modulates cAMP-related pathways as well as CA IX expression and activity, and thereby makes the conditions less suitable for cancer cells [77]. Moreover, it has been demonstrated that melatonin could increase the phosphorylation of p38 and decrease that of JNK in pancreatic stellate cells under hypoxic conditions, leading to decreased proliferation of the cells [78]. Melatonin has been found to induce apoptosis by sensitizing hepatocellular carcinoma cells to sorafenib and modulating autophagy through the PERK-ATF4-Beclin1 signaling pathway [79]. Another study showed that melatonin inhibited the proliferation of gastric cancer cells via the IRE/JNK/Beclin1 signaling pathway [80]. Figure 2 (caption, partial): ... (4) inhibiting carbonic anhydrase IX (CA IX) expression and activity and cAMP-related pathways to make the environmental pH unsuitable. Melatonin inhibits hypoxia-induced angiogenesis by (1) suppressing the activity of vascular endothelial growth factor (VEGF), angiopoietin-2 (Ang-2), stromal-derived factor 1 (SDF-1), matrix metalloproteinase-2 and -9 (MMP-2 and -9), and angiopoietin-1 and -2 (ANGPT-1 and -2), (2) inhibiting the expression of lipoxygenase (LOX) via interacting with the RZR/RORα nuclear receptor, and (3) blocking the hypoxia-induced tumor-associated macrophage (TAM) and membrane-type 1 matrix metalloproteinase (MT1-MMP) activity and subsequently reducing Semaphorin-4D (Sema4D). Melatonin inhibits the hypoxia-induced invasion and migration of cancer cells by (1) decreasing levels of proteases including Cathepsin C (CTSC), MMP-2, MMP-9, MT1-MMP, and urokinase-type plasminogen activator (uPA), (2) up-regulating the adhesion proteins, such as integrin and E-cadherin, (3) suppressing oxidative-stress-induced detachment of cancer cells via overexpression of the β1 integrin and down-regulation of the ROS-αvβ3 integrin-FAK/Pyk2 (focal adhesion kinase/proline-rich tyrosine kinase 2) signaling pathway, and (4) blocking hypoxia-induced microtubule organization and rearrangement via blocking the Rho kinase 1 (ROCK1) signaling pathway.
Melatonin disturbs hypoxia-induced cancer cell metabolism by (1) reducing reactive oxygen species (ROS) and down-regulating hypoxia-inducible factor-1 (HIF-1), VEGF and glycolysis-related enzymes such as glucose transporter 1 (GLUT1) and 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase 3 (PFKFB3), (2) competing with glucose in binding to GLUT1, and (3) inhibiting the 3-phosphoinositide-dependent protein kinase 1 (PDK-1) signaling pathway. One of the survival strategies against chemotherapy in hypoxic cancer cells is HIF-1-induced chemoresistance. The main player in this process is the truncated VDAC1-∆C (voltage-dependent anion channel 1), which acts as a channel to maintain ATP and inhibit apoptosis [85]. Kristinina et al. [86] revealed that co-treatment with melatonin and retinoic acid down-regulated VDAC1 and the activity of the electron transport chain complexes in HL-60 cells; therefore, it can be postulated that melatonin also puts the survival of chemoresistant cancer cells in danger. Melatonin Inhibits Hypoxia-Induced Tumor Angiogenesis Hypoxia increases ECM proteins such as LOX, which is associated with angiogenesis [40,91]; on the contrary, melatonin suppresses LOX expression via interacting with the RZR/RORα nuclear receptor [92]. Moreover, melatonin suppresses the production of Sema4D, an important angiogenic factor released through MT1-MMP and TAMs, by blocking the hypoxia-induced TAM activity [93]. It can be concluded that melatonin, directly and indirectly, inhibits hypoxia-induced angiogenesis in tumors by modulating HIF-1-induced angiogenic factors and HIF-1 levels/activity. Melatonin Inhibits the Hypoxia-Induced Invasion and Migration of Cancer Cells Hypoxia helps cancer cells to invade and migrate to other parts of the body. In fact, the hypoxic condition makes the invasion and migration of cancer cells possible through both the down-regulation of cell adhesion molecules and the up-regulation of proteases [48]. Hypoxia-induced HIF-1 mediates the up-regulation of ECM degradation enzymes (e.g., CTSC, MMP-2, MMP-9, MT1-MMP, uPA) [81]. On the other hand, melatonin inhibits the migration and invasion of cancer cells by decreasing levels of several proteases including CTSC, MMP-2, MMP-9, MT1-MMP, and uPA [94]. Furthermore, melatonin has the potential to inhibit cancer cell migration via up-regulation of adhesion proteins such as integrin and E-cadherin [74]. Melatonin also suppresses oxidative-stress-induced detachment of cancer cells via overexpression of the β1 integrin and down-regulation of the ROS-αvβ3 integrin-FAK/Pyk2 signaling pathway [95,96]. HIF-1α up-regulates RhoA and Rho kinase 1 (ROCK1), leading to actin-myosin contraction and cell motility [17]. Moreover, Rho triggers the focal adhesion kinase (FAK) signaling pathway and consequently induces motility and an invasive phenotype in hypoxic cancer cells [97]. Interestingly, melatonin blocks hypoxia-induced microtubule organization and rearranges the microtubules via the ROCK1 signaling pathway [66,98]. Moreover, Doganlar et al. [99] showed that melatonin could suppress the invasion of human glioblastoma tumor spheroids by regulating angio-miRNAs and subsequently blocking the HIF1-α/VEGF/MMP9 signaling pathway. The published evidence suggests an inhibitory effect of melatonin on hypoxia-induced cancer cell invasion and migration. Further studies are required to clarify the potential of melatonin in inhibiting invasion and its underlying mechanisms.
Other Effects of Melatonin on Hypoxia-Mediated Tumor Progression Hypoxia changes the metabolic activity of cancer cells toward lower oxygen demand and elevated glucose uptake and lactic acid fermentation [54]. HIF-1 plays a major role in this scenario by up-regulating glycolytic enzymes (e.g., LDHA), GLUT1, GLUT3, and VEGF [55]. Moreover, the expression of PDK-1 and PDK-3, regulators of aerobic glycolysis, is increased by HIF-1, leading to proliferation and chemoresistance of tumor cells. Conversely, melatonin, as a regulator of redox homeostasis, reduces ROS levels and consequently down-regulates HIF-1 and glycolysis-related enzymes such as GLUT1 and PFKFB3 [83,100]. In this regard, it was shown that melatonin treatment limits the expression of GLUT1 in breast cancer cells [101]. Sanchez et al. [102] also demonstrated that melatonin inhibited the Warburg effect in Ewing sarcoma cells by decreasing glucose uptake and LDH activity. The inhibitory effect of melatonin on Warburg-type metabolism was also reported by Reiter and co-workers [103]. Another mechanism by which melatonin may influence glucose uptake into cancer cells is competition with glucose in binding to GLUT1 [104]. Hypoxia also increases the levels of free intracellular Ca2+ and calmodulin (CaM) activity, as well as the Ca2+/CaM signaling pathway [105]. Melatonin probably exhibits oncostatic actions by regulating Ca2+ signaling pathways via interacting with GPCRs or modulating voltage-gated Ca2+ channels, and also by binding to CaM, tubulin, and retinoic acid receptors [67,106]. Moreover, melatonin regulates the Ca2+ signaling pathway via its ROS-scavenging activity [107]. Conclusions It is well documented that hypoxia is involved in tumor progression via various mechanisms, including the induction of cancer cell invasion and migration, tumor angiogenesis, and modification of cell metabolism. On the contrary, melatonin can act as an anti-tumor agent partly through the inhibition of hypoxia-induced pathways. Herein, we discussed the possible signaling pathways by which melatonin inhibits hypoxia-induced cancer cell survival, invasion, migration, and metabolism, as well as tumor angiogenesis. The accumulated data overwhelmingly support the idea that melatonin is an anti-cancer agent, independently or in combination with other chemotherapeutic agents. Considering melatonin's efficacy and safety, it should be considered as part of the therapeutic regimen to treat certain types of cancer. Additional studies would further clarify the mechanisms by which melatonin acts as an oncostatic agent, including the details of the outline proposed in this report. Acknowledgments: This study did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Conflicts of Interest: The authors declare no conflict of interest.
Analysis of long non-coding RNAs in glioblastoma for prognosis prediction using weighted gene co-expression network analysis, Cox regression, and L1-LASSO penalization

Purpose: This study focused on the identification of long non-coding RNAs (lncRNAs) for prognosis prediction of glioblastoma (GBM) through weighted gene co-expression network analysis (WGCNA) and an L1-penalized least absolute shrinkage and selection operator (LASSO) Cox proportional hazards (PH) model.

Materials and methods: WGCNA was performed based on RNA expression profiles of GBM from the Chinese Glioma Genome Atlas (CGGA), the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), and the European Bioinformatics Institute ArrayExpress for the identification of GBM-related modules. Subsequently, prognostic lncRNAs were determined using the LASSO Cox PH model, followed by constructing a risk scoring model based on these lncRNAs. The risk score was used to divide patients into high- and low-risk groups. The difference in survival between groups was analyzed using Kaplan-Meier survival analysis. lncRNA-mRNA networks were built for the prognostic lncRNAs, followed by pathway enrichment analysis for these networks.

Results: This study identified eight preserved GBM-related modules, including 188 lncRNAs. Consequently, C20orf166-AS1, LINC00645, LBX2-AS1, LINC00565, LINC00641, and PRRT3-AS1 were identified by the LASSO Cox PH model. A risk scoring model based on these lncRNAs was constructed that could divide patients into different risk groups with significantly different survival rates. The prognostic value of this six-lncRNA signature was validated in two independent sets. C20orf166-AS1 was associated with antigen processing and presentation and cell adhesion molecule pathways, involving nine common genes. LBX2-AS1, LINC00641, PRRT3-AS1, and LINC00565 were related to focal adhesion, extracellular matrix receptor interaction, and mitogen-activated protein kinase signaling pathways, which shared 12 common genes.

Conclusion: This prognostic six-lncRNA signature may improve prognosis prediction of GBM. This study reveals many pathways and genes involved in the mechanisms behind these lncRNAs.

Introduction

Glioblastoma (GBM), grade IV glioma, is the most common and aggressive type of brain cancer, characterized by high morbidity, high mortality, and dismal prognosis. 1,2 Reportedly, the median survival of patients with newly diagnosed GBM is approximately 15 months. 3 Despite the development of medical interventions such as surgical resection, radiological therapy, and chemotherapy, the survival rate has remained largely unchanged over the past years. 4 Long non-coding RNAs (lncRNAs) are defined as transcripts greater than 200 nucleotides that do not code for proteins. 5 With the development of genome-wide expression profiling, a huge number of novel lncRNAs have been discovered. These lncRNAs are known to play key roles in a broad range of biological processes such as cell differentiation, human diseases, and tumorigenesis. 6 Unraveling the potential roles of lncRNAs in GBM has emerged as a leading edge of GBM research. 7 For instance, Han et al 8 revealed that ASLNC22381 and ASLNC2081 may engage in recurrence and progression of GBM through lncRNA and mRNA profiling. In addition, Zhang et al 9 reported a set of lncRNAs that have prognostic value for GBM through lncRNA bioinformatics analysis in The Cancer Genome Atlas (TCGA).
Moreover, a recent study identified an immune-related lncRNA signature for prognostic prediction based on TCGA data of GBM patients. 10 Despite these valuable findings, the majority of lncRNAs in GBM remain poorly understood. In comparison with previous studies that identified prognostic lncRNA signatures based on limited microarray data from TCGA, 9,10 we carried out a comprehensive analysis of all publicly available gene expression data of GBM from the Chinese Glioma Genome Atlas (CGGA), the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), and the European Bioinformatics Institute (EBI) ArrayExpress repositories through a series of bioinformatics approaches. We searched for GBM-related key modules through weighted gene co-expression network analysis (WGCNA). Based on the lncRNAs contained in these key modules, we acquired a panel of lncRNAs as prognostic biomarkers by univariate Cox regression analysis, in combination with a Cox proportional hazards (PH) model based on L1-penalized least absolute shrinkage and selection operator (LASSO) estimation. Subsequently, a prognostic scoring system was constructed based on these prognostic lncRNAs to evaluate the risk of death due to GBM. In addition, lncRNA alterations in GBM compared to normal samples were analyzed using the metaDE method. Furthermore, pathway enrichment analysis using Gene Set Enrichment Analysis (GSEA) was conducted to give some insight into the underlying mechanisms of these predictive lncRNAs.

Data resource

The data sets in this study were derived from three sources. First, the gene expression data of 325 glioma samples, named "Part D", 11 were downloaded from the CGGA (http://cgga.org.cn/), including 144 GBM samples that were selected as the training set in this study (platform: Illumina HiSeq 2000 RNA Sequencing). Survival information was available for 138 patients with GBM, of whom 92 were dead and 46 were alive, with a median survival time of 13.22±11.44 months. Second, the NCBI GEO (http://www.ncbi.nlm.nih.gov/geo/) and EBI ArrayExpress (https://www.ebi.ac.uk/arrayexpress/) repositories were searched for published human GBM data sets with no fewer than 40 samples. As a result, three data sets, GSE51062, GSE36245, and E-TABM-898, were obtained, including 52 samples, 46 samples, and 56 samples, respectively. The platform for all three data sets was Affymetrix-GPL570. We also searched NCBI GEO and EBI ArrayExpress for human GBM data sets that had no fewer than 50 samples and available survival information. Two data sets meeting the criteria, GSE74187 (n=60) and GSE83300 (n=50), were included in this study. The platform for both of them was Agilent-014850. In addition, we needed human GBM gene expression data sets that contained both GBM samples and paired normal tissue samples, with a total number of samples greater than 40. Through exploring NCBI GEO and EBI ArrayExpress, GSE22866 (including 40 GBM samples and six normal samples; platform: Affymetrix-GPL570), GSE50161 (including 34 GBM samples and 13 normal samples; platform: Affymetrix-GPL570), and GSE4290 (including 77 GBM samples and 23 normal samples; platform: Agilent-014850) were acquired. Third, an RNA-seq data set comprising 154 GBM samples and 18 normal samples was downloaded from TCGA (https://gdc-portal.nci.nih.gov/). There were 152 samples with available survival information, including 102 dead and 50 alive samples.
Data preprocessing

For the data sets from the Affymetrix-GPL570 platform, raw data (CEL files) were background corrected and normalized 12 using the oligo package (version 1.41.1, http://www.bioconductor.org/packages/release/bioc/html/oligo.html) in R language (version 3.4.1). With respect to the data sets from the Agilent-014850 platform, raw data (TXT files) underwent log2 transformation to yield an approximately normal distribution with the limma 13 software (version 3.34.0, https://bioconductor.org/packages/release/bioc/html/limma.html), followed by standardization using the median method. CGGA and TCGA data were subject to quantile normalization using the preprocessCore package. 14 Next, according to platform annotation files, the probes in all data sets that had a RefSeq transcript ID and were annotated as non-coding RNA in the RefSeq database were chosen. Moreover, the platform sequencing data were aligned to the human genome (GRCh38 version) using Clustal2 (http://www.clustal.org/clustal2/). 15 The acquired lncRNAs, combined with the annotated lncRNAs in the RefSeq database, 16 were extracted for further analysis.

WGCNA

The WGCNA package (version 1.61, https://cran.r-project.org/web/packages/WGCNA/index.html) 17 was applied to build a weighted gene co-expression network to mine GBM-related preserved modules. For this network analysis, the CGGA data were used as the training set, while GSE51062, GSE36245, and E-TABM-898 served as validation sets. Initially, comparability between the four sets was assessed using correlation analysis. The network was constructed in accordance with a previous study. 18 Briefly, using the scale-free topology criterion, the soft threshold power β was established, through which the weighted adjacency matrix was developed. Modules with size ≥150 and a minimum cut height of 0.99 were selected using the dynamic tree cut algorithm, and the preserved modules were determined using the module preservation function of the WGCNA package. In addition, the possible biological functions of the significantly preserved modules were studied using the userListEnrichment function of the WGCNA package.

Selection of prognosis-related lncRNAs

Based on the lncRNAs in the preselected preserved WGCNA modules and the corresponding survival information, univariate Cox regression analysis was used to identify the lncRNAs that were significantly correlated with prognosis (log-rank P<0.05) using the survival package (version 2.4, https://cran.r-project.org/web/packages/survival/index.html) in R language (version 3.4.1). 19

Construction of a prognosis scoring model based on lncRNAs

The identified prognosis-related lncRNAs were used to fit a Cox PH model based on LASSO estimation 20 to select the optimal panel of prognostic lncRNAs. The optimal value for the penalization coefficient lambda was selected by running cross-validation likelihood (cvl) 1,000 times. Subsequently, the Cox PH coefficients and expression levels of these prognostic lncRNAs were extracted to calculate the risk score as a measure of survival risk for each patient using the following formula (reconstructed here from the surrounding definitions):

Risk score = Σ_n (β_lncRNAn × expr_lncRNAn)

where β_lncRNAn represents the Cox PH coefficient of lncRNAn and expr_lncRNAn represents the expression level of lncRNAn. All samples in the CGGA set were dichotomized into high- and low-risk groups by risk score, with the median risk score as the threshold.
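To make the risk scoring step concrete, the following is a minimal sketch in Python (not the authors' original R code) of how the coefficients of a LASSO-selected Cox model could be combined with expression values into per-patient risk scores and a median split. The coefficient values below are placeholders, not the values reported in Table 2.

```python
import numpy as np

# Hypothetical Cox PH coefficients for the selected lncRNAs (placeholders,
# not the values reported in Table 2 of the paper).
coefficients = {
    "C20orf166-AS1": 0.42, "LINC00645": -0.31, "LBX2-AS1": 0.27,
    "LINC00565": 0.18, "LINC00641": -0.22, "PRRT3-AS1": 0.35,
}

def risk_score(expression):
    """Risk score = sum over lncRNAs of (Cox coefficient x expression level)."""
    return sum(coefficients[g] * expression[g] for g in coefficients)

# Toy expression data: one dict of lncRNA expression values per patient.
rng = np.random.default_rng(0)
patients = [{g: rng.normal() for g in coefficients} for _ in range(138)]
scores = np.array([risk_score(p) for p in patients])

# Dichotomize at the median risk score, as done for the CGGA training set.
median = np.median(scores)
groups = np.where(scores > median, "high-risk", "low-risk")
print(dict(zip(*np.unique(groups, return_counts=True))))
```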
Then, three independent sets with concomitant survival information (the TCGA set, GSE74187, and GSE83300) were utilized to evaluate the effectiveness and robustness of the abovementioned risk scoring model. As mentioned above, the three data sets contained all available GBM data with survival information in TCGA, NCBI GEO, and EBI ArrayExpress. In the same manner, samples in each set were categorized by risk score into predicted high- and low-risk groups. The survival difference between risk groups in each set was analyzed using the Kaplan-Meier curve in combination with the Wilcoxon log-rank test.

Pathway enrichment analysis

We built lncRNA-mRNA networks with the selected prognostic lncRNAs and their correlated mRNAs in the WGCNA modules. GSEA is a powerful approach for annotating gene expression data, characterized by its focus on gene sets with a common biological function, chromosomal location, or regulation (http://software.broadinstitute.org/gsea/index.jsp). 23 We performed pathway enrichment analysis for the lncRNA-mRNA networks using GSEA. Pathways with a nominal (NOM) P-value <0.05 were considered significant. GSEA-enriched results were shown by the normalized enrichment score (NES), which was calculated as previously described. 24

Results

WGCNA co-expression network construction and module mining

Correlation analyses between any two of the four data sets, the CGGA data set (training set), GSE51062, GSE36245, and E-TABM-898 (validation sets), were performed. As shown in Figure 1, the correlation coefficients ranged from 0.5 to 1 (P-values <1e-200), suggesting that the expression of common RNAs among the four data sets was consistent. Initially, the WGCNA network of RNAs was built for the training set (CGGA set). According to the scale-free topology criterion, the soft threshold power β was set to 5, at which the scale-free topology model fit reached R²=0.9. The phylogenetic tree mined nine co-expression modules (module size ≥50; cut height ≥0.99) in the network (Figure 2A). As shown by the color bands underneath the phylogenetic tree, the nine modules were represented by branches of different colors (M1, black; M2, blue; M3, brown; M4, green; M5, gray; M6, pink; M7, red; M8, turquoise; M9, yellow). Moreover, these modules were validated in E-TABM-898, GSE51062, and GSE36245 (Figure 2B-D). In the three validation sets, genes were colored in the same manner as in the training set. As can be seen from multidimensional scaling (MDS) of the gene expression data of the nine modules (Figure 3A), genes in the yellow and red modules showed similar expression, and genes in the brown and black modules exhibited similar expression. Hierarchical clustering analysis of the modules found that the yellow and red modules were on the same branch (Figure 3B). These observations illustrate that the yellow and red modules possess similar gene expression patterns. Module preservation analysis found that among the nine modules, eight modules had a Z-score >5 (Table 1). The eight modules were ranked in descending order of Z-score. The top three modules were the yellow module (Z-score=34.5011), the red module (Z-score=34.3040), and the black module (Z-score=24.5504), which were highly overlapped across all data sets. This observation indicates that the three modules may provide important information concerning the pathological mechanisms of GBM.
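As a rough illustration of the scale-free topology criterion used earlier in this subsection to pick the soft threshold power β (the original analysis used the R WGCNA package; this Python sketch only mimics the idea on random data), one can raise the absolute co-expression correlations to candidate powers and check how well the resulting connectivity distribution fits a power law:

```python
import numpy as np

rng = np.random.default_rng(1)
expr = rng.normal(size=(60, 200))          # toy matrix: 60 samples x 200 genes
corr = np.abs(np.corrcoef(expr.T))         # absolute co-expression correlations

def scale_free_fit(corr, beta, n_bins=10):
    """R^2 of log10(freq(k)) vs log10(k) for the soft-thresholded network."""
    adj = corr ** beta                      # soft-threshold adjacency
    np.fill_diagonal(adj, 0.0)
    k = adj.sum(axis=0)                     # weighted connectivity per gene
    hist, edges = np.histogram(k, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = (hist > 0) & (centers > 0)
    x, y = np.log10(centers[mask]), np.log10(hist[mask])
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Pick the smallest beta whose fit R^2 exceeds the target (0.9 in the paper).
for beta in range(1, 12):
    print(beta, round(scale_free_fit(corr, beta), 3))
```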
With regard to functional annotation, the yellow module (84 lncRNAs) was related to biological adhesion, the red module (26 lncRNAs) was associated with immune response, and the brown module (eight lncRNAs) was possibly involved in synaptic transmission (Table 1).

Identification of prognosis-related lncRNAs

There were 188 lncRNAs in the eight overlapped WGCNA modules. Based on the survival information of the CGGA set, 32 lncRNAs were identified as significantly correlated with prognosis. As shown in Figure 4, among the 32 prognosis-related lncRNAs, 11 were in the yellow module, eight in the red module, and eight in the turquoise module. As aforementioned, the yellow and red modules had similar gene expression patterns. Moreover, the two modules were functionally related to biological adhesion and immune response, which are critical for GBM pathogenesis. 25,26 Therefore, the 19 lncRNAs in the yellow and red modules were selected for further analysis.

Development of a six-lncRNA prognostic scoring system

Expression levels of the 19 lncRNAs in the yellow and red modules were used as input for the LASSO Cox PH model. When the cvl was maximized at -466.2711, the optimal lambda value was 18.0151. As a result, a panel of six lncRNAs was selected as predictive factors for survival, including C20orf166-AS1, LINC00645, LBX2-AS1, LINC00565, LINC00641, and PRRT3-AS1 (Table 2). For predicting each individual patient's survival probability, a risk score was calculated for each patient with the formula defined above, using the Cox PH coefficients of the six lncRNAs.

Prediction of overall survival (OS) of GBM patients

The aforementioned lncRNA-based risk scoring system was applied to the CGGA set. With the median risk score as the cutoff, all patients in the CGGA set were categorized into a high-risk group (n=69) and a low-risk group (n=69). The results showed that the low-risk group had significantly longer OS compared to the high-risk group (16.61±14.22 months vs 9.83±6.17 months, log-rank P=0.000127; Figure 5A). The predictive capability of this prognostic scoring system was tested in the TCGA set, GSE74187, and GSE83300, with risk scores and risk group categories derived similarly for each of them. As shown in Figure 5B, for the TCGA set (n=152), a notably better survival was observed in the low-risk group compared to the high-risk group (14.93±12.54 months vs 9.19±6.65 months, log-rank P=0.0001195). Consistent results were also found for GSE74187 (n=60; 22.47±10.14 months vs 15.83±10.11 months, log-rank P=0.02568; Figure 5C). For GSE83300, the low-risk group had a longer OS compared to the high-risk group, with a marginally significant difference (log-rank P=0.09198; Figure 5D). This may be attributed to the relatively small sample size (n=50) of GSE83300. These findings offer strong evidence for the prognostic power of the six-lncRNA prognostic scoring system.

Establishment of lncRNA-mRNA networks

To explore the relationships between the six prognostic lncRNAs and the genes in the yellow and red modules, lncRNA-mRNA networks were constructed for the two modules, respectively (Figure 7A and B). For the red module, the lncRNA-mRNA network was composed of two lncRNAs (C20orf166-AS1 and LINC00645) and 206 genes, of which five were downregulated DERs and 72 were upregulated (Figure 7A). For the yellow module, the lncRNA-mRNA network contained four lncRNAs (LBX2-AS1, LINC00641, PRRT3-AS1, and LINC00565) and 217 genes, of which four were downregulated DERs and 97 were upregulated (Figure 7B).
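For readers who want to reproduce this style of survival comparison (median risk-score split, Kaplan-Meier curves, log-rank test), here is a minimal sketch using the Python lifelines library, assuming it is installed; the paper's own analysis was done in R, and the follow-up times and event indicators below are simulated, not the study data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
# Simulated follow-up times (months) and death indicators for two risk groups.
t_low = rng.exponential(16.6, size=69);  e_low = rng.random(69) < 0.6
t_high = rng.exponential(9.8, size=69);  e_high = rng.random(69) < 0.7

# Kaplan-Meier curve per risk group.
kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="low-risk")
print(kmf.median_survival_time_)
kmf.fit(t_high, event_observed=e_high, label="high-risk")
print(kmf.median_survival_time_)

# Log-rank test for the difference in survival between the groups.
result = logrank_test(t_low, t_high,
                      event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank P = {result.p_value:.5f}")
```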
Discussion

Increasing evidence indicates that a growing number of lncRNAs are associated with various cancer types. 27 This discovery has led to a growing interest in the study of lncRNAs in GBM. Based on the gene expression data of GBM from CGGA, NCBI GEO, and EBI ArrayExpress, we identified a prognostic signature of six lncRNAs (C20orf166-AS1, LINC00645, LBX2-AS1, LINC00565, LINC00641, and PRRT3-AS1) through a combination of WGCNA, univariate Cox regression analysis, and the LASSO PH model. (Figure 7 legend: a round node stands for a gene, while a square node stands for an lncRNA; a regular triangle represents an upregulated gene, while an inverted triangle represents a downregulated gene; a green or red link signals a negative or positive association, respectively, between two nodes. Abbreviation: lncRNA, long non-coding RNA.) Moreover, a six-lncRNA-based risk scoring system was constructed and was capable of classifying GBM patients into two risk groups with significantly different survival rates. The prognostic performance of the risk scoring model was successfully validated in two independent sets. This indicates that the six lncRNAs are promising prognostic biomarkers for GBM and may play important roles in the tumorigenesis of GBM. LINC00645 is an endometrial cancer-specific lncRNA. 28 Emerging studies have shown that C20orf166-AS1 is aberrantly expressed in prostate cancer and bladder cancer. 42,43 However, the involvement of LINC00645 and C20orf166-AS1 in GBM has not been reported yet. In the present study, C20orf166-AS1 was identified as an important lncRNA of prognostic value for GBM. Moreover, pathway enrichment analysis showed that C20orf166-AS1 was significantly related to the antigen processing and presentation and cell adhesion molecule (CAM) pathways. HLA-DRB1 and HLA-DQB1 are major histocompatibility complex class II molecules that are mainly expressed on antigen-presenting cells and play an important role in the immune response. The protein encoded by the CD2 gene is a CAM located on the surface of T cells and NK cells, and it acts as a specific marker for these cells. 29 SIGLEC1 protein is a member of the siglecs, which are predominately expressed on the surface of immune cells and bind to glycans containing sialic acids. 30 Interactions between siglecs and glycans are implicated in cell adhesion and cell signaling. These findings suggest that C20orf166-AS1 might participate in immune response and cell adhesion in GBM through the regulation of these genes in the antigen processing and presentation and CAM pathways. Recent studies report that upregulation of LBX2-AS1 has been observed in lung cancer. 31,32 Interestingly, LBX2-AS1 is significantly upregulated with increasing tumor grade in GBM, 33 suggesting that this lncRNA probably has an important regulatory role in GBM prognosis. Alterations of LINC00641, PRRT3-AS1, and LINC00565 in cancer have been scarcely reported. The current study provided evidence that these four lncRNAs have predictive value for the survival of GBM patients. Notably, the study uncovered that they were significantly linked to the focal adhesion, ECM receptor interaction, and MAPK signaling pathways. These pathways involved 12 common genes, including LAMB1, COL5A2, TGFB1, ITGA5, PDGFRB, TNFRSF12A, DUSP6, LAMC1, LAMC3, TNFRSF1A, and MYL9. Increasing evidence has established that the MAPK pathway is involved in regulating GBM cell migration and proliferation. 34,35 LAMB1, LAMC1, and LAMC3 are members of the ECM glycoproteins. COL5A2 encodes an alpha chain of fibrillar collagen, a major component of ECM proteins. 36 TGF-β1 is a member of the TGF-β superfamily and plays a role in the regulation of growth, proliferation, and differentiation of glioma cells.
37 Integrin alpha-5, the protein encoded by ITGA5, belongs to the integrin alpha chain family, which is critical for cell adhesion. 38 Recently, it was found that PDGFRB is elevated in GBM microvascular proliferation compared to GBM tumor cells, with PDGFRB protein selectively expressed in pericytes. 39 The DUSP6 protein belongs to the dual-specificity protein phosphatase subfamily, which acts as a negative regulator of MAPK members. 40 Besides, it has been found that DUSP6 is upregulated in GBM and promotes the development of GBM. 41 These results imply that the involvement of LBX2-AS1, LINC00641, PRRT3-AS1, and LINC00565 in GBM may occur through the focal adhesion, ECM receptor interaction, and MAPK signaling pathways. These common genes might be potential therapeutic targets for GBM.

Conclusion

Based on the comprehensive analysis of publicly accessible GBM data in CGGA, NCBI GEO, and EBI ArrayExpress, this study identifies a novel six-lncRNA signature for GBM prognostic prediction. This study also highlights the pathways and genes involved in the regulatory mechanisms underlying these prognostic lncRNAs. Further studies are warranted prior to the application of this lncRNA signature in clinical practice.

Availability of data and material

The raw data were collected and analyzed by the authors, and they are not ready to share the data because the data have not been published.

Author contributions

RL and YQZ participated in the design of this study, and they both performed the statistical analysis. GZ and BZ carried out the study and collected important background information. HZ and MW drafted the manuscript. All authors read and approved the final manuscript. All authors contributed toward data analysis, drafting and revising the paper, and agree to be accountable for all aspects of the work.

Disclosure

The authors report no conflicts of interest in this work.
Relative performance of students by gender in public examinations (Biology): A case of selected urban secondary schools in Benue State, Nigeria

Relative performance of students by gender in public examinations was assessed using questionnaires and a standardized test administered to one hundred and eighty students from selected urban secondary schools in Benue State. A causal-comparative design and a correlational design were used, and the results showed that the socio-economic status of parents affects the performance of their children/wards (students), and that there is no significant difference in the level of performance between boys and girls in public examinations. Parents should be enlightened through seminars and workshops on the importance of educating themselves and their children/wards. Governments, non-governmental organizations (NGOs), and individuals should establish more schools to expand education for both sexes. These were the suggestions advanced.

INTRODUCTION

Students are mandated to register for, write, and pass external examinations with the regulatory bodies saddled with the responsibility of conducting examinations in the West African region and in Nigeria in particular. Kanno [2000] opined that students' performance in public examinations, especially the senior secondary school certificate examinations conducted by WAEC and NECO, is one criterion for measuring and establishing the effectiveness of the Nigerian secondary school system. Gender roles have stirred up many issues in the imagination of people in society. The word gender, when used in grammar, simply refers to the grammatical grouping of words like nouns and pronouns into masculine, feminine, and neuter classes. But the meaning of the word has changed since the women's liberation movement, especially the Fourth World Conference on Women held in Beijing, China in 1995 [Orhungur, Agbe, and Egbe-Okpenge, 2003]. Studies have established a relationship between access to education, particularly for women, and an increase in the level of development [UNESCO, 2003]. This shows that the higher the level of women's educational status, the more developed the nation will be. This supports the saying that "if you educate a man, you educate an individual, but if you educate a woman, you educate a nation" [Adeyemo, 2014]. The population of women in Nigeria, as revealed by the population census report [2006], is 68,293,633, with 70% of them being illiterate. The EFA Global Monitoring Report [2008] also revealed that more than 22 million people in Nigeria are illiterate and 65% of them are women. The high level of illiteracy among women has been attributed to cultural, religious, social, and economic factors. Candidates need credits in five subjects, including Mathematics and English Language, to gain admission into many of the tertiary institutions in the country. The recent fall in academic performance in secondary schools in Benue State is a matter of concern to parents, administrators, and school counselors. This fall in the standard of education is evident in the poor academic performance released by NECO/WAEC on written and oral examinations taken by secondary and even tertiary students [Omole, 2001]. This paper aims to assess whether gender affects the performance of students in public examinations (Biology) in some secondary schools of Benue State.

Area of study

The area of study covered students from post-secondary schools within the three geopolitical zones (A, B, and C) of Benue State.
Benue State lies between latitude 6°26′00″N and 8°08′00″N and between longitude 7°30′00″E and 9°54′00″E, as shown in Figure 1.

Method of data collection

The researcher visited the sample schools personally to seek official approval and subsequently administer the questionnaires and standardized test to the sampled students. The data collected were handled properly, and none were lost in transit. The research design is causal-comparative, since the independent variable (gender) cannot be manipulated.

Statistical Analysis

Data collected were analyzed using the Pearson Product Moment Correlation Coefficient to determine the relationship between parents' socio-economic status and students' performance. The chi-square (χ²) test was used to determine the relative performance of students by gender in public examinations (Biology). At a probability level of 0.05, the critical value of χ² = 3.84 at 1 degree of freedom, (2−1)(2−1) = 1.

RESULTS AND DISCUSSION

The calculated value of 30.5 is greater than the critical value of 3.84, so the null hypothesis is rejected. This implies that there is a significant difference between parents' socio-economic status and students' performance. At a probability level of 0.05, the critical value of χ² = 3.84 at 1 degree of freedom, (2−1)(2−1) = 1. The calculated value of 36.0 is greater than the critical value of 3.84, so the null hypothesis is rejected and the alternative hypothesis is accepted. This implies that there is a significant difference between parents' socio-economic status and students' performance in public examinations (Biology). At a probability level of 0.05, the critical value of χ² = 3.84 at 1 degree of freedom, (2−1)(2−1) = 1. The calculated value of 57.7 is greater than the critical value of 3.84, so the null hypothesis is rejected and the alternative hypothesis accepted. This implies that there is a significant difference between parents' socio-economic status and students' performance in public examinations. The calculated value of 0.10 is less than the tabulated value of 0.361 at the P<0.05 level of significance, so the null hypothesis that the performance levels of boys and girls have no significant relationship in public examinations (Biology) is accepted. To ascertain the significance of the r-value, the calculated t-value is 1.67 with 28 d.f. at the 0.05 significance level; the critical value is 1.701, so the calculated value is low enough to accept the null hypothesis. The calculated value of 0.30 is less than the tabulated value of 0.361 at the P<0.05 level of significance, so the null hypothesis that the performance levels of boys and girls have no significant relationship in public examinations (Biology) is accepted. The calculated value of 0.13 is less than the tabulated value of 0.361 at the P<0.05 level of significance, so the null hypothesis that the performance levels of boys and girls have no significant relationship in public examinations (Biology) is accepted. To ascertain the significance of the r-value, the calculated t-value is 0.5 with 28 d.f. at the 0.05 significance level; the critical value is 1.701, so the calculated value is low enough to accept the null hypothesis. Research on gender differences in educational achievement has been of considerable interest in education for many years [Becker, 1987].
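As an illustration of the chi-square decision rule applied above (a sketch with made-up counts, not the study's actual contingency tables, assuming the Python scipy library is available):

```python
from scipy.stats import chi2, chi2_contingency

# Hypothetical 2x2 contingency table (pass/fail by high/low parental SES);
# these counts are illustrative, not the study's data.
table = [[52, 8],
         [31, 29]]

stat, p_value, dof, expected = chi2_contingency(table)
critical = chi2.ppf(1 - 0.05, df=dof)   # 3.84 for 1 degree of freedom

print(f"chi-square = {stat:.2f}, critical = {critical:.2f}, P = {p_value:.4f}")
# Decision rule used in the paper: reject the null hypothesis when the
# calculated chi-square exceeds the critical value.
print("reject H0" if stat > critical else "fail to reject H0")
```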
Findings on parents' socio-economic status and students' performance in public examinations (NECO/WAEC, 2004-2006) all indicate that the null hypothesis, which states that there is no significant difference between the socio-economic status of parents and students' performance, was rejected and the alternative hypothesis accepted. This shows that there is a significant difference between the socio-economic status of parents and students' performance in public examinations (Biology). Omole [2001] alluded to the fact that the economic position of parents largely determines their ability to provide adequate education for their children, based on their economic capabilities or status. Wealthy and elite parents send their children to private schools. These people believe that private schools prepare their students better for achievement on any instructional programme, which results in the general assumption in the larger society that private schools have better academic performance and standards than public or government-owned schools. The lack of exposure to letters of the alphabet by school entry among low socio-economic status (SES) children delays their ability to acquire foundation-level literacy [Duncan and Seymour, 2000]. Results from Tables 1, 2, and 3 indicate that students whose parents are literate and government-employed had the highest scores, while students whose parents are illiterate and government-employed had the lowest scores. This agrees with the study of Arnold [1994], which found that children with the most educated parents (who had degree-level or above qualifications) were on average about 12-13 months ahead of those with the least educated parents (who had no qualifications). Findings from Tables 4, 5, and 6 on the level of performance between boys and girls in public examinations (Biology) in 2004, 2005, and 2006, respectively, indicate that the performance of boys and girls shows no significant difference in public examinations. Similarly, UNESCO [1999] agrees that the performance of males may not significantly differ from that of females, but that the roles assigned to women have been keeping them away from science and mathematics, not any brain deficiency. These findings disagree with Agbe [2001], who reported that sex differentials exist in education in favor of males. The results for parents' socio-economic status and students' performance (NECO/WAEC, 2004-2006) can be summarized as follows: In 2004 (Table 1), out of 60 students examined, students whose parents are literate and government-employed were 28 and obtained the highest score of 442; 9 of them were students of literate and self-employed parents, with the second score of 140; and 5 students of illiterate and government-employed parents had the lowest score of 66. This finding disagrees with the work of Aleile-Williams [1992], who asserted that when economic constraint is an intervening variable, the education of children of the same parents is in most cases in favour of boys, and with Okogie [1995], whose study on the gender gap in access to education in Nigeria indicated that when there is financial stress in a family, boys are usually given preference over girls in all matters of schooling.
Table 2 presents the analysis of parents' socio-economic status and students' performance (NECO/WAEC 2005): out of 60 students examined, 26 students of literate and government-employed parents obtained the highest score of 270; 7 students of illiterate and self-employed parents obtained a score of 30; and 1 student of illiterate and government-employed parents obtained the lowest score of 14. Table 3 presents the analysis of parents' socio-economic status and students' performance (NECO/WAEC 2006): out of 60 students examined, 28 students of literate and government-employed parents obtained the highest score of 182; 4 students of literate and self-employed parents obtained a score of 88; and 3 students of literate and government-employed parents obtained the lowest score of 43. The results show that the calculated value is greater than the tabulated value, so the null hypothesis is rejected and the alternative hypothesis is accepted. Thus there is a significant difference between parents' socio-economic status and students' performance. Table 4 shows the correlation coefficient between the performance levels of boys and girls in public examinations (Biology, NECO/WAEC 2004). The critical value of the correlation for a two-tailed test at the 0.05 level of significance with 28 degrees of freedom is 0.361. Therefore, since the calculated value of 0.304 is less than the critical value of 0.361, the null hypothesis is accepted and the alternative hypothesis rejected. Table 5 shows the correlation coefficient between the performance levels of boys and girls in public examinations (Biology, NECO/WAEC 2005). The critical value of the correlation for a two-tailed test at the 0.05 level of significance with 28 degrees of freedom is 0.361. Therefore, since the calculated value of 0.30 is less than the critical value of 0.361, the null hypothesis is accepted and the alternative hypothesis rejected. Table 6 shows the correlation coefficient between the performance levels of boys and girls in public examinations (Biology, NECO/WAEC 2006). The critical value of the correlation for a two-tailed test at the 0.05 level of significance with 28 degrees of freedom is 0.361. Therefore, since the calculated value of 0.13 is less than the critical value of 0.361, the null hypothesis is accepted and the alternative hypothesis rejected. These results show that there is no significant difference between the performance of boys and girls in public examinations (Biology), a positive step toward the global focus of eliminating gender disparities in primary and secondary education by 2005 and achieving gender equality in education by 2015, with a focus on ensuring girls' full and equal access to, and achievement in, basic education of good quality [UNESCO, 2002].

CONCLUSION

Parents' socio-economic status plays a significant role in the performance of secondary school students in public examinations (NECO/WAEC) in Biology. Students from parents of higher socio-economic status performed better than those from a lower socio-economic level. Sex or gender difference between boys and girls has no impact on the performance of students in public examinations (Biology) at the secondary school level. It is imperative that the government employ well-qualified personnel and equip public secondary schools, which students of parents with low socio-economic status could attend, thus bridging the wide gap between the privileged and less privileged students vis-à-vis performance in public examinations (Biology).
A Survey on Graph Classification and Link Prediction based on GNN

Traditional convolutional neural networks are limited to handling Euclidean-space data, overlooking the vast realm of real-life scenarios represented as graph data, including transportation networks, social networks, and citation networks. The pivotal step in transferring convolutional neural networks to graph data analysis and processing lies in the construction of graph convolution operators and graph pooling operators. This comprehensive review article delves into the world of graph convolutional neural networks. Firstly, it elaborates on the fundamentals of graph convolutional neural networks. Subsequently, it elucidates graph neural network models based on attention mechanisms and autoencoders, summarizing their application to node classification, graph classification, and link prediction, along with the associated datasets.

I. Introduction

The characteristic of deep learning is the stacking of multiple layers of neural networks, resulting in better representation-learning ability. The rapid development of convolutional neural networks (CNNs) has taken deep learning to a new level [1,2]. The translation invariance, locality, and compositionality of CNNs make them naturally suitable for processing Euclidean-structured data such as images [3,4]; at the same time, they can also be applied to various other fields of machine learning [5-7]. The success of deep learning partly stems from the ability to extract effective data representations from Euclidean data for efficient processing. Another reason is that, thanks to the rapid development of GPUs, computers have powerful computing and storage capabilities that allow deep learning models to be trained on large-scale data sets, which makes deep learning perform well in natural language processing [8], machine vision [9], recommendation systems [10], and other fields. However, existing neural networks can only process conventional Euclidean-structured data. As shown in Figure 1(a), Euclidean data structures are characterized by fixed arrangement rules and node orders, such as 2D grids and 1D sequences. Currently, more and more practical application problems must consider non-Euclidean data. As shown in Figure 1(b), nodes in non-Euclidean data structures do not have fixed arrangement rules and orders, which makes it difficult to directly transfer traditional deep learning models to tasks dealing with non-Euclidean-structured data. If a CNN is applied directly, it is difficult to define convolution kernels on non-Euclidean data, because the number and ordering of neighboring nodes are not fixed, which violates translation invariance. Early research on graph neural networks (GNNs) therefore focused on how to fix the number of neighboring nodes and how to sort and expand them, as in the PATCHY-SAN [11], LGCN [12], and DCNN [13] methods. After completing these two tasks, non-Euclidean-structured data are transformed into Euclidean-structured data, which can then be processed using CNNs. A graph is a typical non-Euclidean structure with points and edges, and in practice various non-Euclidean data problems can be abstracted into graph structures. For example, in transportation systems, graph-based learning models can effectively predict road condition information [14].
In computer vision, the interaction between humans and objects can be viewed as a graph structure and effectively recognized [15]. Recently, some scholars have reviewed graph neural networks and their branch, graph convolutional neural networks [16-18]. The difference in this article is that it focuses on the methods and models of graph neural networks for node classification and link prediction in citation networks. In citation networks, a typical classification task is, given the content information of each article and the citation relationships between articles, to classify each article into its corresponding domain. For example, in a semi-supervised node classification scenario, the attribute information of a node includes the title or abstract of the article, while the citation relationships between nodes form the network information. Given a small amount of labeled data, the domain to which each node in the network belongs is determined through deep learning. In this task, GCN effectively models the node text attributes and the citation network structure, achieving great success. Compared to graph convolutional neural network algorithms represented by GCN, methods that directly use only content information (such as MLP), only structural information (such as DeepWalk [19]), or traditional semi-supervised node classification on graphs, such as Planetoid [20], have much lower classification accuracy. Among them, the Graph Attention Network (GAT) [21] performs better than the Planetoid model on classic citation network datasets. Therefore, this task is often seen as a benchmark for measuring the effectiveness of a graph convolutional neural network model. GCN [22], GAT [21], and GWNN [23] all used citation network classification tasks to verify the effectiveness of their models.

II. Graph Neural Network

A. Graph Structure Class

1) Edge Information Graph: In recent years, the concept of the edge information graph has gained considerable attention in the field of graph theory. An edge information graph is defined as a graph structure in which different edges possess distinct structural characteristics. These characteristics may include the weight, direction, and heterogeneous relationships between nodes. For example, consider the complex structure of a social network graph. The relationships between nodes within this graph may take on a variety of forms, ranging from unidirectional "follow" relationships to bidirectional "friendship" relationships. Due to the complexity of such relationships, they cannot be adequately represented by simple weight constraints. This highlights the importance of considering the full range of structural edge information in the analysis of graphs with complex relationships.

2) Spatio-Temporal Graph: A Spatio-Temporal graph is a type of property graph. Its characteristic is that the feature matrix X in the high-dimensional feature space f* changes with time. This structure is represented as G* = (V, E, A, X), where V, E, and A denote the vertices, edges, and adjacency matrix, respectively. With the introduction of time series, the graph structure can effectively manage tasks that require handling of dynamic and temporal relationship types. Yan et al. [24] presented a method for skeleton motion detection based on Spatio-Temporal graph convolutional neural networks.
B. Convolution Graph Neural Network

Graph convolutional neural networks can be divided into two categories in terms of feature space: the frequency (spectral) domain and the spatial domain. A graph convolutional neural network maps the data G = (V, E) of the original graph structure to a new feature space: f: G → f*. Taking a single-layer forward-propagation graph convolutional neural network as an example, the weights of layer l of the network are denoted by W^(l). When computing each node v_i in the graph structure, the output H^(l+1) of each layer of the neural network can be expressed by a nonlinear function f(·, ·), where A is the adjacency matrix. With the nonlinear activation function σ(·) (ReLU), the graph convolutional neural network is implemented by the following layer-wise propagation rule (a small code sketch of this rule is given at the end of this subsection):

H^(l+1) = f(H^(l), A) = σ(D̂^(-1/2) Â D̂^(-1/2) H^(l) W^(l))

where Â = A + I denotes the adjacency matrix of the graph G = (V, E) with added self-connections, I denotes the identity matrix, D̂_ii = Σ_j Â_ij denotes the corresponding diagonal degree matrix, and W^(l) denotes the weight matrix of layer l of the convolutional neural network. Through this layer-wise propagation rule, the graph convolutional neural network introduces the local parameter-sharing characteristic of convolutional neural networks into the graph structure, so that the receptive field of each node grows with the number of propagation layers, allowing more information to be gathered from neighboring nodes. Based on existing GNN structures, a general GNN structure flowchart can be summarized, as shown in Figure 2.

Figure 2: General structure of graph neural networks

C. Spatio-Temporal Graph Neural Network

As an attribute graph network, the Spatio-Temporal graph neural network introduces the characteristics of time series. It can simultaneously obtain feature information from the time and space domains of the graph structure, and the features of each node change over time. We mainly discuss Spatio-Temporal graph neural network structures that use graph convolution to extract spatial feature dependence in the spatial domain. They are mainly divided into three ways of acquiring time-domain features: traditional convolutional networks, gated recurrent networks, and graph convolutional networks. Figure 3 shows a comparison of the network structures of a graph convolutional neural network and a Spatio-Temporal graph neural network (taking the 1D-CNN+GCN structure as an example). The two network structures are built on the graph convolution computing unit, where ϕ is the element distance between matrices Z and Z^T, and MLP full connection denotes a fully connected multilayer perceptron.

Figure 3: The spatial-temporal graph of a skeleton sequence.

III. Graph Neural Network Based on Attention Implementation

The attention mechanism has shown strong capabilities in processing sequential tasks [25], such as machine reading and learning sentence representations. Its powerful advantage lies in allowing variable input sizes and then using the attention mechanism to focus only on the most important parts before making decisions. Some studies have found that the attention mechanism can improve convolutional methods, allowing the construction of powerful models that achieve better performance on certain tasks. Therefore, reference [21] introduced the attention mechanism into the neighbor-aggregation process of graph neural networks and proposed graph attention networks (GAT).
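As promised above, here is a minimal NumPy sketch of the single-layer GCN propagation rule H^(l+1) = σ(D̂^(-1/2) Â D̂^(-1/2) H^(l) W^(l)); the toy graph, feature dimensions, and random weights are illustrative only:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-connections
    d = A_hat.sum(axis=1)                          # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

# Toy 4-node undirected graph and random features/weights.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 3))   # input node features
W0 = rng.normal(size=(3, 2))   # layer weights

H1 = gcn_layer(A, H0, W0)
print(H1.shape)                # (4, 2): updated node representations
```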
In the traditional GNN framework, attention layers are added to learn a different weight for each neighbor node, so that neighbors are treated differently: in the process of aggregating neighboring nodes, the model focuses on the nodes with larger effects while down-weighting nodes with smaller effects. The core idea of GAT is to use a neural network to learn the weight of each neighboring node and then use the differently weighted neighbors to update the representation of the central node. Figure 4 is a schematic diagram of the GAT layer structure: Figure 4(a) shows the calculation of weights between node i and node j, and Figure 4(b) shows a node using a multi-head attention mechanism over its neighborhood to update its own representation. The attention factor of node j relative to node i is computed (reconstructed here in the standard GAT form) as:

a_ij = softmax_j(LeakyReLU(α^T [W h_i || W h_j]))

where a_ij represents the attention factor of node j relative to node i, W is an affine transformation for dimension reduction, α^T represents the weight vector parameter, || represents the vector concatenation operation, and LeakyReLU(x) = x for x > 0 and λx for x ≤ 0 is the leaky rectified linear unit. Then, with the nonlinear activation function δ, the learned attention factors a_ij can be used to update the central node i:

h'_i = δ(Σ_{j∈N_i} a_ij W h_j)

To make the model more stable, the authors also applied a multi-head attention mechanism. Instead of using only one function to calculate attention factors, K different functions jointly calculate them. Each function yields a set of attention parameters and provides a set of parameters for the weighted sum of the next layer. In each convolutional layer, the K different attention mechanisms do not affect each other and work independently. Finally, the results of the attention heads are concatenated or averaged to obtain the final result. If K different attention mechanisms are computed simultaneously, we can obtain:

h'_i = ||_{k=1}^{K} δ(Σ_{j∈N_i} a^k_ij W^k h_j)

where || represents the concatenation operation and a^k_ij is the attention factor obtained by the k-th attention parameter function. For the last convolutional layer, if the multi-head attention mechanism is used, averaging should be used instead:

h'_i = δ((1/K) Σ_{k=1}^{K} Σ_{j∈N_i} a^k_ij W^k h_j)

Reference [26] also introduced the multi-head attention mechanism into the aggregation process of neighboring nodes, proposing gated attention networks (GAAN). However, unlike GAT, which uses averaging or concatenation to combine heads, GAAN argues that although the multi-head attention mechanism can gather information from multiple neighboring nodes of the central node, not every attention head contributes equally; a given head may capture useless information. Therefore, GAAN assigns a different weight to each attention head when aggregating neighboring node information to update the central node. GAAN first computes an additional soft gate between 0 (low importance) and 1 (high importance) to assign a weight to each head. Combined with the multi-head attention aggregator, a gated attention aggregator is obtained (reconstructed here from the definitions that follow):

y_i = FC_θo(x_i ⊕ ||_{k=1}^{K} (g_i^(k) Σ_{j∈N_i} w_ij^(k) FC_θv^(k)(z_j)))

where FC_θo(·) means that no activation function is applied after the linear transformation, ⊕ is the concatenation operation, K is the number of attention heads, w_ij^(k) is the k-th attention weight between nodes i and j, and θ_v^(k) are the parameters of the k-th head used to compute the query vector.
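Before continuing with GAAN's gating mechanism, the following NumPy sketch illustrates how a single GAT head computes the attention factors a_ij for one node over its neighbors (a toy illustration of the equations above, with random parameters; not the authors' implementation):

```python
import numpy as np

def leaky_relu(x, lam=0.2):
    return np.where(x > 0, x, lam * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
F, F_out = 4, 3
W = rng.normal(size=(F_out, F))      # shared affine transformation
a = rng.normal(size=(2 * F_out,))    # attention weight vector (alpha)

h_i = rng.normal(size=(F,))          # central node feature
neighbors = [rng.normal(size=(F,)) for _ in range(3)]

# e_ij = LeakyReLU(alpha^T [W h_i || W h_j]) for each neighbor j
scores = np.array([
    leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j])) for h_j in neighbors
])
alpha = softmax(scores)              # attention factors a_ij over N(i)

# h'_i = delta(sum_j a_ij W h_j), here with a ReLU nonlinearity
h_i_new = np.maximum(sum(al * (W @ h_j)
                         for al, h_j in zip(alpha, neighbors)), 0)
print(alpha, h_i_new.shape)
```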
Here, g_i^(k) is the gate value of the k-th head of node i, computed by applying a convolutional network Ψ_g to the center node feature x_i and the neighbor node features z_{N_i}. The convolutional network Ψ_g can be designed according to actual needs, and reference [27] constructs it using average pooling and max pooling (reconstructed here from the definitions that follow):

g_i = FC_θg(x_i ⊕ max_{j∈N_i}(FC_θm(z_j)) ⊕ mean_{j∈N_i}(z_j))

where θ_m represents mapping the feature vectors of neighboring nodes to dimension d_m, and θ_g represents mapping the concatenated feature vectors to the K gates. Finally, the authors of reference [27] constructed a gated recurrent unit using GAAN and successfully applied it to traffic speed prediction problems. In reference [26], it was noted that although GAT has achieved good results in multiple tasks, there is still a lack of clear understanding of its discriminative ability. The authors therefore conducted a theoretical analysis of the representational properties of graph neural networks that use attention mechanisms as aggregators, and showed that such graph neural networks cannot always distinguish between different structures. The results show that existing attention-based aggregators cannot preserve the cardinality of multisets of node feature vectors during aggregation, which limits their discriminative ability. The proposed modification preserves cardinality and can be applied to any type of attention mechanism. Zhang et al. [28] developed a self-attention graph neural network (SAGNN) based on the attention mechanism for hypergraphs. SAGNN can handle different types of hypergraphs and is suitable for various learning tasks as well as homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. This method can improve on or match the state-of-the-art performance in hypergraph learning, addressing the shortcomings of previous methods, such as the inability to predict hyperedges of non-uniform heterogeneous hypergraphs. U2GNN [29] proposed a novel graph embedding model by introducing a universal self-attention network, which can learn low-dimensional embedding vectors for graph classification. In implementation, U2GNN first applies attention layers, then performs a recursive transformation to iteratively memorize the weights of the vector representations of each node and its neighbors in each iteration; the final output sum is the embedded representation of the entire graph. This method addresses weaknesses of existing models in generating reasonable node embedding vectors. The above models apply the attention mechanism to spatial-domain graph neural networks. In order to better utilize the local and global structural information of the graph, reference [30] first attempted to transfer the attention mechanism from the spatial domain to the spectral domain, proposing the spectral graph attention network (SpGAT). In SpGAT, graph wavelets are selected as the spectral bases and decomposed into low-frequency and high-frequency components based on indicators. Then, two different convolution kernels are constructed from the low-frequency and high-frequency components, and attention mechanisms are applied to these two kernels to capture their importance. By introducing different trainable attention weights for the low-frequency and high-frequency components, local and global information in the graph can be effectively captured, and compared to spatial-domain attention, SpGAT greatly reduces the number of learned parameters, thereby improving the performance of the GNN.
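To round off the gated-attention discussion, here is a toy NumPy sketch of the GAAN-style gate computation g_i described earlier in this section (dimensions and parameters are made up; the sigmoid squashing into (0, 1) is an assumption consistent with the soft gate described in the text, not a claim about the reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_m, K = 4, 3, 2               # feature dim, pooled dim, number of heads

x_i = rng.normal(size=(d_in,))       # center node feature
z_N = rng.normal(size=(5, d_in))     # features of 5 neighbors

theta_m = rng.normal(size=(d_m, d_in))             # neighbor projection
theta_g = rng.normal(size=(K, d_in + d_m + d_in))  # gate projection

max_pool = (z_N @ theta_m.T).max(axis=0)   # element-wise max of projections
avg_pool = z_N.mean(axis=0)                # element-wise average of neighbors

gates_raw = theta_g @ np.concatenate([x_i, max_pool, avg_pool])
g_i = 1.0 / (1.0 + np.exp(-gates_raw))     # squash into (0, 1), one per head
print(g_i)
```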
In order to better understand the application of attention mechanisms in graph neural networks and identify the factors that affect them, a series of experiments and models was designed in reference [31]. First, the graph isomorphism network (GIN) model [32] was used for experiments on the datasets, but its performance was found to be very poor, and it was difficult to learn attention over subgraphs. The authors therefore combined the GIN and ChebyNet networks to propose a ChebyGIN network model, added attention factors to form an attention model, and adopted a weakly supervised training method to improve performance. Experiments on color-counting and triangle-counting tasks led to four conclusions:

1) The main contribution of the attention mechanism in graph neural networks to node attention is that it can be extended to more complex or noisy graphs, which can transform a model that cannot generalize into a very robust model;

2) The factors that affect the performance of the attention mechanism in a GNN include the initialization of the attention model, the choice of GNN model, the attention mechanism, and the hyperparameters of the GNN model;

3) Weakly supervised training methods can improve the performance of attention mechanisms in GNN models;

4) The attention mechanism can make GNNs more robust to larger and noisier graphs.

We summarize the attention-based graph convolutional neural network models mentioned above in Table 1.

IV. Graph Neural Network Based on Autoencoder Implementation

In unsupervised learning tasks, the autoencoder (AE) and its variants play a very important role. The AE realizes implicit representation learning with the help of a neural network model and has strong data feature extraction ability. The AE achieves effective representation learning of the input data through an encoder and a decoder, and the dimension of the learned implicit representation can be far smaller than the dimension of the input data, achieving dimensionality reduction. The AE is currently the preferred deep learning technique for implicit representation learning. When raw data with internal structure (x_1, x_2, ..., x_n) are fed into an AE for reconstruction learning, the task of feature extraction can be completed. The application scenarios of autoencoders are very wide; they are often used in tasks such as data denoising, image reconstruction, and anomaly detection. In addition, when an AE is used to generate data similar to the training data, it is called a generative model. Due to these advantages of AEs, some scholars have applied the AE and its variant models to graph neural networks. Reference [33] first proposed the variational graph autoencoder (VGAE) model based on the variational autoencoder (VAE), applying the VAE to the processing of graph-structured data. VGAE uses latent variables to learn interpretable latent representations of undirected graphs, and implements this model using a graph convolutional network encoder and a simple inner product decoder. In this model, the encoder is implemented using a 2-layer GCN: the mean matrix of the nodes is µ = GCN_µ(I, A), and the (log) variance of the nodes is log σ = GCN_σ(I, A). The 2-layer GCN takes the form (reconstructed here in the standard VGAE notation):

GCN(I, A) = Ã ReLU(Ã I W_0) W_1, with Ã = D^(-1/2) A D^(-1/2)

where Ã is the symmetrically normalized adjacency matrix. The generative model used to reconstruct the graph is calculated from the inner product of the latent variables (reconstructed):

p(A_ij = 1 | z_i, z_j) = σ(z_i^T z_j)

where A_ij are the elements of matrix A.
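A minimal NumPy sketch of the VGAE-style reparameterized sampling and inner-product decoder just described (toy latent dimensions, random values; the VGAE training loss follows below):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, latent_dim = 5, 2

# Pretend these came out of the two GCN encoder heads.
mu = rng.normal(size=(n_nodes, latent_dim))
log_sigma = rng.normal(scale=0.1, size=(n_nodes, latent_dim))

# Reparameterization trick: z = mu + sigma * eps
eps = rng.normal(size=mu.shape)
Z = mu + np.exp(log_sigma) * eps

# Inner-product decoder: p(A_ij = 1 | z_i, z_j) = sigmoid(z_i^T z_j)
logits = Z @ Z.T
A_prob = 1.0 / (1.0 + np.exp(-logits))
print(np.round(A_prob, 2))
```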
Finally, the loss function is the variational lower bound, L = E_{q(Z|X,A)}[log p(A|Z)] − KL[q(Z|X,A) ‖ p(Z)], where the first term on the right-hand side is the cross-entropy between the reconstructed and the input graph, and the second term is the KL divergence between the approximate posterior produced by the encoder and the prior over the latent variables.

Most existing network embedding methods represent each node by a single point in a low-dimensional vector space, so that the formation of the network structure is treated as deterministic. In reality, however, the formation and evolution of networks is full of uncertainty, which gives these methods some drawbacks. In view of this, reference [34] proposed deep variational network embedding in Wasserstein space (DVNE). Because Gaussian distributions inherently represent uncertainty, DVNE uses a deep variational model to learn a Gaussian embedding for each node in Wasserstein space rather than a point vector, preserving the network structure while modeling the uncertainty of nodes. DVNE measures the similarity between distributions with the second Wasserstein distance (W_2), and uses a deep variational model to minimize the Wasserstein distance between the model distribution and the data distribution, thereby extracting the intrinsic relationship between the mean vectors and the variance terms. For two Gaussians, the W_2 distance is W_2(N(µ_1, Σ_1), N(µ_2, Σ_2))² = ‖µ_1 − µ_2‖₂² + ‖Σ_1^{1/2} − Σ_2^{1/2}‖_F², where N denotes a Gaussian distribution. The loss function L of DVNE consists of two parts: a ranking-based loss that preserves first-order proximity, and a reconstruction loss that preserves second-order proximity, where D = {(i, j, k) | j ∈ N(i), k ∉ N(i)} is the set of triples, E_ij is the W_2 distance between nodes i and j, C is the input feature, Q is the encoder, ⊙ is the Hadamard product, G is the decoder, and Z is the random variable. The model parameters are learned by minimizing this loss.

Reference [35] introduced the AE into vertex representation learning and proposed structural deep network embedding (SDNE). Most existing network embedding methods use shallow models, which cannot capture the highly nonlinear structure of networks and therefore perform poorly. SDNE uses second-order proximity to capture the global network structure and first-order proximity to preserve the local structure; by exploiting first- and second-order proximity jointly in a semi-supervised deep model, it can effectively capture highly nonlinear network structure while preserving both global and local structure. The loss function of the model is L = L_2 + α L_1 + ν L_r, where L_2 is the second-order proximity loss, L_1 the first-order proximity loss, and L_r a regularization term that prevents overfitting; L_2 penalizes the reconstruction of adjacency rows (with nonzero entries up-weighted) and L_1 pulls the embeddings of linked nodes together (a sketch of both losses follows below).
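The two SDNE losses can be written compactly; the snippet below is our schematic PyTorch paraphrase of [35] (the function name and the β up-weighting value are illustrative), where x holds the adjacency rows, x_hat their reconstruction, and y the embeddings.

```python
import torch

def sdne_losses(x, x_hat, y, beta=5.0):
    # Second-order proximity: reconstruct adjacency rows, up-weighting the
    # nonzero entries by beta so that existing links dominate the loss.
    b = torch.where(x > 0, torch.full_like(x, beta), torch.ones_like(x))
    l2 = (((x_hat - x) * b) ** 2).sum()
    # First-order proximity: linked nodes should have nearby embeddings.
    l1 = (x * torch.cdist(y, y) ** 2).sum()
    return l2, l1
```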
Regular equivalence refers to the fact that vertices located in different parts of a network may play similar roles or occupy similar positions, a property easily overlooked in network embedding research. Reference [36] proposes deep recursive network embedding (DRNE) to learn network embeddings that respect regular equivalence. The neighborhood of each node is transformed into an ordered sequence, each node is processed by a layer-normalized LSTM that recursively aggregates the features of its neighbors, and the loss function of DRNE is L = Σ_v ‖X_v − Agg({X_u | u ∈ N(v)})‖², where X_v and X_u are the embedding vectors of nodes v and u, and Agg is the aggregation function implemented by the LSTM. In one recursive step, the embedding of a node preserves the local structure of its neighborhood; by iteratively updating the learned representations, the node embeddings integrate structural information in a global sense, achieving regular equivalence. When serializing the neighborhood nodes, the most effective neighborhood ordering criterion, the degree, is used to rank them. Finally, a regularization term is added to the loss function of the whole model before the parameters are updated.

Reference [37] applied the AE to matrix completion in recommendation systems and proposed graph convolutional matrix completion (GC-MC). GC-MC views matrix completion as a link prediction problem on a graph and designs a graph autoencoder framework for differentiable message passing on the bipartite interaction graph; the encoder is implemented with graph convolutions and the decoder with a bilinear function. Reference [38] proposes a new adversarial framework for graph embedding: to learn robust embeddings, it introduces two adversarially regularized variants, the adversarially regularized graph autoencoder (ARGA) and the adversarially regularized variational graph autoencoder (ARVGA). Beyond these methods, autoencoder-based graph neural networks also include Graph2Gauss [39], which can efficiently learn node embeddings on large-scale graphs. Table 2 summarizes the autoencoder-based graph neural network models.

A. GNN Classifier

Let H̃¹ be the augmented node representation set obtained by concatenating H¹ with the embeddings of the synthetic nodes, and let Ṽ_L be the augmented labeled set obtained by incorporating the synthetic nodes into V_L. We then have an augmented graph G̃ = {Ã, H̃} with labeled node set Ṽ_L. The class sizes in G̃ are now balanced, so an unbiased GNN classifier can be trained on it. Specifically, we adopt another GraphSage block, followed by a linear layer, for node classification on G̃, where H² is the node representation matrix of the second GraphSage block, W refers to the weight parameters, and P_v is the probability distribution over class labels for node v. The classifier module is optimized with a cross-entropy loss; a minimal sketch of this head is given after this paragraph. We compare the performance of several GNN-based models; Table 3 reports the corresponding F_1 and MCC values on the two real-world datasets.
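As a minimal sketch of the classifier head just described (our PyTorch illustration; the tensor names are ours), the labeled rows of H² pass through a linear map and a cross-entropy loss:

```python
import torch
import torch.nn.functional as F

def classifier_loss(h2, w, labels, labeled_idx):
    logits = h2[labeled_idx] @ w                 # linear layer on H^2 rows
    return F.cross_entropy(logits, labels[labeled_idx])

h2 = torch.randn(100, 32)                        # augmented node representations
w = torch.randn(32, 3, requires_grad=True)       # 3 classes
loss = classifier_loss(h2, w, torch.randint(0, 3, (100,)), torch.arange(40))
```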
B. Link Prediction

Human trajectory prediction is one application of link prediction. Two metrics are used to evaluate model performance: the Average Displacement Error (ADE) [42], defined in equation 19, and the Final Displacement Error (FDE) [43], defined in equation 20. Intuitively, ADE measures the average prediction error along the trajectory, while FDE considers only the prediction error at the end point. Since Social-STGCNN outputs a bivariate Gaussian distribution as its prediction, comparing a distribution with a single target value requires sampling; we follow the evaluation protocol of Social-LSTM [43], in which 20 samples are generated from the predicted distribution and the ADE and FDE are computed on the sample closest to the ground truth. This evaluation protocol was adopted by several later works, such as Social-GAN [44] and many more. Table 4 compares the performance of Social-STGCNN with other models on the ADE/FDE metrics [48]; a sketch of the two metrics follows below.
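The two metrics reduce to a few lines of NumPy; this sketch is our reading of Eqs. (19)-(20) and the best-of-N protocol described above, not the authors' evaluation code.

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T, 2) trajectories; returns (ADE, FDE)."""
    d = np.linalg.norm(pred - gt, axis=-1)   # per-step displacement error
    return d.mean(), d[-1]

def best_of_n(samples, gt):
    """samples: (N, T, 2); keep the sample with the smallest ADE."""
    return min((ade_fde(s, gt) for s in samples), key=lambda m: m[0])

samples = np.random.randn(20, 12, 2)         # 20 draws from the predicted Gaussian
print(best_of_n(samples, np.zeros((12, 2))))
```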
Dynamic renormalization group theory for open Floquet systems

We develop a comprehensive Renormalization Group (RG) approach to criticality in open Floquet systems, where dissipation enables the system to reach a well-defined Floquet steady state of finite entropy, and all observables are synchronized with the drive. We provide a detailed description of how to combine Keldysh and Floquet formalisms to account for the critical fluctuations in the weakly and rapidly driven regime. A key insight is that a reduction to the time-averaged, static sector is not possible close to the critical point. This guides the design of a perturbative dynamic RG approach, which treats the time-dependent, dynamic sector associated to higher harmonics of the drive on an equal footing with the time-averaged sector. Within this framework, we develop a weak drive expansion scheme, which enables a systematic truncation of the RG flow equations in powers of the inverse drive frequency $\Omega^{-1}$. This allows us to show how a rapid periodic drive inhibits scale invariance and critical fluctuations of second order phase transitions in rapidly driven open systems: although criticality emerges in the limit $\Omega^{-1}=0$, any finite drive frequency produces a scale that remains finite all through the phase transition. This is a universal mechanism that relies on the competition of the critical fluctuations within the static and dynamic sectors of the problem.

I. INTRODUCTION

Critical dynamics is well known to emerge at second order phase transitions of equilibrium many-body systems [1-7]. Out of equilibrium, the situation is even richer, giving rise to novel universal effects without equilibrium counterparts. Forms of genuine non-equilibrium criticality are realized in diverse physical systems, ranging from turbulence in quantum [8-11] and classical [12-15] systems, through interface dynamics [16] such as the spreading of fire fronts [17], to the reaction-diffusion dynamics governing chemical reactions [18]. Non-equilibrium driving conditions can even lead to full-blown self-organized criticality [19], where the system exhibits critical dynamics without any need for fine-tuning. In the present work, we focus on the nature of criticality in open Floquet systems. To this end, we investigate the interplay of critical dynamics with a drive implemented through periodically time-dependent couplings, within a Keldysh-Floquet Renormalization Group (RG) approach. As our main physics result, already exposed in our previous letter [53], we uncover a new universal mechanism specific to Floquet systems: the drive generates a scale that remains finite even when the system approaches a symmetry-breaking phase transition. This means that criticality is effectively suppressed by a rapid (although not infinitely rapid) periodic drive. Here, we present the broader field-theoretical framework behind these results, which may also be applied to other many-body Floquet problems, and include several new results and refinements (see Sect. I A). To embed our work in a broader perspective on driven systems, consider Fig. 1, where the vertical line marks the driving frequency. Critical physics can only emerge in the two extreme limits of a vanishing (Ω = 0) or infinite (Ω^{-1} = 0) drive frequency.

[FIG. 1. Embedding of the present work in the spectrum of driven systems. Shown is the driving frequency range, with undriven (equilibrium) and infinitely rapidly driven (IRD, non-equilibrium) limits. Both extremes exhibit criticality. In their vicinity, criticality is masked: a slow drive cuts off asymptotic scaling via the Kibble-Zurek mechanism, a fast drive via the mechanism elaborated on in this work.]
The system is at thermal equilibrium when Ω = 0 (left-hand side of Fig. 1), and exhibits equilibrium criticality at a temperature T = T_c. When Ω is small but finite, the drive is adiabatic far away from the critical point, but introduces a new scale that leads to a breakdown of criticality once the critical point is approached: this is the essence of the Kibble-Zurek mechanism [113, 114]. In the opposite limit Ω^{-1} = 0, Infinitely Rapidly Driven (IRD) criticality takes place [115-117]. This limit is distinguished from the undriven one physically by the absence of detailed balance, and operationally by a different universal speed at which the system loses coherence at the critical point. Here we explore the vicinity of this opposite extreme, close to the IRD critical point. We find that, phenomenologically similar to the slowly driven regime, a finite scale emerges and suppresses scaling behavior. Despite this phenomenological similarity at first sight, the mechanism is vastly different, including at the level of observable consequences. In particular, while the Kibble-Zurek mechanism only probes the set of equilibrium critical exponents, the opposite limit hosts new and independent universal exponents, as we will demonstrate.

A. Key results

In this work, we develop a comprehensive RG approach to capture near-critical, open Floquet systems. In particular, the peculiarities of such systems lead us to the construction of a dynamic version of the RG applied to Floquet systems [53]. By this, we refer to the following: common static renormalization methods account for the rapid periodic drive through a renormalization of the static, time-independent description, such as the Floquet Hamiltonian (see e.g. [50, 118-121]). These powerful methods are able to capture fundamental phenomena, like the stabilization of the Kapitza pendulum [122, 123]. A main technical insight of this work is, however, that while this scheme is appropriate far away from the critical point, it necessarily breaks down once an open Floquet system is pushed close to criticality: a reduction to an effective static, time-averaged description is not possible, and we crucially need to include the renormalization of the dynamic components and their interplay with the static parts, accounting for the coupling between the different Floquet-Brillouin zones (FBZ). Such a dynamic RG scheme is developed here, affording a controlled expansion for drive frequencies large compared with the other scales in the problem. In this framework, and combining it with the ϵ-expansion in RG, we confirm and refine our previous results [53], most notably the fact that a finite drive frequency cuts off the divergence of the correlation length as the system crosses the phase transition. We implement this scheme for a d-dimensional gas of interacting bosons in contact with a bath that induces loss and gain of particles. The system is equipped with the usual bosonic U(1) symmetry, which breaks spontaneously as the system is tuned through its critical point. We include the drive by allowing the couplings of this theory to depend periodically on time, and let the system undergo the phase transition with this drive switched on.
1. Basic physical picture

A hint of the main physics result is found by inspecting the form of the mass or gap term, which measures the distance from the critical point, and which is now time dependent. A more thorough consideration shows that the relevant mass term is the imaginary part of this quantity, describing damping (see Sect. III B and Fig. 4a). In the notation below, it reads

µ^I(t) = Σ_n e^{-inΩt} µ^I_n .  (1)

The time dependence of this coupling signals synchronization in the Floquet steady state. There are two limiting cases where the driving scale drops out: the undriven equilibrium limit Ω = 0, but also the IRD non-equilibrium regime Ω^{-1} = 0, where the higher Fourier modes average out and the rotating wave approximation becomes exact. Formally, both cases are then characterized by a continuous time-translation invariance. Imposing this symmetry in the undriven equilibrium and in the driven IRD limit rules out mass terms other than µ^I_0. In contrast, giving up this symmetry allows for more mass terms, i.e. RG-relevant couplings with the potential to modify and hamper the critical physics. From an RG viewpoint, this leads to Kibble-Zurek physics at slow drive, and to our result at rapid drive. We come back to the fundamental difference between these cases at the end of this section. Our result at rapid drive lends itself to the following simple picture: even when the system is critical, i.e. µ^I_0 → 0, the synchronized mass still oscillates, as µ^I(t) only vanishes on period average. The mass is then rapidly dragged across the phase transition periodically. We find that this induces a finite scale even when µ^I_0 = 0. We can think of this as a blurring of the phase boundary as a result of the non-vanishing µ^I_{n≠0} (see Fig. 7a, light red area). The importance of the mass oscillation also emphasizes the dynamic nature of the effect, which is not captured by a renormalization of the static sector alone.

2. Weak drive expansion scheme

We assume that the drive frequency Ω is larger than all the other scales involved in the dynamics (but not infinite). We capture the limit of a rapid drive through an asymptotic expansion in powers of Ω^{-1} (see Sect. III C). This expansion reproduces the rotating wave approximation as the zeroth-order contribution (i.e. at Ω^{-1} = 0), and provides access to the effects of a large but finite drive frequency. The non-equilibrium critical physics in the IRD limit Ω^{-1} = 0 [115-117] is then recovered, and our result emerges from the inclusion of the Ω^{-1} corrections. In the present work, we consolidate the previous picture [53] by pushing this expansion up to O(Ω^{-2}). A direct expansion of the loop integrals that emerge in perturbation theory involves inverting Green functions that are not time-translation invariant. Even though the Floquet formalism turns this task into inverting non-diagonal infinite-dimensional matrices, it remains a formidable task, which we circumvent in Sect. III B. Instead, we find that a systematic weak drive asymptotic expansion in powers of Ω^{-1} can be obtained via a preliminary expansion in powers of the drive amplitude. Indeed, we show that the loop integrals actually depend on the drive amplitude E through the ratio E/Ω. In the weak drive limit E/Ω = 0, the system is undriven and the loop integrals are easily computed. A systematic expansion in powers of Ω^{-1} then emerges by first expanding in powers of E/Ω, computing the expanded loop integrals, and finally re-expanding the obtained result in powers of Ω^{-1}.
This procedure provides an algorithm that can be applied to reach any order in the Ω^{-1}-expansion, and this order is directly tied to the order of the underlying weak drive E/Ω-expansion [see Sect. III D and App. C, where we go up to O((E/Ω)²), and Sect. IV B, where we include terms up to O(Ω^{-2}) in the RG flow equations].

3. Dynamic RG

In order to access the critical physics, we generalize the usual momentum shell 1-loop RG [124-126] to periodically driven systems. We develop a dynamic version of the RG, in which all the Fourier modes of the periodic couplings are renormalized together. This means that not only the time-averaged static description is renormalized by the drive, but also the time-dependent parts (the n ≠ 0 Fourier modes). Our RG approach is constructed by introducing a running cut-off scale k and integrating out fluctuations on k-dependent momentum shells (see Figs. 2 and 5). The absence of energy conservation inherent to the periodic drive induces a direct coupling between the different FBZs. This leads to a tower of coupled momentum shells (as opposed to a single shell in the absence of drive) in momentum and frequency space, with the rungs separated by integer multiples of Ω (Fig. 2). This implies that, for any choice of rotating frame, there are rapid fluctuations that still contribute even when all but the large spatial scales are integrated out.

[FIG. 2. Similarly to traditional RG, we integrate out fluctuations on k-dependent momentum shells (represented as gray hoops, for d = 2). In the absence of drive, there is a single momentum shell and the frequency is fixed to ω ∼ k² (middle shell). The drive induces a coupling between different Floquet Brillouin zones (FBZ), i.e. frequencies separated by integer multiples of Ω, and forces us to include an a priori infinite tower of momentum shells. The choice of the zeroth FBZ (the p1-p2 plane) corresponds to the choice of a rotating frame.]

This plays an especially important role at criticality, where the large-scale (and usually slow) modes take over. Our result is rooted in the fact that the system continuously absorbs and emits energy at frequencies that are integer multiples of Ω. Modes within different FBZs easily interact with each other. The periodic drive enables modes with frequencies that are multiples of Ω to enter on an equal footing with the slow fluctuations, as far as large-scale fluctuations are concerned, and to dramatically affect the critical physics even though the drive period is very short. In Sect. IV A we compute the RG flow of the periodically time-dependent couplings. We use a real-time representation of the flow equations [Eqs. (39) and (40)], in which the entire time dependence of the couplings is renormalized. The periodicity of the drive is then included by making use of the Floquet formalism for the Green functions. This provides RG flow equations for all the Fourier modes of the periodically time-dependent couplings, expressed in terms of loop frequency integrals [Sect. IV A 2 and Eq. (41)]. Although we work at large drive frequencies, we go beyond the common approaches based on the Floquet Hamiltonian and/or variants of the Magnus expansion (see e.g. [50, 118-121]) in two distinct ways: first, we include loop integrals (as opposed to tree-level processes) to account for fluctuations, and second, we include the renormalization of the dynamic sector, i.e. the higher Fourier modes n ≠ 0, without integrating them out.
Indeed, focusing on the Floquet Hamiltonian alone (without an analysis of the kick operators [118-120] and of the fluctuations on top of this Hamiltonian) only provides mean-field stroboscopic information on the system. Without further treatment, this is a tree-level approximation that exclusively includes static renormalization effects. For example, the famous analysis of the Kapitza pendulum [122, 123] provides an effective Floquet potential with a new local minimum when the drive is strong enough: the static potential is renormalized as a result of the drive. Including fluctuations in this analysis would enable one to ask which minimum is actually stable against fluctuations. The kick operators in turn provide dynamic renormalization effects through the full periodic time dependence of the pendulum as it rests in each of these minima. Our approach includes both of these features combined: loop perturbation theory is specifically designed to include such fluctuations and, although we do not resolve the time-dependent order parameter, we do not integrate out any of the periodic degrees of freedom.

4. Absence of criticality

Our main physics result is obtained by analyzing the RG flow in the presence of the periodic couplings close to the critical point. In the absence of drive, and also in the IRD limit, the critical physics is governed by the well-known Wilson-Fisher (WF) RG fixed point. This fixed point characterizes all the critical exponents together with the divergence of the correlation length. In the presence of the drive, additional couplings (proportional to Ω^{-1}) become available and have to be included in the RG flow. Our result is a consequence of the fact that some of these couplings are relevant and thus destabilize the WF fixed point (see Fig. 6). This means that it is impossible to observe a diverging correlation length experimentally in a rapidly driven system, because this could only happen if the RG flow reached its fixed point. In practice, we find that the correlation length grows (as it would without the drive) as the system approaches the phase transition, but then saturates at a value that depends on the ratio of the drive amplitude and frequency (see Fig. 7a). As Ω^{-1} → 0, this emergent cut-off scale is removed, and we recover IRD criticality. On a more technical note, this shows that the rotating wave approximation, which sets Ω^{-1} = 0 and thus discards the dynamic effects of a rapid drive from the outset, necessarily breaks down in the vicinity of a critical point. Although we consider purely relaxational dynamics with a monochromatic drive, Eq. (51), for simplicity, we show that this is a universal mechanism: we expect that (except in fine-tuned cases) criticality is regularized as soon as a periodic drive is switched on. As discussed above, the drive allows for new relevant couplings, and a new critical exponent is associated with each of these couplings [see Eq. (61)]. These exponents are independent of the IRD critical exponents and are an original property of the periodically driven system. In the present work, we find that increasing the order of the truncation of the asymptotic Ω^{-1}-expansion produces additional couplings, and therefore gives access to additional exponents of this kind.

5. Interpretation and relation to the slowly driven limit

Our result can be interpreted within the framework of fluctuation-induced first order transitions [127-130], see Sect. V.
This means that the interplay of critical and dynamic fluctuations changes the phase transition from a second-order to a first-order one, without the explicit symmetry breaking usually present in first order phase transitions with a critical endpoint of higher symmetry. Technically, what our scenario shares with other instances of fluctuation-induced first order transitions is the presence of multiple near-gapless modes that interact strongly with each other. While these may be Goldstone or gauge modes in the traditional instances of this scenario, here they are realized by the poles of the different FBZs, all reaching criticality jointly, as dictated by the Floquet theorem (see Fig. 4a). Indeed, we also find the RG phenomenology common to such scenarios: the drive enables the RG flow to take the system into an unstable range of parameters (negative interaction parameter). This signals that higher-order couplings (not included here) are responsible for the system's stability, and opens the door to φ⁶ phenomenology, where a first order transition is expected. This discussion shows that the mechanism established here for a rapid drive is very different from the Kibble-Zurek scenario for a slow drive (cf. Fig. 1), even operationally: in the Kibble-Zurek scenario, the drive provides information on the underlying equilibrium critical point via the set of equilibrium critical exponents, whereas here we obtain new, independent exponents (see Sects. I A 4 and IV C). This is rationalized by the fact that in the former case we deal with an infrared modification of the critical physics (a slow driving scale is introduced, so slow that the periodic functions in Eq. (1) can be expanded in powers of Ω and the periodicity is never probed on the accessible time scales), while in the latter case the modification is in the ultraviolet (a fast driving scale is introduced, and the periodicity is crucial) [see also the additional mass scales in Eq. (1)]. It is a very basic insight of RG theory [131] that it is such ultraviolet scales that are able to modify and add critical exponents to the observable phenomenology.

6. Plan of the paper

Our paper is organized as follows. Our model, physical setup and formalism are defined in Sect. II. Next, we briefly review the Floquet formalism and how it relates to our formalism in Sect. III; in particular, our weak drive asymptotic expansion is described in Sect. III C. Our main result is derived in Sect. IV. We first discuss the dynamic RG, before deriving perturbative RG flow equations in Sect. IV A; the most general form of the RG flow equations is given in Eq. (41). These equations are then simplified and asymptotically expanded in powers of Ω^{-1} in the following sub-sections, ultimately leading to Eqs. (49) and (52), which describe the flow of the Fourier modes of the couplings to O(Ω^{-2}). The critical physics is analyzed in Sect. IV C, and we finally derive and analyze our main result in Sect. IV C 2. Sect. V is devoted to the interpretation of our result in terms of a fluctuation-induced first order transition.

II. SET-UP

In this section, we define our physical model and setup, and introduce the Keldysh field theory that constitutes our theoretical framework. For definiteness, and in order to make contact with the current discussion of Floquet systems, we first consider a fully quantum mechanical 'parent model'.
We note, however, that for the discussion of criticality in driven open systems, a semi-classical description is appropriate, because decoherence takes place close to the critical point [115-117]. Our main result is eventually derived from the dynamics of Eq. (6), which captures the mesoscopic physics close to the critical point. We consider driven open quantum many-body Floquet systems described by a Lindblad quantum master equation of the standard form,

∂_t ρ̂ = −i[Ĥ(t), ρ̂] + Σ_i κ_i(t) ( L̂_i ρ̂ L̂_i† − ½ {L̂_i† L̂_i, ρ̂} ) ,  (2)

with ρ̂ the many-body density matrix. Microscopically, our system is characterized by a Hamiltonian Ĥ(t) (ψ̂ and ψ̂† are bosonic field operators), coupled to an external bath through jump operators L̂_i [Eq. (3)] that model single-body pumping (L̂ ∝ ψ̂†), single-body loss (L̂ ∝ ψ̂) and two-body losses (L̂ ∝ ψ̂²), respectively, with κ_i(t) the corresponding drive/dissipation rates. See e.g. [116, 132] for additional information on this model without explicit time dependence. The periodic time dependence (with period T = 2π/Ω) can occur through an explicit time dependence of the Hamiltonian, H(t + 2π/Ω) = H(t), and/or through periodic excitations of the external bath, κ_i(t + 2π/Ω) = κ_i(t). The system is invariant under the internal U(1) symmetry, i.e. under the phase rotation ψ̂ → e^{iα} ψ̂. When the U(1) symmetry is spontaneously broken, ⟨ψ̂⟩ acts as an order parameter. A Floquet steady state is realized at asymptotically long times, when the information on the initial conditions is lost and the dynamics is synchronized with the drive. The steady state is determined by the time dependence of the couplings alone. In particular, this implies that the system is not invariant under continuous time translations. Observables, which depend on relative times as in any steady state, then depend periodically on the absolute (or mean) time, with the same period as the external drive (see Sect. III A). The drive frequency Ω appears as a parameter that can be tuned (see Fig. 1). In the trivial case Ω = 0, the system is undriven and the dissipation ensures that it is at thermal equilibrium. In the opposite IRD limit Ω^{-1} = 0 (rotating wave approximation), the periodicity of the dynamics also disappears and the system becomes generically invariant under time translations; it violates, however, the conditions of detailed balance characteristic of thermal equilibrium [133]. In this work, we focus on the dynamics at large Ω and elucidate the effect of a rapid drive on the stationary IRD critical physics (see Fig. 7, where x̄ = 0 corresponds to the IRD system). We use the Schwinger-Keldysh formalism, which describes the Floquet steady state in terms of an action functional instead of the equivalent Lindblad description [116, 132, 134-138]. With Eq. (3), the dissipative dynamics comes in the form of one-body pumping and losses as well as two-body losses. The Keldysh action can then be written in terms of the field doublet Φ = (φ, φ̃), which contains the 'classical' field φ as well as the 'response' or 'quantum' field φ̃ that is inherent to the dynamical functional formalism. The field-operator expectation value is the order parameter of the field theory, ⟨ψ̂⟩ = ⟨φ⟩. The retarded, advanced and Keldysh inverse propagators are given in Eq. (5). (Here and in the following, we use the short-hand notation ∫_p for momentum integrals; p is the d-dimensional momentum and p = |p| its norm.) The combination of coherent and dissipative dynamics is encoded in the couplings K, µ, g, which are complex valued.
Their real parts account for the coherent dynamics inherited from the underlying Hamiltonian, while the imaginary parts of µ and g emerge from the κ_i(t) and have an interpretation as incoherent one- and two-body pumping and losses of particles resulting from the coupling to the bath [116, 132, 139]. In the absence of an explicit time dependence, the system is stationary and its U(1) symmetry can be spontaneously broken. It then undergoes a second-order phase transition, close to which large-scale fluctuations dominate the physics. This regime allows for an effective mesoscopic description of the system, in which the fluctuations on short temporal and spatial scales are integrated out. There, φ is interpreted as a fluctuating order parameter field, and the order parameter is obtained as its expectation value ⟨φ⟩. Physically, one finds that decoherence takes place close to the critical point [115-117]. The corresponding mesoscopic model then describes the semi-classical, non-conservative dynamics of the order parameter, which is a scalar complex field. The small-scale fluctuations, together with most of the microscopic details, are encapsulated in the effective couplings K, µ, g, which are complex valued and retain their interpretation: real and imaginary parts account for the coherent and dissipative dynamics, respectively. This mesoscopic description inherits the periodic nature of the microscopic drive in its most general form. Indeed, the non-linearity of the model makes the coarse graining of the small scales a complex non-linear procedure that affects all the effective couplings without distinction. The effective couplings are thus all time dependent in the Floquet steady state,

µ(t) = Σ_n e^{-inΩt} µ_n ,  g(t) = Σ_n e^{-inΩt} g_n .

In addition to the bosonic model of Eq. (2), this mesoscopic model can also be applied, for example, to superfluids in solid-state systems: there the order parameter has the same U(1) symmetry and may be non-conserved as a result of the coupling to a bath of phonons. A way to recover semi-classical thermal-equilibrium dynamics from Eq. (6) is to turn off the time-dependent drive and choose purely imaginary couplings; in this case we obtain the relaxational dynamics of a two-component order parameter (model A of [6], see App. E). This vanishing of the real parts of the couplings is a consequence of the full decoherence that emerges on large scales, where the dynamics is purely dissipative. Even when all the couplings are real in the microscopic theory describing Hamiltonian reversible evolution, imaginary parts are generated by the coarse graining (and ultimately take over near the critical point). Focusing on the mesoscopic description, we can use these imaginary parts to determine the phase structure of the system's stationary state close to the critical point. Im(µ) and Im(g) represent the one- and two-particle loss rates, respectively. When Im(µ) is large and positive, the system cannot sustain a condensate and is in the symmetric (or disordered) phase. As Im(µ) is lowered and becomes negative, it turns into a single-particle injection rate, and the driven-dissipative steady state is only stabilized by the two-particle losses Im(g). The U(1) symmetry spontaneously breaks at a zero crossing of the (properly renormalized) Im(µ), and a condensate is established.
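As a minimal mean-field illustration of this balance (our back-of-the-envelope estimate, not an equation from the paper), a spatially homogeneous steady state of Eq. (6) requires the single-particle gain to be compensated by the two-body loss,

Im(µ) + Im(g) |φ₀|² = 0  ⟹  |φ₀|² = −Im(µ)/Im(g) ,  valid for Im(µ) < 0 ,

so the condensate density grows with the gain rate below the transition and vanishes at the zero crossing of Im(µ).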
More generally, it was shown in [115-117] that the critical physics of the IRD steady state (at Ω^{-1} = 0) is governed by an RG fixed point with purely imaginary couplings that is identical to the equilibrium WF fixed point. Nevertheless, the approach to the fixed point, which hosts the set of critical exponents, differs depending on whether the system is in global thermal equilibrium or not. In both cases, however, decoherence effectively takes place at the critical point. We emphasize that the Floquet steady state that we describe is far from equilibrium. In particular, this means that many high-frequency modes are macroscopically occupied in the symmetry-broken phase. Indeed, the kinetic part of the action can be written as the quadratic form Σ_{n,m} ∫_ω Φ†_n(ω) G^{-1}_{nm}(ω) Φ_m(ω), where Φ_n(ω) = (φ_n(ω), φ̃_n(ω)) = Φ(ω + nΩ) is only defined for |ω| < Ω/2, and G^{-1}_{nm} is obtained by combining Eqs. (11) and (14). In the presence of the periodic drive, the U(1) symmetry breaks spontaneously (by continuity to the IRD limit, see Sect. V) and generates a Floquet condensate, in which the order parameter is an oscillating function of time [65, 140-149]. This means that all the fields Φ_n are macroscopically occupied. They characterize the different Fourier modes of the order parameter in the steady state, ⟨φ(t)⟩ = Σ_n e^{-inΩt} ⟨φ_n⟩. We see that the order parameter synchronizes with the drive, i.e. it has components oscillating at all of its harmonics. This in turn enables the strong effect of the drive that we find at large values of Ω: energy can be exchanged between any pair of Floquet bands, since they are all macroscopically occupied (see Fig. 4b).

III. FLOQUET FORMALISM AND GREEN FUNCTIONS

In this section, we show how the Green functions can be computed (Sect. III A) and approximated (Sects. III B and III D) within the Floquet formalism. Moreover, we discuss our asymptotic Ω^{-1}-expansion in Sect. III C.

A. Floquet formalism

Here, we discuss the Floquet formalism and the computation of the single-particle Green functions, which are the essential elements of perturbation theory (see Fig. 5). The two-point correlation functions are defined as G^K(t, t') = ⟨φ(t)φ*(t')⟩, G^R(t, t') = ⟨φ(t)φ̃*(t')⟩ and G^A(t, t') = ⟨φ̃(t)φ*(t')⟩, with correlation functions involving only φ̃ vanishing. (Here and in the following, we suppress the spatial dependence when possible.) In the absence of interactions, the elements of G are computed by inverting the kinetic part of the action, Eq. (6), through Eq. (11). The time dependence of µ complicates the computation of G(t, t'), since we cannot simply convert to Fourier space and take the algebraic inverse. For this reason we resort to the Floquet formalism [150], where the inversion can be performed by first recasting it as a matrix inversion of the Floquet representation of the Green functions [see Eqs. (14) and (15), and Fig. 3], before the result is converted back to its real-time representation. See [118, 119, 151, 152] and references therein for an introduction, and [66, 153-157] for the combination of the Floquet and Keldysh formalisms. In the Floquet steady state, the Green functions are invariant under a simultaneous shift of both time arguments by the drive period. They are thus periodic in the mean time t_a = (t + t')/2 and can be represented in terms of the Wigner Green functions [156],

G(t, t') = Σ_n e^{-inΩ t_a} ∫_ω e^{-iω(t-t')} G_n(ω) ,  (12)

which encode the periodicity in t_a with a discrete index and the standard relative-time dependence with a continuous frequency. (Here and in the following, we use the short-hand notation ∫_ω for frequency integrals.)
The periodic drive thus produces an infinite set of Green functions that encode the time periodicity of the single-particle sector of the system. We now introduce the Floquet Green functions. These are defined by introducing the FBZ, −Ω/2 < ω < Ω/2, and folding the frequency dependence of G_n(ω) onto it; this produces a second index that compensates for the restricted frequency dependence. Precisely, the Floquet Green functions are [156]

G_{nm}(ω) = G_{n−m}(ω + (n + m)Ω/2) ,  (14)

i.e. two-index Green functions constructed from the single-index Wigner Green functions. These definitions are directly applicable to the inverse propagators, Eq. (5). The Floquet Green functions provide a means to compute the Green functions from the inverse propagators: G is computed from the latter through Eq. (11), which contains functional inverses, and the Floquet representation, Eq. (14), has the great advantage that it turns functional inverses into matrix inverses. In other words, the functional inversion of Eq. (11) and the matrix inversion of the Floquet representation are equivalent statements [Eq. (15)], so Eq. (14) provides an alternative representation of G_n(ω) that can be inverted in a straightforward way. In summary (see Fig. 3), the Green functions are computed from their inverses in the following way: first, the Wigner inverse propagators G^{-1}_{R/A;n}(ω) are computed from their real-time representations with Eq. (13); second, G^{-1}_{R/A;n}(ω) are converted to the Floquet inverse propagators [Eq. (14)]; the Floquet Green functions are then obtained by taking the matrix inverse of G^{-1}_{R/A;nm}(ω); the last step is to convert the Floquet Green functions back to their Wigner representation. Eq. (14) provides a one-to-one mapping that directly gives G_{nm}(ω) from G_n(ω). The inverse mapping is not as straightforward. For n even, we can use G_n(ω) = G_{m+n/2, m−n/2}(ω − mΩ) [Eq. (16)], where m is the number of times Ω has to be subtracted from ω such that |ω − mΩ| ≤ Ω/2. When n is odd, ω is shifted by an additional half-integer multiple of Ω, and Eq. (16) is applied with m → m ± 1/2; the direction of the shift is determined by the sign of ω − mΩ. Finally, G(t, t') is obtained by inserting the result back into Eq. (12). In this work, we mainly focus on the Wigner Green functions and treat the Floquet representation as a tool to compute G_n(ω). The latter is indeed easier to interpret, since G_n(τ) = ∫_ω G_n(ω) e^{-iωτ} provides the discrete Fourier modes of the Green function with respect to the mean time t_a. G_n(ω) is also closer to the Green functions commonly used in perturbation theory, in that its frequency dependence is unbounded. In particular, we can use the residue theorem and benefit from our understanding of the pole structure of G_n(ω) in this representation.
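As a small numerical illustration of the folding in Eq. (14) (our toy, assuming the convention reconstructed above; not the paper's code):

```python
import numpy as np

def floquet_from_wigner(G, omega, Omega, n_max):
    """Build the two-index Floquet matrix G_{nm}(omega) from a one-index
    Wigner function G(n, omega) via G_{nm} = G_{n-m}(omega + (n+m)/2 * Omega),
    for |omega| < Omega/2."""
    idx = range(-n_max, n_max + 1)
    return np.array([[G(n - m, omega + 0.5 * (n + m) * Omega)
                      for m in idx] for n in idx])

# Undriven check: only the n = 0 Wigner component is nonzero, so the Floquet
# matrix is diagonal, with the same pole repeated in every FBZ.
G0 = lambda n, w: (1.0 / (w + 1j)) if n == 0 else 0.0
print(np.round(floquet_from_wigner(G0, 0.1, 5.0, 1), 3))
```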
B. Pole structure

In this sub-section, we compute G_{R;n}(ω) exactly for a monochromatic drive. This provides valuable insight into the analytic structure of the Green functions (see Fig. 4). Specifically, we find that G_{R;n}(ω) has infinitely many poles, with real parts separated by integer multiples of Ω and identical imaginary parts. This is a consequence of Floquet's theorem for linear periodic differential operators, and it will be relevant in the following sections, where we perform perturbation theory. The main result of this sub-section is Eq. (24), where the sum is constrained so that only terms in which m has the same parity as n contribute. In this sub-section, we choose a monochromatic drive [Eq. (17)], which enables the exact evaluation of G_{R;n}(ω). The semi-classical nature of our mesoscopic system enables a representation of Eq. (6) as a Langevin equation [158-162], Eq. (18), where ξ is a Gaussian white noise that vanishes on average and has the correlation ⟨ξ(t, x)ξ*(t', x')⟩ = 2γ δ(t − t')δ(x − x'), with γ > 0. This makes it possible to compute the single-particle retarded Green function in real time by solving Eq. (18) with g = 0. Here M(t) = Σ_n e^{-inΩt} M_n is the real-time representation of the coefficient on the right-hand side of the equation of motion, with its Fourier modes given by Eq. (20); only the n = 0 Fourier mode depends on the momentum p, because K does not depend on time. We introduce the notation E_n for the drive amplitude, to clearly separate the dynamic sectors, n ≠ 0, from the static one, n = 0. The Fourier modes of E coincide with those of M(t) for n ≠ 0; in real time, it is defined as E(t) = M(t) − M_0. This definition extends beyond the monochromatic drive that we use in this sub-section (see Sect. III D). We now exploit the relation between the retarded Green function and the response to a source f, where ⟨· · ·⟩_f is the average over a shifted noise ξ' = ξ + f, which is still Gaussian and has the same variance as ξ but does not average to zero; instead, its average is f. We see that the retarded Green function can be interpreted as the linear response to a weak noise with non-vanishing average. Then, taking the average of Eq. (19) and differentiating with respect to f provides the retarded Green function. We have sent t_0 → −∞ at the end of the calculation, because the initial conditions drop out in the Floquet steady state (the terms containing t_0 drop out because Im(M_0) > 0). Inserting the periodic time dependence of M and using the Jacobi-Anger expansion then yields an expansion in terms of J_m(x), the m-th Bessel function of the first kind. Finally, we partially convert G_R(t, t') to its Wigner form and obtain Eq. (23). This seems to indicate that the poles are spaced by half-integer multiples of Ω. This is however not the case, because only half of the terms of the sum contribute to the Wigner Green function. Indeed, expanding the Bessel functions and Fourier transforming each term produces a constraint that can only be satisfied if n + m is even. This implies that n and m must have the same parity, so that half of the terms in the sum vanish. The constraint is enforced through the parity restriction in Eq. (24), which provides an exact expression for the single-particle retarded Green function and elucidates the analytic structure of the Wigner Green functions (see Fig. 4a). This is clearly visible in Eq. (23): G_{R;n}(ω) is an infinite sum of poles that all have the same imaginary parts and are spaced by integer multiples of Ω. This means that all the poles produce divergences simultaneously as the system becomes critical. Moreover, the residues of the poles are smooth functions of E_{±1}/Ω and are suppressed by increasing powers of Ω^{-1} as |n| grows. In our system, ω can be shifted to any value by an appropriate choice of rotating frame; this amounts to shifting the real part of µ. Although this implies that there is no absolute meaning to large and small frequencies, large and small values of n can be defined relative to a choice of rotating frame. For example, setting Re(µ) = 0 fixes the position of the central pole in Eq. (23), relative to which all the other poles are placed. Here, we are able to compute G_{R;n}(ω) because we chose the simple monochromatic drive of Eq. (17). Our dynamic RG approach (see Sect. IV) will however require us to handle general drive protocols, for which the analytical calculation of this sub-section becomes much more involved and is too complex to be of any use.
For this reason we resort to an asymptotic Ω^{-1}-expansion, which is detailed in the following sub-section.

C. Asymptotic Ω^{-1}-expansion

We now explain how an asymptotic Ω^{-1}-expansion can be obtained for perturbation theory. We will ultimately be interested in the case of a fast drive: we asymptotically expand our problem in powers of Ω^{-1} and truncate the expansion at second order. This expansion provides corrections on top of the well-known rotating wave approximation, which is valid when Ω^{-1} = 0. It must be carried out at the level of the real-time Green functions, or in loop integrals such as Eq. (41), where the frequency variable of the Green functions (or of a product thereof) is integrated out. A straightforward expansion of the Wigner (or Floquet) Green functions is not applicable, because there is no way to ensure that ω ≪ Ω in our problem. Indeed, the loop contributions come with indefinite integrals over the domain ω ∈ (−∞, ∞), so that ω/Ω cannot be neglected. The asymptotic Ω^{-1}-expansion is a double expansion, in powers of the drive amplitude E and of Ω^{-1}. We see from Eq. (24) that the Green functions are composed of an infinite series of poles. The amplitude of the drive and (more importantly for us) its frequency Ω appear, however, in the residues of the different poles with different powers. This provides an ordering principle and enables us to expand the flow equations in powers of Ω^{-1}. For a weak drive, the Green functions can be expanded in powers of E before the loop integrals are performed. Indeed, the coefficients of such an expansion are all functions of ω that produce convergent loop integrals, because this expansion (unlike a direct Ω^{-1}-expansion) preserves the pole structure of G_{R;n}(ω) (Fig. 4a). It is clear from Eq. (24) that any term of order O(E^n) comes with a corresponding factor of Ω^{-n}. The expansion in powers of E is thus really a weak drive expansion in powers of E/Ω, which can be truncated when E ≪ Ω. The terms of O(E^n) can then only produce terms of order O(Ω^{-n}) or higher in the loop integrals. We can therefore construct an expansion truncated at order O(Ω^{-n}) by first carrying out the weak drive expansion to order O(E^n), then performing the loop integrals, and finally re-expanding the result to O(Ω^{-n}). This produces an expansion where all the terms up to O(Ω^{-n}) are systematically taken into account. In summary, the expansion is carried out in three steps: (i) the Green functions are computed up to a given order in powers of E [see Eqs. (27) and (28)]; (ii) the obtained expressions are inserted into the loop integrals, Eq. (41), which can now be evaluated analytically; (iii) the integrals are re-expanded in powers of Ω^{-1} up to the same order as the E-expansion. A toy symbolic illustration of these three steps is given below.
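As a toy illustration of steps (i)-(iii) (our example, not the paper's calculation; the integrand is a stand-in with a single drive-shifted satellite pole rather than the full pole ladder of Eq. (24)):

```python
import sympy as sp

w, W, M = sp.symbols('omega Omega M', positive=True)

# A static Lorentzian mode paired with a drive-shifted partner at omega = Omega,
# mimicking one rung of the Floquet pole ladder; step (i) would generate such
# terms, each weighted by powers of E/Omega.
f = 1 / ((w**2 + M**2) * ((w - W)**2 + M**2))

# Step (ii): do the frequency integral by closing the contour in the upper
# half-plane (poles at i*M and Omega + i*M).
loop = 2*sp.pi*sp.I*(sp.residue(f, w, sp.I*M) + sp.residue(f, w, W + sp.I*M))
loop = sp.simplify(loop)              # expected: 2*pi/(M*(Omega**2 + 4*M**2))

# Step (iii): re-expand the closed-form result in powers of 1/Omega.
print(sp.series(loop, W, sp.oo, 6))   # 2*pi/(M*Omega**2) - 8*pi*M/Omega**4 + ...
```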
The preliminary E-expansion provides a great simplification that renders the loop integrals analytically tractable, which in turn makes it possible to expand the end result in powers of Ω^{-1}. We perform this procedure in the following: we expand the Green functions in powers of E below, and insert them into the loop integrals in Sect. IV B. See in particular Eq. (49), where we obtain the RG flow equations and where the different contributions from each expansion are identified; additional details can be found in App. D 2. The end result is an asymptotic expansion that is useful when Ω is larger than all the Fourier coefficients of M(t) [defined below in Eq. (20)]. Indeed, the first expansion is a weak drive expansion that can be truncated when Ω is large enough, i.e. when E ≪ Ω; it contains no assumption on the magnitude of the n = 0 Fourier mode of M(t). The second expansion can be truncated when M_0 ≪ Ω, which is automatically the case in our work, because we focus on the critical physics, where M_0 is tuned to zero. The full asymptotic Ω^{-1}-expansion has three important advantages: (i) it greatly simplifies the final RG flow equations [compare Eq. (45) to Eq. (D18), where the second step, the Ω^{-1}-expansion, is omitted]; (ii) it provides a single expansion parameter, Ω^{-1}, which controls the effects of the drive; (iii) it contains the rotating wave approximation as the limiting case Ω^{-1} = 0. In this work, we are interested in the critical regime, where M_0 ≪ Ω. It is, however, possible to address the opposite regime within the weak drive expansion, i.e. E ≪ Ω ≪ M_0. An expansion in powers of M_0^{-1} can be obtained in the same way as our asymptotic Ω^{-1}-expansion: first, the weak drive expansion is performed; then the loop integrals are computed; finally, the result is expanded in powers of M_0^{-1}. In this regime, we find that, after the loop integrals are computed [see e.g. Eqs. (D19) and (D18)], the terms of order E^n come with a power of M_0^{-n-α}, where α is the power of M^{-1} that appears in the O(E^0) term. Taking the limit M_0 → ∞ thus effectively turns off the drive, since the elements of the weak drive expansion are suppressed. This means that far away from the phase transition, where there is a large mass, the drive has only a weak, entirely perturbative effect.

D. Weak drive expansion of the Green functions

In this sub-section, we show how the Green functions are expanded in powers of the drive amplitude. The resulting expressions will be used later on to derive approximate RG flow equations for fast drives. Although the calculation of Sect. III B provides a physical picture, Eq. (24) cannot be used directly, because it was obtained for a monochromatic drive and is not easily generalized. Here, we write the Wigner Green functions, Eq. (13), as a perturbative series in E_n. We start with G_{R;n}(ω). The Floquet inverse propagator is written as a sum of two matrices: a drive-free part, which can be inverted straightforwardly, and a part containing the effect of the drive, proportional to the Fourier modes E_n. We truncate this expression at a finite order in E and convert the outcome to the Wigner representation order by order; the first-order result is given in Eq. (27). The advanced Green function is obtained from the retarded one through complex conjugation and exchange of the time arguments, Eq. (28). We stop at first order here; the second-order expansion, which we use in Sect. IV B, is given in App. C.

IV. DYNAMIC RG

In this section, we detail our RG approach to Floquet dynamics. Our main result, explained in Sect. IV C, is that the periodic drive actually destroys criticality. Sects. IV A and IV B are dedicated to deriving the appropriate RG flow equations, with a very general expression obtained in Sect. IV A [see Eq. (41)]; the asymptotic Ω^{-1}-expansion is carried out in Sect. IV B. We start, however, by comparing our dynamic RG to the traditional RG procedure and commenting on their conceptual differences and technical similarities. Both approaches are based on the idea of a mesoscopic effective description with an ultraviolet (UV) cut-off in momentum space. Eq. (6) provides this description, which is only valid on spatial scales larger than the inverse of the UV momentum cut-off Λ.
For example, 1/Λ can be interpreted as a discrete lattice spacing or as a small interaction range below which the physics is more complicated. An RG transformation is implemented by changing Λ (i.e. integrating out high-momentum shells) and reabsorbing this change into effective couplings. Lowering Λ to an effective running cut-off scale k amounts to coarse graining the small-scale degrees of freedom. The RG flow equations enable the computation of an effective action at any value of the running cut-off, starting from Eq. (6) at k = Λ. This change is obtained via a differential equation, in which the derivative of the couplings with respect to the running cut-off is given by loop integrals (see Fig. 5). The cut-offs are momentum scales and limit the momenta available in the loop integrals. As we discuss below, there is no need to implement a cut-off in the frequency integrals, which are unbounded. The difference between traditional and dynamic RG lies in the handling of the renormalization of the n ≠ 0 Fourier modes. In static RG, only the effect of the drive on the renormalization of the Fourier modes µ_0 and g_0 is accounted for, while in dynamic RG all the Fourier modes µ_n and g_n are treated on an equal footing and renormalized together. Fundamentally, the difference resides in the handling of the frequency integrations. In both cases, these are performed by closing the integration path in the complex plane and using the residue theorem. For this reason, the poles of the Green functions play an important role. As discussed in Sect. III B, these are located along a line in the complex plane, with identical imaginary parts and real parts separated by integer multiples of the drive frequency, ω_n = −Kk² − µ_0 + nΩ, see Fig. 4a. These poles dictate the frequencies that play an important role in the loop integrals. In the RG approach to undriven systems, there is only one pole (with n = 0); the high frequencies then do not play an important role, since only frequencies close to ω_0 = −Kk² − µ_0 contribute to the loop integrals. In the dynamic RG, however, there are infinitely many poles in the first place, and arbitrarily high frequencies contribute to the loop integrals. The Floquet formalism forces us to consider all the poles on an equal footing, and there is no cut-off on the frequency axis. The theory is coarse grained on a spatial scale given by k, but fast scales (∼ Ω) remain in the game. Although this is technically very similar to undriven RG (find the poles and use the residue theorem), it is physically very different. Frequencies are only defined modulo Ω in Floquet systems, because energy is not conserved; for this reason, arbitrarily high frequencies (separated by integer multiples of Ω) contribute. In particular, this plays an important role in the critical physics. As we see from Fig. 4a, the loop integrals are convergent as long as Im(µ_0) < 0. Fluctuations play an increasingly important role as |Im(µ_0)| decreases, and the critical point is reached as Im(µ_0) → 0. When the system is undriven, this phenomenon takes place on large spatial and temporal scales: only the small momenta and frequencies contribute significantly. In the presence of a periodic drive, however, the Floquet formalism forces the large frequencies to play a role at criticality. More precisely, all the poles contribute to the loop integrals with the same degree of divergence, but with different residues.
This explains how the effect of a very fast scale such as Ω is not averaged out at large scales, and can still affect the critical physics. The IRD system is recovered if the limit Ω^{-1} → 0 is taken before the system is sent to its critical point. Indeed, a rapid periodic drive has a small quantitative effect as long as the system is gapped, because all the poles introduced by the drive are subject to the same gap, and their contributions are suppressed by powers of Ω^{-1}. In the current work, we take the periodic drive into account within our dynamic RG approach. In addition, we use a well-known loop expansion similar to [163]. This provides access to the critical physics of a generic interacting system in 4 − ϵ spatial dimensions. It is a systematic expansion in powers of ϵ = 4 − d that is equivalent to the loop expansion at criticality. In particular, the RG fixed point and its properties depend smoothly on ϵ [and are simple when ϵ = 0, see Eq. (58)]. For this reason we extend the applicability of our results down to three spatial dimensions (ϵ = 1). In contrast with static approaches to RG, e.g. [164-167], we include the renormalization of all the Fourier modes of the couplings. We therefore have direct access to the time-dependent physics (as opposed to averages over the drive period, which are encapsulated in the n = 0 Fourier modes). Accounting for these dynamic effects is an essential feature of the mechanism that we uncover. Indeed, we expect that, although Φ_0 develops a finite expectation value continuously as the phase transition is crossed, the transition is actually first order, because the higher-order Fourier modes Φ_{n≠0} jump abruptly from zero in the symmetric phase to a finite value. The renormalization of periodically driven systems has already been undertaken in different situations. In particular, the RG was used as a resummation tool for 0d systems such as quantum dots [168-170], where non-linearities were treated non-perturbatively, and single molecules [171], where the RG was used to treat secular terms; these approaches did not focus on critical physics. Alternatively, disordered 1d periodically driven critical systems were studied in [164-167], where the RG was implemented either exactly [164, 165] or through a Schrieffer-Wolff [172, 173] transformation [166, 167]. Both cases involve a static approach to RG, in which the Floquet Hamiltonian is renormalized and used to infer the critical properties of the system. Finally, the periodically driven bosonic φ⁴ model was renormalized in [163] within a momentum shell 1-loop approach. This is an early dynamic RG approach in which a very general time-dependent interaction was included, and its effect on the renormalization of the couplings was computed in a simplified way. It produces a set of RG flow equations that lead to a breakdown of the underlying approximation, which is interpreted as the signature of a crossover to a state dominated by the drive. In this work, we extend the analysis performed in [53] by going to the next order in the asymptotic expansion in powers of Ω^{-1}. This has enabled us not only to check that our result persists when the additional O(Ω^{-2}) contributions are included, but also to further develop the formalism put forward in [53]. Moreover, we investigate the following key question: (i) is there a Floquet RG fixed point which drives a phase transition from the IRD phase to another far-from-equilibrium phase (see e.g. [95, 121, 166, 167, 174-176])?
Such a fixed point would implicate the drive in an essential way and would be fundamentally impossible to observe at (or close to) thermal equilibrium. Including the O(Ω^{-2}) terms provides an opportunity for competition between these and the O(Ω^{-1}) terms, which could yield a new fixed point. We do not find any new fixed point within the 1-loop approximation. We interpret this as a strong hint that such a new fixed point does not exist in the regime of large driving frequencies.

A. General RG flow equations

In this sub-section, we perform perturbation theory to 1-loop order in the presence of a periodic drive. The main result is Eq. (41), where the loop integrals are given in terms of the Wigner Green functions. The only approximation leading to Eq. (41) is the loop expansion (in turn controlled by the ε-expansion). The effects of the drive are not approximated yet. Our theory is defined through its microscopic action Eq. (6), which can be used to compute correlation functions such as Eq. (29), where A represents a generic observable, typically a product of fields φ and φ̄. The momentum integrals of Eq. (6) are cut off at a large momentum scale Λ, which we denote as the UV cut-off. We start by defining the flowing effective action Γ_k, which encodes the coarse graining inherent to the RG. Γ_k functions just like the microscopic action S, although with the momenta p > k being integrated out. Γ_k is defined precisely in Eq. (32), but can be represented schematically: Fourier modes with momenta up to k (< Λ) remain part of the functional integration, and the rest is incorporated in Γ_k. k is the running cut-off that decreases along the RG flow, thus including fluctuations on increasingly larger spatial scales. In order to compute Γ_k, we use an exact representation of the RG flow as a starting point. We emphasize, however, that the RG flow equations Eq. (41) can be equivalently obtained in a traditional momentum-shell loop expansion [124-126], of course taking into account the Floquet structure. See App. E, where we recover the WF fixed point to leading order in the ε = 4 − d expansion from Eq. (41). In this exact representation, the renormalization of S (i.e. the change of Γ_k with k) is given by Wetterich's flow equation [177],

$$k\partial_k \Gamma_k = \frac{1}{2}\,\mathrm{Tr}\Big\{ k\partial_k R_k \,\big[\Gamma_k^{(2)} + R_k\big]^{-1} \Big\}. \tag{32}$$

Γ_k is identified with S at large values of k, Γ_Λ = S, and Γ_k^{(2)} is the second field derivative of Γ_k, which has two sets of (field, space and time) indexes and depends on the fields φ and φ̄. The trace as well as the inversion and multiplication by k∂_k R_k in Eq. (32) are functional. They contain a sum over the discrete indexes and integrals over the continuous ones (see App. D 1). R_k is a cut-off operator that we will specify below. It is diagonal in Fourier and Floquet spaces, and is a function of momentum that is large for p ≲ k and small otherwise. The role of R_k is to suppress fluctuations on momentum scales smaller than k. Eq. (32) is a non-linear functional differential equation for Γ_k. It does not contain any approximation and (together with the initial condition Γ_Λ = S) it provides an alternative, equivalent functional differential representation complementing the functional integral representation of Eq. (29). In order to solve Eq. (32) we must pick an approximation scheme and a cut-off operator. We choose a sharp cut-off operator, i.e. R_k is zero for p > k and infinite otherwise. Moreover, we resort to traditional perturbation theory to 1-loop order.
The corresponding RG flow equation is obtained from Eq. (32) by neglecting the derivative of Γ_k^{(2)} with respect to the running cut-off on the right-hand side [178-180] and including a single momentum shell in the trace, Eq. (33). The Tr{...}_k contains a loop integration over all frequencies and discrete field indexes, but the momenta have a fixed modulus given by p = k, see Eq. (D7). This restriction is a result of the sharp cut-off operator. Eq. (33) can be interpreted as integrating out momentum shells one after the other as k is lowered from Λ to zero. Indeed, for dk infinitesimal it can be recast as a relation expressing Γ_{k−dk} in terms of Γ_k plus a trace-log contribution from the momentum shell [k − dk, k]. The similarity to 1-loop perturbation theory is now apparent. Indeed, the right-hand side of this equation can be rationalized within perturbation theory, by integrating out fluctuations within a momentum shell [k − dk, k], however with the RG improvement S → Γ_k in the argument of the trace-log term, turning the perturbative formula into an exact equation [177]. RG flow equations are obtained for the different couplings by expanding Eq. (33) in powers of φ and φ̄ and identifying the expansion coefficients on both sides. We focus on the terms of order 2 and 4. The flowing inverse retarded Green function G_R^{-1}(t, t′) is defined through the second field derivative of Γ_k, and the flowing two-body coupling is defined in Eq. (36). The RG flows of G_R^{-1}(t, t′) and Γ^{(4)}(t_1, t_2, t_3, t_4) are obtained by taking two and four derivatives of the right-hand side of Eq. (33), respectively, and evaluating the result at zero field; the flow of G_R^{-1}(t, t′) is given by Eq. (37), and similarly for k∂_k Γ^{(4)}(t_1, t_2, t_3, t_4). The right-hand side of Eq. (37) is the trace of the product of the matrix of Green functions G = 1/Γ^{(2)}[Φ = 0] [see Eq. (6), where Γ^{(2)} is the matrix in the first term of Eq. (11)] with the second field derivative of Γ^{(2)}, evaluated at zero field Φ = 0. There is no term with three derivatives of Γ_k, because these vanish when Φ = 0. The flows of G_R^{-1}(t, t′) and Γ^{(4)}(t_1, t_2, t_3, t_4) are represented diagrammatically in Fig. 5, with the lines representing different elements of the G matrix and the vertices representing the interaction. The RG flows of µ and g can be extracted from these equations by integrating them over their relative space and time variables (or, equivalently, setting the external momenta and frequencies to zero). The flowing dissipative mass µ_k is defined in Eq. (38) and coincides with µ(t) at k = Λ. The RG flow of µ_k is then obtained by taking the derivative of Eq. (38) with respect to the running cut-off k and inserting Eq. (37) on the right-hand side. The RG flow of the interaction parameter g_k(t) is obtained in a similar way (see App. D 1 for additional details). We see that the flow of G_R^{-1}(t, t′) [right-hand side of Eq. (37)] is a (non-linear) function of the couplings µ and g. Indeed, the right-hand side of Eq. (37) is obtained from G, which is the inverse of Γ^{(2)}, G^{-1} = Γ^{(2)}. For µ and g given, Γ^{(2)} is readily obtained, because it takes the same form as the second field derivative of the action S, Eq. (6) (at 1-loop), and is thus linear in µ. Computing the flow of G_R^{-1}(t, t′) is then a matter of inverting Γ^{(2)} to obtain G. This is where the Floquet formalism [and Eq. (15)] becomes very useful. For given time-dependent couplings µ(t) and g(t), the loop integrals are computed in three steps: (i) Γ^{(2)} is computed from µ(t) [see Eq. (38)]. (ii) G is computed from Γ^{(2)} with the Floquet formalism [and Eq. (5)]. (iii) Everything is assembled according to the right-hand side of Eq. (37) (or Fig. 5) and the frequency integrals are performed. This is where g(t) enters.
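As a concrete illustration of steps (i) and (ii), the sketch below builds Γ^{(2)} as a Floquet matrix truncated to |n| ≤ N and inverts it numerically to obtain G. The matrix structure (diagonal poles at $\omega = n\Omega - Kk^2 - \mu_0$, drive Fourier modes on the off-diagonals) is an illustrative assumption consistent with the pole positions quoted in Sect. III B; the exact signs, conventions and parameter values are not the paper's.

```python
import numpy as np

N = 8                                   # Floquet truncation: keep modes |n| <= N
Omega, K, k = 10.0, 1.0, 1.0
mu = {0: -0.5j, +1: 0.1j, -1: 0.1j}     # hypothetical Fourier modes mu_n

def Gamma2(w):
    """Step (i): inverse retarded propagator as a truncated Floquet matrix.
    Diagonal entries place the poles at w = n*Omega - K k^2 - mu_0;
    the drive enters through the off-diagonal Fourier modes mu_{n-m}."""
    idx = list(range(-N, N + 1))
    return np.array([[(w - n*Omega + K*k**2 + mu[0]) if n == m
                      else mu.get(n - m, 0.0)
                      for m in idx] for n in idx], dtype=complex)

def G(w):
    """Step (ii): the full Floquet Green function from numerical inversion."""
    return np.linalg.inv(Gamma2(w))

# Step (iii) would assemble loop integrands from matrix elements of G and
# integrate w over a single Floquet zone, |w| < Omega/2.
print(np.abs(G(0.0)).max())
```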
Real-time representation

At 1-loop, the flow of G_R^{-1}(t, t′) is quite simple: only the dissipative mass is renormalized. Then, we can insert Eq. (5) on the left-hand side of Eq. (37), integrate over τ and obtain Eq. (39); the pre-factor appearing there is the area factor that emerges from the momentum integration. Similarly, we obtain the flow of g, Eq. (40) (see App. D 1). Here and in the following, the RG flow equations involve Green functions evaluated at p = k, which (after including the k-dependence) are given by Eq. (11). The Green functions that appear here are analogous to the Green functions discussed in Sect. III A, with the only difference that p is replaced by k and µ by µ_k, as a consequence of the RG procedure. The two above equations provide a real-time representation of the RG flow equations. They describe the coarse graining of the entire time dependence of µ(t) and g(t). The periodicity of the drive is actually not included here, and no assumption or approximation is made with respect to the drive frequency. This representation of the RG flow equations is very general and can be used, for example, also as a basis for an adiabatic expansion, i.e. in the opposite limit to the one considered here (see App. A and [181]). It is however not the most efficient representation for our purpose, because the periodicity of the drive is not built in. Moreover, it is difficult to use without an additional approximation, because the relation between the coupling µ and the Green functions is not yet worked out. This relation stems from the first equation of Eq. (15), where G^{-1} = Γ^{(2)} is a known functional of µ. For a general time dependence of the couplings, it can be an extremely complex relation.

Wigner representation

We incorporate the periodicity of the drive in the real-time RG flow equations Eqs. (39) and (40) by converting the Green functions to their Wigner representation and including the time-periodicity of g. Inserting Eqs. (7) and (12) into Eqs. (39) and (40) provides the set of coupled RG flow equations Eq. (41) for µ_n and g_n, where $m_{1234} = \sum_{i=1}^4 m_i$ and δ_{ij} is the Kronecker delta. As before, the right-hand sides of these equations are non-linear functions of µ_n and g_n. The complexity of these functions stems from the relation between the couplings and the Green functions, Γ^{(2)} = G^{-1}, where Γ^{(2)} depends on µ_n linearly. We will reduce this complexity with our asymptotic Ω^{-1}-expansion in Sect. IV B. Eq. (41) provides a set of differential equations for the effective couplings as the running cut-off k is lowered. They are obtained to first order in perturbation theory and with momentum-shell RG [124-126]. Our main result [based on Eq. (49)] will be derived from this representation. We reiterate that the Wigner Green functions have the advantage of exploiting the Floquet formalism (and thus incorporating the periodic drive automatically) while retaining an infinite frequency range to integrate over. This makes it possible to treat (and interpret) the loop integrals in a way that is standard for undriven problems.

Floquet representation

We give a third representation of the flow equations before we perform the asymptotic Ω^{-1}-expansion. To this end we define sets of matrices whose elements are given in terms of the Fourier modes of the coupling: g_n represents the n-th matrix, and r and s label its elements. The flow equations can be written in terms of these matrices and the Floquet representation, Eq. (14), of the Green functions; the flow of g_n takes one form if n is even and another for n odd.
Here we use $\tilde G_{R,m_1 m_2}(\omega) = G_{R,-m_2,-m_1}(\omega)$. The traces and matrix multiplications in these equations involve the Floquet indexes only, $\mathrm{tr}\{AB\} = \sum_{nm} A_{nm} B_{mn}$, and the frequency integrals are restricted to a finite interval bounded by ±Ω/2. This representation can be useful for numerical implementations of the RG flow, where the full Ω dependence can be captured by truncating all the matrices to a finite number of Fourier modes. It is however not as transparent analytically.

B. Simplified RG flow equations

We now discuss how the RG flow equations, Eq. (41), can be simplified in the presence of a fast drive. We start by obtaining general equations to O(Ω^{-1}), Eq. (45). Next we further simplify the RG flow equations by choosing purely imaginary couplings. This simplification allows us to include the terms up to O(Ω^{-2}) [leading to Eq. (49)], and eventually confirms the picture that emerges at O(Ω^{-1}). Finally we choose a monochromatic drive and obtain Eq. (52).

Asymptotic O(Ω^{-1})-expansion

As we have discussed in Sect. III B, the drive produces infinitely many poles in the Green functions that must all be taken into account, since they all produce divergences of the same order. This can be done systematically within the asymptotic Ω^{-1}-expansion, where we identify all the terms at a given order (and lower) in Ω^{-1} and neglect the rest. See Sect. III C for details on how this expansion is performed. For general couplings, and to order Ω^{-1}, the asymptotic expansion of the RG flow equations is obtained by inserting Eqs. (28) and (27) into Eq. (41), leading to Eq. (45). Here and in the following we use M_0 = K_0 k^2 + µ_0. Y_n and X_n are the pre-factors of the O(Ω^{-1}) corrections. We see from Eq. (45) that even and odd Fourier modes are not renormalized in the same way. For any given microscopic drive, Fourier modes with arbitrarily high values of n are generated by the RG. We see however that the RG preferentially doubles the drive frequency, by generating mode n = 2r from mode n = r already at O(Ω^0). See App. D 2 for additional details.

Imaginary couplings and O(Ω^{-2}) corrections

It was observed in [115-117] for time-independent couplings (IRD limit) that the critical physics is governed by a fixed point with purely imaginary couplings. The flow of the real parts of the couplings to zero is a signal of the full decoherence that emerges on large scales, where the dynamics is purely dissipative. We expect this phenomenology to be applicable to the present case, and therefore neglect the real parts from the outset. This provides a great simplification and has enabled us to include corrections up to order Ω^{-2}. In this way, we will not be able to recover the new exponent measuring decoherence [115-117] (or investigate the effect that the periodic drive can have on it), but this is a 2-loop effect that cannot be captured by our current 1-loop approach anyway. The imaginary couplings provide real parameters that can be used to represent the system's phase diagram. In particular, this means that we can think of the n = 0 Fourier mode Im(µ_0) as a dissipative mass that triggers the phase transition when it is lowered below a threshold (see Fig. 7a). Purely imaginary couplings are given by K = iK^I, µ(t) = iµ^I(t) and g(t) = ig^I(t), with K^I, µ^I(t) and g^I(t) real in the time domain. The Fourier modes of the couplings are then complex numbers that satisfy the reality constraint $\mu^I_{-n} = (\mu^I_n)^*$ and $g^I_{-n} = (g^I_n)^*$ [Eq. (47)]. As can be checked from Eq. (45), in the case of purely imaginary couplings no real parts are generated by the RG. This is related to a symmetry of the mesoscopic action that is realized when the couplings are all imaginary. Physically, this reflects the fact that no reversible dynamics can emerge out of purely dissipative dynamics (except in topologically non-trivial systems at the boundary [182]).
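The reality constraint quoted above is nothing but the statement that µ^I(t) is a real function of time. A quick numerical check (the drive profile below is arbitrary and chosen purely for illustration) confirms it:

```python
import numpy as np

T = 2*np.pi                                   # drive period (Omega = 1 in these units)
t = np.linspace(0, T, 1024, endpoint=False)
mu_I = 0.3 + 0.2*np.cos(t) + 0.05*np.sin(2*t)  # any real periodic coupling

modes = np.fft.fft(mu_I) / len(t)              # modes[n] ~ Fourier mode mu^I_n
for n in range(1, 4):
    assert np.allclose(modes[-n], np.conj(modes[n]))   # mu^I_{-n} = conj(mu^I_n)
print(modes[1], np.conj(modes[-1]))            # identical
```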
The corresponding flow equations are given in Eq. (49). The drive is encapsulated in drive parameters that depend on µ_n and g_n [Eq. (50)]. Here X_n is the same pre-factor as the one defined above. This produces the terms with g_n and G_n above. The expansion of the O(µ_{n≠0}) terms starts at O(Ω^{-1}) and produces the terms with R_n, X_n and Q_n. Finally, the expansion of the O(µ²_{n≠0}) terms starts at O(Ω^{-2}) and produces the remaining terms. The above equations are the result of a double perturbative expansion. Eq. (41) is controlled for a weak coupling. More precisely, it is systematic to order one in the ε = 4 − d expansion. As in a standard φ^4 analysis, our results depend smoothly on ε and the critical physics is exactly captured for d = 4. We can then (at least qualitatively) extend our results down to d = 3. Eqs. (45) and (49) are the result of a further asymptotic expansion in powers of Ω^{-1} and are systematic to order 1 and 2 respectively. Our results are therefore systematic to O(ε) × O(Ω^{-2}).

Monochromatic drive

We now choose a specific model, where the drive is monochromatic. The microscopic couplings (at k = Λ) are chosen as in Eq. (51) [see also Eq. (47)], with µ^I_1 and g^I_1 complex valued: only the Fourier modes n = 0, ±1 are non-zero. Then the RG flow equations can be greatly simplified. In particular, we can focus on the flow of µ^I_0 and g^I_0 and write simplified flow equations for the drive parameters M_2, X_0, S_0, R_0, G_0 and Q_0. These parameters are all real for purely imaginary couplings, and they quantify the importance of the drive, since they vanish together with the drive amplitude. X_0, which was already identified in [53], is the only drive parameter that remains at O(Ω^{-1}). The simplified equations are obtained by inserting Eq. (49) into the derivatives of the drive parameters with respect to the running cut-off. A detailed analysis of the different terms that emerge (done in App. D 2) provides Eq. (52), with Q_0 = 0 and U given in Eq. (D24). The details of their derivation are given in App. D 3. Together with Eq. (49), these equations reduce to the flow equations used in [53] when they are truncated to O(Ω^{-1}). For a monochromatic drive, the drive coefficients can be related to the complex phases and amplitudes of µ^I_1 and g^I_1, written as $\mu^I_1 = |\mu^I_1| e^{i\theta_\mu}$ and $g^I_1 = |g^I_1| e^{i\theta_g}$. The amplitudes |µ^I_1| and |g^I_1| provide the amplitude of the oscillations, and the complex phases θ_µ and θ_g the phase of the time dependence of the couplings. Inserting this parametrization into Eq. (51) provides Eq. (53); in turn, inserting Eq. (51) in Eq. (50) provides simpler expressions for the drive coefficients [see Eq. (55)]. With Eq. (53), we see that only the difference between the two phases, θ_µ − θ_g, appears in the RG flow equations. For example, we have $X_0 = 4|\mu^I_1||g^I_1| \sin(\theta_\mu - \theta_g)$. This expresses the fact that the Floquet steady state is unchanged if the time dependences of all the couplings are shifted together. We emphasize that the above equations provide the microscopic (monochromatic) drive parameters at the beginning of the RG flow. As scales are integrated out, higher-order Fourier modes are generated and the sums in Eq. (50) must be accounted for (see App. D 2).
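Since X_0 is the only drive parameter surviving at O(Ω^{-1}), the leading effect of a monochromatic drive is controlled entirely by the relative phase of the two couplings. A one-line numerical illustration of the formula quoted above (all parameter values arbitrary):

```python
import numpy as np

def X0(mu1, g1):
    """Leading O(1/Omega) drive parameter for a monochromatic drive:
    X0 = 4 |mu1| |g1| sin(theta_mu - theta_g), as quoted in the text."""
    return 4*np.abs(mu1)*np.abs(g1)*np.sin(np.angle(mu1) - np.angle(g1))

mu1 = 0.2*np.exp(1j*0.7)
g1  = 0.1*np.exp(1j*0.7)                 # in phase with mu1
print(X0(mu1, g1))                       # 0: in-phase couplings kill the O(1/Omega) effect
print(X0(mu1, g1*np.exp(1j*np.pi/2)))    # maximal when the phases differ by pi/2
```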
C. Critical physics

In the IRD limit, our system undergoes a second order phase transition as the dissipative mass Im(µ_0) is lowered below a critical value. We will see in this sub-section that the inclusion of even a weak drive has a dramatic effect on this transition: in the presence of a periodic drive a new scale enters, and it becomes impossible for the correlation length to diverge.

Re-scaling

Second order phase transitions and critical physics are characterized by scaling solutions to the RG flow equations. The physics is fully scale invariant (and the correlation length is infinite) if the flowing couplings are proportional to powers of the running cut-off scale: $\mu_n \sim k^{D_{\mu_n}}$ and $g_n \sim k^{D_{g_n}}$. The exponents $D_{\mu_n}$ and $D_{g_n}$ are the scaling dimensions of the couplings. Furthermore, the critical physics (with a large yet finite correlation length) can be extracted from the reaction of the flow to small perturbations of these scaling solutions. These take a particularly simple form when they are written in terms of the rescaled couplings: $\hat\mu_n = \mu_n k^{-D_{\mu_n}}$ and $\hat g_n = g_n k^{-D_{g_n}}$. The above scaling solutions turn into a fixed point (for µ̂_n and ĝ_n), where the flow of the rescaled couplings stops. Different fixed points (i.e. universality classes) can have different scaling dimensions and therefore correspond to different rescaling choices. The scaling dimensions are computed from the RG flow equations. It is convenient to write the scaling dimensions as a sum of their canonical dimensions and an anomalous correction. The canonical dimensions are fixed and given by the canonical scaling at the Gaussian fixed point, where the spatial and temporal coordinates are rescaled as $\hat q = q/k$ and $\hat\omega = \omega/(K^I k^2)$. In the IRD case (and at equilibrium) the relevant fixed point is the interacting Wilson-Fisher (WF) fixed point (see App. E). There the anomalous dimensions vanish at 1-loop order in perturbation theory. In this work, we are interested in the effect of the drive on the IRD criticality and therefore extend this WF rescaling to all the couplings [Eq. (56)]. We also rescale $\hat\Omega = \Omega/(K^I k^2)$, and the flow equations are given by Eq. (57). We emphasize that this choice is tailored to capture the rapidly driven system close to the IRD criticality, i.e. close to the corresponding WF fixed point. It would not be a good choice to look for a new hypothetical nonequilibrium Floquet fixed point, which could describe a second order phase transition at a finite drive frequency. We see no indication of such a fixed point in our flow equations. Yet, a hypothetical, perturbatively (in Ω^{-1}) inaccessible fixed point cannot be fully excluded based on our 1-loop analysis. In this case however, we do not expect it to play a role in the present regime of asymptotically large Ω. Indeed, such a fixed point would depend on the drive frequency as a fixed parameter, because Ω enters in the discrete time-translation invariance of the system and can thus not be renormalized. The fixed point couplings (such as µ* and g*) would then take extreme values when Ω^{-1} is very small. This means that the RG flow would need to bring the system across a wide range of couplings before this fixed point is felt. In summary, such a Floquet criticality could only play a role at asymptotically large scales when Ω is asymptotically large.

Critical exponents

We find that, in the IRD limit, the RG flow equations, Eq. (57), coincide with the known equilibrium RG flow of O(2) models (see App. E).
In particular, Eq. (57) contains a single fixed point, where all the drive parameters vanish and µ̂_0 and ĝ_0 take their equilibrium values to order O(ε). This is the WF fixed point. The finite drive frequency plays an important role by providing additional couplings that destabilize the WF fixed point and eventually prevent the RG flow from ever reaching it. We start by describing this mechanism qualitatively and explain how the drive can produce a finite correlation length even when the equilibrium couplings are tuned to criticality. Quantitative predictions and a precise consistency with the result of [53] will come next. When the system is tuned to the critical region of the phase diagram, the RG flow takes the system very close to its fixed point, where it stays for a wide range of values of the running cut-off scale: k ∈ [k_0, k_1], with k_0 ≪ k_1 by definition of the critical region (see Fig. 6). Both the first (with Λ > k > k_1) and last (with k_0 > k) parts of the flow are not universal. Large and small values of k characterize the small- and large-scale physics respectively. The physics is critical when there is a wide range between these two extremes. In particular, tuning the system to its critical point is equivalent to sending k_0 to zero and extending the scale-invariant regime to arbitrarily large scales. We will identify k_0 with the system's correlation length, ξ ∼ k_0^{-1}, because it provides a scale beyond which non-universal physics kicks in. Then ξ diverges as the system is tuned to the critical point, and the scaling of ξ with the distance to criticality is directly related to the rate at which the RG flows away from its fixed point. Quantitative predictions are obtained by looking at the flow close to its fixed point. To this end we define the vector $\vec{G}$, which denotes the displacement from the WF fixed point in the space of couplings. The critical physics is fully characterized by the RG flow close to this fixed point. For this reason we focus on systems where the coordinates of $\vec{G}$ are small, and linearize the RG flow, $k\partial_k \vec{G} = M\,\vec{G}$. M is the stability matrix of the flow equations evaluated at the WF fixed point, which can be extracted from Eq. (57). Its eigenvalues provide the escape rate of the RG flow from the fixed point. We find that it is an upper-triangular matrix (see App. F), so that its eigenvalues can be read off its diagonal. They are given, to O(ε), in Eq. (61). The stability matrix must have an upper-triangular structure at the fixed point. Indeed, when all the drive coefficients vanish at the beginning of the RG flow (as is the case in the IRD limit), then they are zero for the entire flow. Close to the fixed point this is reflected in the upper-triangular structure of M, which imposes that, if the drive coefficients vanish on the right-hand side, then they do not change as k is lowered. Here we will also benefit from the fact that M is fully upper-triangular. This will be used later to easily show that criticality can only emerge when s = u = x = 0. In particular, IRD criticality, together with the corresponding divergence of the correlation length ξ ∼ |∆|^{−ν}, is fully contained in Eq. (61). The mass gap ∆ represents the distance to the critical point in the IRD limit. This is a result of the upper-triangular block structure of M. Indeed, in the IRD limit it is possible to leave the drive coefficients out of the problem and restrict M to its upper-left block, which is a 2 × 2 matrix. The critical exponents are then the first two entries in Eq. (61), $\lambda_1 = -2 + \tfrac{2\epsilon}{5}$ and $\lambda_2 = \epsilon$ (see App. E).
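The role of the upper-triangular structure can be checked numerically: the eigenvalues of a triangular matrix are its diagonal entries, so the exponents can be read off directly. In the sketch below the first two diagonal entries are the IRD values quoted above, while the remaining entries and all off-diagonal elements are arbitrary stand-ins for the drive directions, not the paper's actual matrix (which is given in App. F).

```python
import numpy as np

eps = 1.0                                  # epsilon = 4 - d, i.e. d = 3
# Diagonal: the two IRD (Wilson-Fisher) eigenvalues, then stand-ins for the
# drive directions (four of the eight eigenvalues are negative, i.e. relevant).
diag = [-2 + 2*eps/5, eps, 2.2, 1.7, 1.3, -eps, -1.5*eps, -2*eps]
M = np.triu(np.random.default_rng(0).normal(size=(8, 8)), k=1) + np.diag(diag)

assert np.allclose(np.sort(np.linalg.eigvals(M).real), np.sort(diag))
nu = -1/diag[0]                            # nu = 1/(2 - 2*eps/5)
print(nu, 0.5 + eps/10)                    # 0.625 vs the O(eps) value 0.6
```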
In that case, the fixed point is made unstable by the negative (so-called relevant) eigenvalue λ_1. In the critical region the couplings eventually flow away from the fixed point as k is lowered (see Fig. 6), unless the system is tuned to hit it exactly. This identifies the phase boundary as a separatrix between the two attractive (under RG) regions of the parameter space. We use ∆ to denote the distance from the phase boundary in the IRD limit and find that the flow behaves as $|\vec{G}| \sim |\Delta|\, k^{\lambda_1}$ for k_1 ≫ k ≥ k_0. The running cut-off scale where the flow leaves the vicinity of the fixed point is k_0; it is extracted by setting $|\vec{G}(k_0)| \sim 1$. Identifying the correlation length with the inverse of this scale, ξ ∼ 1/k_0, leads to the well-known scaling ξ ∼ |∆|^{−ν} with ν = −1/λ_1 ≅ 1/2 + ε/10 [124]. We now return to the rapidly driven system, where the above argument has to be generalized. In the presence of a drive all the eigenvalues of M become available, and there are 4 negative eigenvalues. The fixed point is therefore greatly destabilized, and 4 independent couplings have to be tuned to a specific value for ξ to diverge. Our main result is based on the fact that (as we show below) this tuning amounts to either setting the drive amplitude to zero or, equivalently, going to the IRD limit. It can be seen from the stability matrix, Eq. (F2), that the fixed point can only be reached when s = u = x = 0. Indeed, the solution of the linearized flow equations is a superposition of eigendirections, with coefficients c_i and scale dependence $k^{\lambda_i}$ [Eq. (62)]. In particular, we find that c_8 is proportional to x, that c_7 is a linear combination of x and u, and that c_6 is a linear combination of x, u and s. If c_{6,7,8} = 0, then also x = u = s = 0. c_{6,7,8} (and c_1) are the coefficients associated to negative eigenvalues, and are therefore relevant couplings. This implies that the fixed point can only be reached (and the correlation length can therefore only diverge) if s, u and x are set to zero. In that case we recover IRD criticality as discussed above. The above IRD argument leading to an estimation of the correlation length can be directly generalized to the driven case. Indeed, the only effect of the drive is to provide additional flow directions along which the WF fixed point is unstable (see Fig. 6). The scale k_0 ∼ ξ^{-1} is then defined, as before, as the value of the running cut-off scale where the flow leaves the vicinity of the fixed point. We see that the drive introduces multiple scaling regimes. We illustrate this by going back to O(Ω^{-1}), where all the drive coefficients but x drop out. Then only c_1 and c_8 (with λ_{1,8}) remain in the above equation, and c_8 ∼ x. We find that for |c_1| large enough, we have $|c_1|^{\nu} > |c_8|^{1/\lambda_8}$, and the correlation length scales as ξ ∼ |c_1|^{−ν}. The IRD scaling, ξ ∼ |∆|^{−ν}, is thus recovered when ∆ is large enough (see App. F). When |c_1| decreases, however, $|c_8|^{1/\lambda_8}$ takes over and the correlation length scales as ξ ∼ |x|^{−1/ε}. The only way to have ξ → ∞ as c_1 → 0 is to set x = 0. Moreover, we can estimate the point at which the system crosses over from one scaling to another by equating the correlation lengths in the two regimes. We find that this happens when $|c_1|^{\nu} \sim x^{1/\epsilon}$ (see Fig. 7a). We can picture this as the critical phase boundary being blurred as a result of the system being rapidly and periodically dragged across the phase transition.
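The resulting blurring can be made quantitative with a two-line sketch of the scaling argument: the flow escapes along whichever relevant direction reaches $|\vec{G}| \sim 1$ first, so ξ is cut off by the drive coefficient. The matching of the two regimes via a min(...) and all parameter values below are illustrative.

```python
import numpy as np

eps = 1.0
nu = 0.5 + eps/10                 # IRD exponent to O(eps)
x = 1e-3                          # monochromatic drive coefficient (arbitrary)

def xi(c1):
    """Correlation length in the O(1/Omega) picture: the IRD scaling
    |c1|^-nu is cut off at |x|^(-1/eps) by the drive direction."""
    return min(abs(c1)**(-nu), abs(x)**(-1/eps))

for c1 in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-8]:
    print(f"c1 = {c1:.0e}   xi = {xi(c1):9.1f}")
# xi follows |c1|^-nu until |c1|^nu ~ x^(1/eps), then saturates at 1000:
# the correlation length never diverges while the drive is on.
```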
When the system is tuned to the phase boundary, the dissipative mass indeed vanishes on average over a period, but retains oscillating components because of the drive [see Eq. (1)]. These in turn allow the system to exhibit a finite correlation length even when ∆ = 0. We can now identify the role of the additional drive parameters that enter at O(Ω^{-2}). These produce further scaling regimes. The picture remains however the same: far from the IRD critical point (|c_1| large enough), the scaling is the equilibrium one, ξ ∼ |∆|^{−ν}. As we try to approach the critical point, however, |c_1| decreases and the scaling crosses over to one of the $\xi \sim |c_{6,7,8}|^{1/\lambda_{6,7,8}}$. The correlation length never diverges. We emphasize that the 3 relevant exponents $1/\lambda_{6,7,8}$, which take on the values 1/ε and 1/(2ε), are new and original critical exponents that emerge because of the effect of the rapid periodic drive. They cannot be related to any of the IRD exponents, and can only be observed in the presence of a rapid periodic drive. This is in direct opposition to the exponents that emerge in the adiabatic regime, which are related to the equilibrium critical exponents by the Kibble-Zurek mechanism. We conclude this section by pointing out that the drive can also affect the phase transition in a non-universal way. Indeed, the sign of c_1 controls the macroscopic phase of the system (see Fig. 7b). In the IRD case we find that ∆ is proportional to c_1 (see App. F). When the system is driven, however, c_1 becomes a linear combination of ∆ and the drive couplings. This leads to a drive-dependent shift of the phase boundary (see Fig. 7a): in the presence of a drive, the position of the phase transition in the parameter space is shifted. The drive protocol can be used to either enhance or suppress the emergence of a condensate. See e.g. [65,90-92], where similar effects were studied. This is a static effect where the IRD description remains valid, but is renormalized by the drive.

V. DISCUSSION

In the previous section, we found that non-vanishing drive coefficients inhibit criticality by imposing a finite correlation length onto the system as it changes its phase. We interpret this as signaling the presence of a fluctuation-induced weakly first order transition, as we argue in the following. While first order transitions commonly rely on an explicit symmetry breaking to impose a finite scale at the phase transition, here the transition is turned first order as a result of the strong fluctuations of the periodic degrees of freedom. The system goes through a first order transition where the U(1) symmetry is spontaneously broken (see Fig. 8a). Far away from the phase transition (blue areas in Fig. 7a), we expect the system to depend smoothly on the drive protocol. Close to the IRD limit, the effect of the drive is small, and there are two different phases also when Ω is finite. Our calculation then shows that, when the drive is switched on, it is possible for the system to break the U(1) symmetry spontaneously (with the order parameter changing non-analytically) without going through a critical point. Indeed, this is the phenomenology of a first order phase transition.
[Fig. 8 caption: (a) The blue continuous and red dashed curves represent the potential as a function of the order parameter in the symmetric and ordered phase respectively. The usual 'Sombrero' potential for the order parameter is modified when additional gapless modes are integrated out and behaves as a φ^6 potential, where the order parameter jumps from being zero to being finite as µ^I_0 is lowered below a threshold. (b) 3-state Potts model: the black ring represents the minimum of the potential of the fully U(1)-symmetric system. In the Potts model this symmetry is explicitly broken down to a discrete 3-fold symmetry Z_3, so that the potential has 3 minima (shown as red dots) and two different masses (blue and green arrows). As the system goes through the phase transition only one of these masses (vertical blue arrow) vanishes, while the other stays finite and imposes a finite correlation length on the system.]

A more detailed physical picture can be obtained by drawing a parallel with phase transitions that are driven from second to first order by virtue of strong fluctuations (see Fig. 8a). The present mechanism is analogous to the Coleman-Weinberg or Halperin-Lubensky-Ma mechanism, where additional gapless modes, such as gauge fields [127,128] or Goldstone modes [129,130], compete with the critical ones in the vicinity of a phase transition and change it from second to first order. In the driven case, the fluctuating fields can be decomposed in discrete Floquet components Φ_n(ω) = Φ(ω − nΩ) (with |ω| < Ω/2) that describe the occupation of the different Fourier modes of the order parameter [see Eq. (8)]. The analogy lies in the fact that the fluctuations of Φ_0 compete with the fluctuations of Φ_{n≠0} and are no longer able to become critical. Undriven systems are invariant under continuous time translations, Φ(t) → Φ(t + ∆t) for arbitrary ∆t. This continuous symmetry is however broken down to a discrete one, Φ(t) → Φ(t + 2π/Ω), in the presence of a periodic drive. The continuous symmetry is trivially restored when the drive is switched off, µ_{n≠0} = g_{n≠0} = 0, but also in the IRD limit Ω^{-1} = 0, where the rotating wave approximation is applicable. This is manifest in Eq. (23), where the effect of the drive enters through the ratio E/Ω, which vanishes when Ω^{-1} = 0. Even when they do not vanish, the Fourier modes µ_{n≠0} and g_{n≠0} play no role in this limit. This is reflected in the RG flow equations, Eq. (49), where the Fourier modes µ_{n≠0} and g_{n≠0} decouple from the rest of the problem when Ω^{-1} = 0. Conversely, the explicit breaking of time-translation symmetry allows for the presence of additional dimensionful couplings µ_{n≠0} and g_{n≠0}. These are not compatible with the undriven dynamical φ^4 theory, and lead to new relevant couplings at the WF fixed point. This allows us to draw a different parallel, this time with the Potts model. There, a continuous external (order parameter) symmetry is explicitly broken down to a non-trivial discrete subgroup (e.g. U(1) → Z_3 in the Potts model [183,184], or similar phenomena in O(N) models [185,186]); see Fig. 8b. This allows for new relevant operators to emerge. The analogy with our system is that, while we do not break the external phase-rotation symmetry U(1) ≃ O(2), we explicitly break time-translation invariance down to its discrete version. The corresponding newly relevant operators, which are given by the drive coefficients, emerge as a result of the non-vanishing Fourier modes µ_{n≠0} and g_{n≠0}.
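The φ^6 phenomenology invoked in Fig. 8a is easy to reproduce at the mean-field level. The toy potential below (arbitrary couplings, written for ρ = |φ|², and not taken from the paper) has a negative quartic term stabilized by a sextic one; minimizing it shows the order parameter jumping discontinuously as the mass term is lowered.

```python
import numpy as np

g, u = -1.0, 1.0                        # negative quartic, stabilizing sextic (toy values)
rho = np.linspace(0.0, 2.0, 200001)     # rho = |phi|^2 >= 0

def rho_min(mu):
    """Global minimum of V(rho) = mu*rho + (g/2) rho^2 + (u/3) rho^3."""
    V = mu*rho + 0.5*g*rho**2 + (u/3.0)*rho**3
    return rho[np.argmin(V)]

# Mean-field first order transition at mu_c = 3 g^2/(16 u) = 0.1875,
# with a jump of the order parameter to rho* = 3|g|/(4u) = 0.75:
for mu in [0.30, 0.20, 0.19, 0.185, 0.10]:
    print(f"mu = {mu:5.3f}   rho_min = {rho_min(mu):.3f}")
```

The jump of ρ between µ = 0.19 and µ = 0.185 is the first order transition, even though the potential is perfectly smooth in its couplings throughout.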
This interpretation in terms of a first order transition is further supported by the fact that, in the presence of a periodic drive, there is nothing stopping g^I_0 from becoming negative along the RG flow (see e.g. [187]). Indeed, the flow of g^I_0 does not stop, and g^I_0 even decreases (as k is lowered) when g^I_0 = 0 [see Eq. (49)], since X_0² and/or G_0 are never negative and Q_0 ≅ 0. This means that the effective large-scale description can become unstable unless higher-order couplings are taken into account. This is exactly what happens in φ^6 theory, where the two-particle coupling is allowed to be negative from the outset, and only the three-particle coupling (the coefficient of the sextic term) stabilizes the theory. There the U(1) symmetry breaks spontaneously through a first order transition. We emphasize that this is most likely to happen close to the driven critical point, where $g^I_0 \cong k^{\epsilon}\hat g^*$ becomes very small while X_0 and G_0 are held fixed. A fluctuation-induced first order transition presents a qualitative effect of the drive that remains even when the drive is (finite, but) very rapid. This means that the rotating wave approximation, which predicts a second order phase transition, must break down in the critical region. We close this section by demonstrating that the mechanism discussed above is universal. For a generic drive protocol the drive coefficients will not be zero, and they will impose a finite correlation length. If we take the example of a monochromatic drive and purely imaginary couplings, we find that the drive parameters can vanish when either µ^I_{±1} = 0 or g^I_{±1} = 0 [see Eq. (55)]. This would suggest that criticality in a driven system remains possible for an appropriately chosen drive protocol. However, this is only possible when the drive is fine-tuned to a specific form. Indeed, we have chosen a mesoscopic description of the model, which is written in terms of the dynamics of the order parameter alone [see the discussion leading up to Eq. (6)]. The connection between the microscopic and mesoscopic descriptions is extremely complex. When the system is driven at the microscopic level, this drive propagates through the full problem and produces a complex mesoscopic theory, where all the couplings are synchronized with the drive. In practice it is not possible to drive the system microscopically in such a way that µ(t) or g(t) is an exact constant in time, and there is no reason to assume that this happens accidentally. We can actually see an example of this in the general form of the RG flow equations, Eq. (49). Indeed, when either µ or g is undriven (µ_{n≠0} = 0 or g_{n≠0} = 0), the coupling that is set to zero will be generated by the other as the running cut-off scale is lowered. (This is not explicitly visible in Sect. IV C, where setting g^I_{±1} = 0 removes all the drive coefficients, and these are not generated in Eq. (57). This apparent discrepancy with Eq. (49) lies in the fact that g^I_{±1} is generated at O(Ω^{-2}) only, and these contributions disappear when this is inserted in Eq. (52). There would be no discrepancy if higher orders in Ω^{-1} were taken into account.) This means that when the RG flow reaches the vicinity of the fixed point (and our analysis applies), the relevant drive coefficients can only vanish if there is a physical mechanism forcing them to be zero. Although we identify such a mechanism in App. B as a symmetry of our action, this reflects the fine tuning anticipated above, since the symmetry of App. B will not be realized generically.

VI. CONCLUSION

We have combined the Floquet and Keldysh formalisms into a dynamic RG approach that consists of jointly accounting for the renormalization of the static (Fourier mode n = 0) and periodic sectors, together with their interplay.
This has enabled us to account for strong critical fluctuations and include the effects of a periodic drive in the dynamics of a driven open gas of bosons at a second order phase transition. In the presence of a rapid drive, the system goes through a phase transition where the U(1) symmetry spontaneously breaks. This is a second order transition in the IRD limit, where the system is invariant under time translations. Our main result is that the periodic drive enables new couplings to enter, which in turn regularize the divergence of the correlation length that takes place at this critical point. This is a universal mechanism that mainly relies on the existence of an interacting critical point, and it could be realized in varied physical systems. There are nevertheless important conditions for its possible realization. First, it takes place in systems that have reached a Floquet steady state, where the drive and dissipation compensate each other and all the Floquet modes are populated. The system must be allowed to fully synchronize with the drive. Dissipation is therefore an essential ingredient. Second, there must be a non-trivial critical point, i.e. the underlying IRD RG fixed point must not be Gaussian (ĝ* ≠ 0). Indeed, in the absence of interactions, the drive decouples from the IRD physics (the stability matrix becomes fully block diagonal) and can no longer affect the critical physics. This is reflected in our calculation by the fact that all the newly relevant couplings become irrelevant if ε < 0 (for d > 4), when the WF fixed point becomes trivial. In particular, this excludes integrable systems and most classical 1d systems. At this stage, several directions of research await their exploration. A first one concerns a more precise computation of the new critical exponents emerging in the rapid drive limit, Ω^{-1} → 0. This could be achieved via our dynamic RG approach within a 2-loop calculation, systematic to order ε² [7], or via a non-perturbative RG approach [177,188]. Furthermore, it will be interesting to explore the existence of a novel Floquet RG fixed point at intermediate values of Ω, now of comparable size to other scales in the problem. We expect such a fixed point to be out of reach of our Ω^{-1}-expansion. Therefore, a different kind of expansion, for example involving a re-summation of the present asymptotic expansion, is necessary. In this respect, the calculation of Sect. III B could be a good starting point. Finally, recent findings for symmetry-broken Floquet steady states with qualitatively new and original properties [157] motivate the development of a more analytical understanding. This calls for a more direct approach to the effect of a rapid drive on phases with spontaneously broken continuous symmetries or, more generally, with gapless modes. The presence of such gapless modes will most likely lead to a breakdown of the rotating wave approximation in the many-body system, in a similar way as seen here near criticality.
This could be accommodated within a dynamic effective action approach that is capable of handling the steady-state order parameter in the symmetry-broken phase. In particular, one lesson learned in this work suggests that the order parameter modes Φ_0 and Φ_{n≠0} need to be treated on an equal footing in the presence of gapless modes. Applied to our model, such an approach would enable us to better understand the interplay between the periodic drive and the Goldstone modes that emerge when the U(1) symmetry is spontaneously broken.

Appendix A: Adiabatic expansion

Here, we briefly analyze the case of a slow drive. See also [189], where a fully quantum system is analyzed. In particular we show how the Green functions can be expanded in powers of Ω, leading to an adiabatic approximation of the problem, Eq. (A4). In the case of a slow drive, an expansion in powers of Ω can be carried out straightforwardly. We can directly expand the Wigner and/or Floquet Green functions. We start with the Floquet retarded Green function, which is expanded as in Eq. (A1), with [n]_{nm} = δ_{nm} n. M_0 and E are defined in Eq. (20). Converting to the Wigner representation then provides Eq. (A2). The second term in Eq. (A2) is obtained from the first term of Eq. (A1) through the Ω-dependent shift in frequency that is necessary to go from the Floquet to the Wigner representation [Eq. (16)]. The above expression is only valid for even values of n. When n is odd, m has to be replaced by m ± 1/2. In any case, m must drop out at the end of the calculation, since the Wigner Green functions only have one index. For this reason, we only work out the case of n even. Odd values of n are analogous. The above expressions are still formal, since they are written in terms of the inverse of a non-diagonal matrix. They can however be computed exactly, term by term. To this end, we use Eq. (26) without truncating the sum, convert each term to its real-time representation and re-sum the obtained expressions. We use relations [Eq. (A3)] in which N, s and t are integers and A and B are Floquet matrices with the same structure as E, A_{nm} = A_{n−m}. Here $[E']_{nm} = (E')_{n-m} = \fint_t e^{i(n-m)\Omega t}\, E'(t)$ is the Floquet matrix obtained from the time derivative of E. To order O(Ω), the obtained Green functions are given in Eq. (A4); the corrections to G_{R;n}(ω) start at O(Ω²). It is clear from Eq. (A4) that the problem becomes adiabatic when Ω → 0. Indeed, the Green functions are obtained (to leading order) by computing them for a stationary system and restoring the time dependence at the end. Ω is not even explicit here: instead, we have written Eq. (A4) in terms of the time derivative of the couplings, M′(t) = dM/dt. Even if this derivation is based on the Floquet formalism, it shows how a driven system with a very long period effectively loses its periodicity. This expansion can be inserted in the real-time representation of the RG flow equations, Eq. (39), to produce an adiabatic expansion of the full problem. See [181], where such an expansion is applied to the Kibble-Zurek problem.

Appendix B: Detailed balance

Detailed balance in stationary (undriven) systems can be framed in terms of a microscopic symmetry of the dynamic action [7,116,133,190]. In this section, we show that this statement can actually be generalized to periodically driven systems. In this case, however, the system does not exhibit detailed balance; instead, a generalized version of the Fluctuation-Dissipation Relations (FDR) emerges [see Eq. (B7)].
Here we will give an interpretation of this result, which we discuss at the end of Sect. V. Driven open systems at thermal equilibrium are found to display FDRs. The converse is also true: a system that displays FDRs can be said to be at thermal equilibrium. Displaying FDRs involves a lot of information, since these relations extend to all the correlation functions. This information can however be distilled into a microscopic symmetry [see Eq. (B4)] that the system must obey in order to be at thermal equilibrium [7,116,133,190]. The FDRs then emerge naturally as Ward identities. While this symmetry was discovered for stationary systems, we can still ask under what conditions it is a symmetry of the periodically driven system. Although the simplest answer is that the drive must be turned off, we find that there are also specific choices of non-trivial drive protocols for which the system is invariant under the equilibrium symmetry. In these cases, the corresponding Ward identities are however not true FDRs any more [e.g. see Eq. (B7)], because they involve the Wigner Green functions with all values of n independently. Nevertheless, although they remain out of equilibrium, these systems exhibit some of the properties of thermal equilibrium. Although the microscopic symmetry was identified in [7,158-160,191-194] for classical systems, we follow [116], which is closest to our set-up. Before we give the transformation, we go to a generalized version of our model Eq. (6) [Eq. (B1)], where the interaction as well as the kinetic term are bundled in the operator K(|φ|², t) = K∇² − µ − g|φ|². All the parameters of the above equation are potentially periodic in time and, except for γ, which is real and positive, they are all complex numbers. This model reduces to Eq. (6) when Z = 1 and γ is constant. Our symmetry is easiest to interpret if φ̃ is rescaled as in Eq. (B2). Z_{1,2} are the real and imaginary parts of Z respectively, and r is a real parameter yet to be determined. In general, both can be time-dependent. After this rescaling, the parameters of Eq. (B1) become those of Eq. (B3), with K = K_1 + iK_2. This is the most general rescaling that sets the imaginary part of Z to minus one. We now define the field transformation Eq. (B4) in terms of the rescaled variables. T is another real parameter, which will shortly be interpreted as the system's temperature. It was shown for time-independent couplings [7,116,133,190] that the Ward identities associated to this symmetry are thermal FDRs. Our driven system is found to be symmetric under Eq. (B4) if there exist a periodic function of time r(t) and a positive real constant T such that the rescaled action is symmetric under Eq. (B4) [after the rescaling (B2)]. In particular, the fluctuation-dissipation relation for the two-point correlation functions emerges if our field transformation has no effect on the correlation function, Eq. (B5). Indeed, inserting Eq. (B4) and remembering that the right-hand side of the above equation must vanish provides Eq. (B7), which is expressed in terms of the Wigner Green functions and the rescaled variables, Eqs. (B2) and (B3). (In this derivation the complex conjugate of φ̃ transforms as φ̃*(t) → φ(−t) + i/(2T) ∂_t φ(−t), because time reversal is a linear operator; see [133] for further details.) In the absence of drive, only the n = 0 sector is physical. The Green functions are then time-translation invariant, G_n(ω) = δ_{n0} G_0(ω), and the usual FDR emerges. When the system is driven, the equilibrium FDRs remain in the n = 0 sector: the Green functions obey FDRs when they are averaged over one period. There are however additional relations for the other Green functions that are not FDRs.
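For orientation, the undriven (n = 0) statement can be checked on the simplest example: a single overdamped mode with relaxational, model A-like dynamics. The conventions below (response χ, noise strength 2γT) are generic textbook ones rather than the paper's, so this is only a minimal consistency check of the classical FDR C(ω) = (2T/ω) Im χ(ω).

```python
import numpy as np

# Overdamped mode:  d(phi)/dt = -gamma*(m*phi - h) + eta,  <eta eta>(w) = 2*gamma*T
gamma, m, T = 1.3, 0.7, 0.5
w = np.linspace(-10, 10, 2001)
w = w[w != 0]                                  # the FDR is stated at w != 0

chi = gamma / (-1j*w + gamma*m)                # linear response function
C = 2*gamma*T / (w**2 + (gamma*m)**2)          # stationary correlation function

assert np.allclose(C, (2*T/w) * chi.imag)      # classical FDR holds
print("FDR verified for the stationary mode")
```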
We will see (in Sect. V) that it is actually possible to fine-tune the drive protocol (without turning it off) in order to allow the system to become critical. This can be understood in terms of the present discussion. Indeed, it is now clear that certain driven systems are closer to thermal equilibrium than others. Bringing the system as close to equilibrium as possible then produces some of the properties of equilibrium, such as FDRs for the n = 0 sector. Even though we cannot directly connect the fine tuning of Sect. V to the equilibrium symmetry within our 1-loop approximation, we can bring our system closer to equilibrium by choosing µ(t) and g(t) to oscillate in phase with each other [thus setting x = 0 in Eq. (57)]. The effect of the suppression of criticality then goes from being O(Ω^{-1}) to O(Ω^{-2}): it becomes weaker. We believe that extending our approximation scheme (to include the renormalization of K, Z and γ) will equate the fine tuning of Sect. V with being symmetric under Eq. (B4). We conclude this section by listing the criteria that the parameters of Eq. (B1) must satisfy for Eq. (B4) to be a symmetry. This will be useful to interpret the different couplings that emerge in Sect. IV C. We find that our driven system is symmetric under Eq. (B4) when: (i) All the time-dependent couplings are even in t (up to a global time shift). (ii) There is a single (possibly time-dependent) real number r(t) such that the imaginary part of K′ vanishes [see Eq. (B3)]. This relation, Eq. (B8), defines r. It is a generalization of the requirement (found in [116]) that all the couplings lie on the same ray of the complex plane. This means that if there is a real number r satisfying Eq. (B8), then it is possible to rescale φ̃ so that Im(Z′) = −1 and Im(K′) = 0. (iii) The time dependences of Z, γ and r are such that the temperature (defined in terms of Z, γ and r) does not depend on time. (iv) The real part of K′ does not depend on time. Inserting Eq. (B8) then provides a condition of the form ∂/∂t[...] = 0, where the time derivative does not act on the field. This equation must be valid for all values of φ. It implies that there exists a time-independent, real-valued operator U(|φ|) such that the time-dependent couplings all oscillate in phase with each other. We will see in Sect. IV C that the qualitative effect of the drive on the IRD criticality becomes O(Ω^{-2}) when µ(t) and g(t) oscillate in phase with each other. See Eq. (50) [and Eq. (55)], where X_0 vanishes in that case. This provides the following interpretation: when the couplings are synchronized, the system is closer to thermal equilibrium (where the system can become critical) and the effect becomes weaker. All these conditions can be inserted into the action Eq. (B1). Rescaling the response field as in Eq. (B2) and expressing the result as an equivalent Langevin equation then provides a dynamics with all the time-periodicity residing in $Z_1' = \frac{Z_1 r + Z_2}{Z_1 - r Z_2}$. γ′ and r are defined in Eqs. (B3) and (B8) respectively. When our periodically driven system is symmetric under Eq. (B4), it is a dissipative system with periodically time-dependent reversible couplings.

Appendix D: Derivation of the RG flow equations

In this section, we give additional details on the derivation of the different versions of the RG flow equations.
We start with App. D 1, where we describe the technical details leading to the real-time representation of the flow equations, Eqs. (39) and (40). These equations are derived with the help of the 1-loop expansion, but contain no approximation with respect to the periodic drive. In App. D 2, we explain how the Wigner representation of the flow equations, Eq. (41), is expanded to order O(Ω^{-2}) in the case of purely imaginary couplings, leading to Eq. (49). Finally, we approximate the flow of the different drive coefficients in App. D 3 to obtain Eq. (52).

General flow equations

In this sub-section, we give further details on the derivations of Eqs. (39) and (40). We start by detailing the three approximations that enter their derivation: (i) We use perturbation theory, Eq. (33). This approximation, which is in principle valid as long as the k-dependent coupling g_n is small, is justified for 0 < ε = 4 − d ≪ 1 and at criticality, where the rescaled couplings [see Eq. (56)] are either asymptotically small (for n ≠ 0) or proportional to ε = 4 − d [for n = 0, see Eq. (58)]. We extend our results to d = 3, even if ε = 1 in that case. Indeed, loop perturbation theory relies on the fact that the dimensionality provides a parameter that can be continuously varied from ε ≪ 1 to ε = 1, with the results being smoothly deformed [124-126]. Furthermore, we ignore the RG flow of higher-order vertexes (i.e. Γ_k is expanded to fourth order in the fields) by setting them to zero on the right-hand side of Eq. (33). We assume that if Γ^{(4)} is small, then Γ^{(6)} must be even smaller, since it is not present at k = Λ [see Eq. (6)]. (ii) We do not keep track of the full frequency and momentum dependence of the four-point vertex. Instead we restrict the real-time representation of Γ^{(4)} to take a local form. The flowing two-body coupling, which is defined in Eq. (36), is then given by Eq. (D1), which is realized microscopically, as can be seen by differentiating the action S as in Eq. (36). Although the RG flow produces a rich space-time dependence [see Eq. (D17)], the above expression stems from the leading term in an expansion of the 4-point vertex in powers of the frequencies. Indeed, the above equation is obtained by inserting a frequency-independent ansatz for Γ_n(f_2, f_3, f_4), the Wigner representation of Γ^{(4)}. This approximation is justified at criticality because the neglected couplings contain higher powers of p and ω and are thus irrelevant. (iii) We do not allow Γ^{(4)} to develop an original tensor structure as k is lowered. Since Γ^{(4)} is the fourth field derivative of the effective action, it is a 4 × 4 × 4 × 4 symmetric tensor. We do not take this into account. Instead we impose that Γ^{(4)} takes the same form as the fourth derivative of the original action S. See Eq. (D12), which shows the field-dependent part of the second field derivative of Γ_k. Instead of using a general form on the right-hand side of the RG flow equations, we replace Γ^{(4)} by the second derivative of Γ_I. Then Γ^{(4)} depends on a single complex parameter, g(t). This approximation is partially justified by noting that terms of order larger than two in the quantum field are found to be irrelevant at criticality (see e.g. [116]). This is a semi-classical approximation [195,196], which is reflected in the structure of Eq. (6). Assuming that this remains the case here, we obtain that Γ^{(4)} must vanish if more than two derivatives with respect to φ̄ are taken, and the resulting tensor structure is simplified. Moreover, the added tensor structure is absent at k = Λ.
It is generated by the RG flow and must therefore be small if g(t) is small. Within perturbation theory, it can be neglected. 1-loop perturbation theory is implemented in Eq. (32) by pulling the derivative with respect to the running cut-off out on the right-hand side and neglecting its effect on Γ^{(2)} [178-180]. Then inserting the sharp cut-off operator provides Eq. (33). The trace on the right-hand side, which acts on operators (i.e. objects with two sets of indexes), is defined in Eq. (D7), with the discrete index denoting the fields φ and φ̄ and their complex conjugates. The delta function selects the momenta with modulus set to the running cut-off, p = k. The logarithm in Eq. (D6) is defined in terms of operator products, through the Taylor expansion of the usual logarithm function. This product also serves to define the functional inverses [see e.g. Eq. (D14)]. We emphasize that all these approximations are usually made within 1-loop perturbation theory [124-126]. We spell them out here for completeness. The RG flow equations of µ_n and g_n can now be extracted from Eq. (33). We see from Eq. (D6) that the flow equations are expressed in terms of the trace of the inverse of Γ_k^{(2)}. The second field derivative of Γ_k is a 4 × 4 matrix defined with respect to φ = (φ*, φ̄*, φ, φ̄). Under the above assumptions, Γ_k^{(2)} splits into two terms [Eq. (D10)]. The first term is the kinetic term: it contains the inverse propagators and no field dependence. The inverse propagators depend on time and momentum. They are defined in Eq. (5). The Green functions are given by the inverse of Γ_0; see Eq. (11), where the lower-left block of [Γ_0]^{-1} is displayed. The second term of Eq. (D10) contains the contribution from the interaction, Eq. (D12). It depends on the fields and the complex time-dependent coupling g. Now that we have set up all the necessary ingredients, we are ready to write down the RG flow equations in real time. The flow equation of µ(t) can be extracted from the flow equation of Γ_R by noting that (at 1-loop in perturbation theory) the pre-factors of the space and time derivatives are not affected by the coarse graining, i.e. the terms proportional to K∇² and i∂_t do not depend on k. The flow of G_R^{-1} then reduces to the flow of µ_k(t); this will provide the left-hand side of Eq. (39). The right-hand side is obtained by taking the second field derivative of the right-hand side of Eq. (D6) and evaluating it at φ = φ̄ = 0. The whole equation is evaluated at φ = φ̄ = 0. Multiplying the matrices, taking the trace and inserting Eq. (D12) provides Eq. (39), where g(t) is the time-dependent coupling and G_K(t, t) is the Keldysh Green function. The momentum integration is performed according to Eq. (D7) and reduced to the surface of a sphere of radius k (leading to the area pre-factor), because the Green function only depends on the modulus of p. Note that an overall factor of δ(t − t′)δ(p − p′) was divided out. The RG flow equation of g(t) [Eq. (40)] is obtained in a similar way. The derivative of Eq. (D1) with respect to the running cut-off relates the flow of g(t) to k∂_k Γ^{(4)}, while taking four field derivatives [according to Eq. (36)] of the right-hand side of Eq. (D6) and then performing the matrix multiplications, momentum integrals and traces gives k∂_k Γ^{(4)}(t_1, t_2, t_3, t_4) [Eq. (D17)].

Asymptotic expansion

In this sub-section, we give some details on the steps going from Eq. (41) to Eq. (49).
(27) and (28)], performing the frequency integrals and expanding the result in powers of Ω^-1. If the E-expansion is truncated at the same order as the desired order in the Ω^-1 expansion, then the result is systematic in Ω^-1. This procedure is straightforward but can get quite long. We illustrate it for the flow of µ_n to O(Ω^-1) and comment on the general case at the end of this sub-section.

To O(Ω^-1), Eqs. (27) and (28) are inserted in Eq. (41), and the Residue theorem is used to perform the frequency integration. Expanding the resulting equation to O(Ω^-1) provides the first of Eqs. (45) of the main text (footnote 16). The definition of E_n, equal to µ_n if n ≠ 0 while E_0 = 0, plays an important role here. Indeed, without E_0 = 0, there would be an additional leading term arising when n = m, which would not depend on Ω. This is straightforward (and not very interesting) in the above equation, but must be handled carefully in the flow of g_n, where such cancellations occur. In the flow of g_n expanded to O(E^0), the leading term of the sum on the right-hand side occurs when 2m = n. This term only appears when n is even and leads to the difference between the even and odd values of n in Eq. (49). (Footnote 16: it is necessary that M_0^I > 0 for the theory to be stable. We assume that it is and remove the absolute values.)

Both of the above equations contain terms of arbitrarily high order in Ω^-1. These must be taken into account in the Ω^-1-expansion, but if these equations are expanded to an order of Ω^-1 that is larger than the corresponding order in the E-expansion, then there will be missing contributions to the Ω^-1-expansion. For example, the expansion of Eq. (D19) to O(Ω^-2) is required to recover Eq. (49) and will produce all the terms that remain when E is set to zero (i.e. the terms with [g_r^I]^2 [at O(Ω^0)] and G_2r [at O(Ω^-2)]). The additional terms are however still missing and come from the first- and second-order terms in the E-expansion.

We conclude this sub-section with the comment that the asymptotic expansion to O(Ω^-1) [leading to Eq. (45)] was performed manually, but that a computer was used to obtain the next order [Eq. (49)] because of the large number of terms that emerge. The recipe (expand in powers of E, apply the residue theorem and expand in powers of Ω^-1) remains the same, but the different steps were automated. We used Mathematica to apply the residue theorem (identify the poles and insert them in the integrands) in the different contributions at O(E^2) and to sum (and simplify) the whole expression; the sum of the three terms then vanishes.

Flow of the drive coefficients

The k-dependence of g^I_±1 can be neglected in Eq. (D20) because its flow starts at O(Ω^-2) only [see Eq. (49)]. This is particularly useful in the case of a monochromatic drive because it means that the Fourier components of g^I_n are all generated from g^I_±1 in a very simple way: if |n| is not a power of 2, then g^I_n = 0; if |n| = 2^r, then g^I_±2^r = I_±,r(k) [g^I_±1]^(2^r). All the k-dependence is relegated to the pre-factor I_r(k), which is a complicated multidimensional integral over the pre-factor of Eq. (49). The simplest example of this (beyond I_±,0(k) = 1) follows from Eq. (49) together with the initial condition g_±2(Λ) = 0, which is a consequence of the monochromaticity of the drive. g_±4 is then obtained by inserting the resulting expression for g_±2 into Eq. (49), and leads to a similar equation, although with a different k-dependent pre-factor; the sketch below illustrates this doubling structure numerically.
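In the sketch, the flow of each even harmonic is seeded by the term with 2m = n, i.e. k∂_k g_n ∝ [g_(n/2)]^2, with an arbitrary placeholder kernel standing in for the pre-factor of Eq. (49); everything except this seeding structure is an assumption. The run checks that, starting from a monochromatic drive, only the harmonics |n| = 2^r are generated and that they scale as [g_±1]^(2^r).

```python
import numpy as np

# Placeholder kernel c(k) ~ 1/k standing in for the pre-factor of Eq. (49);
# only the structure (k d_k g_n seeded by g_{n/2}^2 for even n) matters here.
N_MAX = 16
ks = np.linspace(1.0, 0.1, 2000)   # running cut-off, from Lambda = 1 downward
dk = ks[0] - ks[1]

def run(g1):
    g = {1: g1}                    # monochromatic drive: only g_{+1} at k = Lambda
    for k in ks:
        new = dict(g)
        for n in range(2, N_MAX + 1, 2):  # only even harmonics are generated
            new[n] = new.get(n, 0.0) + (dk / k) * g.get(n // 2, 0.0) ** 2
        g = new
    return g

ga, gb = run(0.01), run(0.02)      # double the seed amplitude
for n in sorted(ga):
    ratio = gb[n] / ga[n] if ga[n] else float("nan")
    print(f"n = {n:2d}:  g_n = {ga[n]:.3e},  g_n(2*g1)/g_n(g1) = {ratio:.1f}")
# Only n = 1, 2, 4, 8, 16 come out nonzero, and the amplitude ratio is 2^n,
# i.e. g_{2^r} scales as [g_1]^(2^r), with all k-dependence in the pre-factor.
```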
This procedure can in principle be iterated up to any value of r, and it is clear that it does not depend on the sign of n: I_+,r(k) = I_-,r(k) = I_r(k). The flow of µ^I_n behaves in the same way, µ^I_±2^r = J_r(k) [g^I_±1]^(2^r), although only up to O(Ω^0) and not for r = 0. We see that the flow of all the drive parameters actually starts at O(Ω^-1). In particular, X_0 can be taken as a constant if the problem is truncated to O(Ω^-1), as in [53].

Finally, we use the fact that critical physics is obtained by linearising the flow close to the IRD RG fixed point, where all the drive coefficients vanish. Then it is only necessary to account for the terms that are linear in the drive coefficients; the rest will have no effect on the critical properties. We implement this simplification by using Eq. (D22) and its generalization, g^I_±2^r = I_r(k) [g^I_±1]^(2^r), and neglecting all the terms of order O(g^3_±1) and higher. We then find that Q_0, together with its flow, vanishes. Moreover, the flow equations can be closed if we introduce an additional variable that does not flow, k∂_k U = 0. The end result is given by Eq. (52).

Appendix E: Equilibrium limit

The dynamics of Eq. (E1) involve a complex Hamiltonian functional H = H_c + iH_d, whose real and imaginary parts H_c and H_d model coherent and dissipative dynamics, respectively; see [196] and references therein. The stochastic dissipative dynamics of a non-conserved complex field φ, which is the equation of motion of the 2-component model A of [6], is recovered from Eq. (E1) by setting H_c = 0 and inserting time-independent couplings (see e.g. [115-117]). This is equivalent to choosing Re[µ] = Re[g] = Re[K] = 0 (and keeping the couplings constant) in Eq. (6). We then recover relaxational dynamics close to thermal equilibrium, with the corresponding Hamiltonian given by H_d. The integrals of Eq. (41) can then be performed explicitly, leading to Eq. (E4). Finally, we rescale the couplings according to µ̂ = (2m/k^2) µ^I_0 and ĝ = γ m^2 k^(d−4) g^I_0, and write the resulting flow equations. To linear order in ε = 4 − d, we find the WF fixed point, with the corresponding coordinates and critical exponent; see e.g. [124], where these results are obtained in the equilibrium case.

Appendix F: Scaling behavior

In this section, we give additional details on the scaling analysis of the rapidly driven system. We start by giving an explicit expression for the rescaled drive parameters. Next, we explicitly compute the stability matrix of the periodically driven system. We then show how the coefficients c_i of Eq. (62) are related to the microscopic couplings. In particular, we show that the relevant couplings c_6,7,8 can only vanish when the corresponding drive coefficients x, u and s vanish themselves. Moreover, we show how to derive the phase diagram of Fig. 7a.

The stability matrix is the Jacobian of the flow equations evaluated at the fixed point, Eq. (58). It is obtained by taking partial derivatives of every element of Eq. (57) with respect to all the couplings and evaluating them at µ̂_0 = µ̂*, ĝ_0 = ĝ* and m = r = g = s = u = x = 0. The gradients of the elements of Eq. (57) are vectors, which provide the rows of M. To O(ε), the resulting stability matrix is upper-triangular, which makes it particularly easy to diagonalize: the eigenvalues [Eq. (61)] are simply the diagonal elements of M.
The eigenvectors of M can also be computed, and have a simple structure as a result of the upper-triangular form: the first eigenvector (with eigenvalue λ_1 = −2 + 2ε/5) is v_1 = (1, 0, 0, 0, 0, 0, 0, 0), the second eigenvector is v_2 = (1, x_2, 0, 0, 0, 0, 0, 0) (with x_2 some constant), the third is v_3 = (1, x_3, y_3, 0, 0, 0, 0, 0), and so on. The constants x_i can be complicated functions of ε, but the important point is that (v_i)_j = 0 if j > i. This makes it easy to relate the coefficients of the linear combination [Eq. (62)] to the microscopic couplings. In particular, since the only eigenvector with a nonzero entry in the last position is v_8, we must have c_8 ∼ x. Then c_7 must be a linear combination of u and x, and so on down the hierarchy.

The phase of the system is determined by the sign of c_1, since it determines whether µ̂_0 is positive or negative at large scales; see Fig. 7a. With the eigenvectors normalized to one, we find the expression for c_1, in which we have inserted ∆ = A(δg + 4π^2 δµ) (with A > 0 a non-universal constant) in the second line. In the absence of drive, ∆ would be identified with the reduced temperature. Setting c_1 = 0 produces a linear relation between ∆ and the drive coefficients with a non-universal yet finite slope. This is represented in Fig. 7a as a tilted black line.
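As a quick sanity check of this structure, the following sketch diagonalizes a random upper-triangular 8 × 8 matrix (a placeholder, not the actual stability matrix M) and verifies that the eigenvalues are the diagonal entries and that the i-th eigenvector has numerically vanishing components beyond position i.

```python
import numpy as np

# Upper-triangular matrix with random off-diagonal entries (placeholders)
# and distinct diagonal entries playing the role of the eigenvalues.
rng = np.random.default_rng(0)
n = 8
M = np.triu(rng.normal(size=(n, n)))
M[np.diag_indices(n)] = np.arange(1.0, n + 1)   # diagonal = 1, 2, ..., 8

lam, V = np.linalg.eig(M)
order = np.argsort(lam.real)
lam, V = lam[order], V[:, order]

print("eigenvalues :", np.round(lam.real, 10))   # equal to the diagonal entries
for i in range(n):
    tail = np.abs(V[i + 1:, i]).max() if i + 1 < n else 0.0
    print(f"v_{i+1}: max |component| beyond position {i+1} = {tail:.1e}")
```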
Selective and Collective Actuation in Active Solids

Active solids consist of elastically coupled out-of-equilibrium units performing work. They are central to autonomous processes, such as locomotion, self-oscillations and rectification, in biological systems, designer materials and robotics. Yet, the feedback mechanism between elastic and active forces, and the possible emergence of collective behaviours in a mechanically stable elastic solid, remains elusive. Here we introduce a minimal realization of an active elastic solid, in which we characterize the emergence of selective and collective actuation and fully map out the interplay between activity, elasticity and geometry. Polar active agents exert forces on the nodes of a two-dimensional elastic lattice. The resulting displacement field nonlinearly reorients the active agents. For large enough coupling, a collective oscillation of the lattice nodes around their equilibrium position emerges. Only a few elastic modes are actuated and, crucially, they are not necessarily the lowest energy ones. Combining experiments with the numerical and theoretical analysis of an agents model, we unveil the bifurcation scenario and the selection mechanism by which the collective actuation takes place. Our findings may provide a new mechanism for oscillatory dynamics in biological tissues and specifically confluent cell monolayers. The present selection mechanism may also be advantageous in providing meta-materials with bona fide autonomy.

Fig. 1 caption: The mechanical design of the hexbug - mass distribution and shape of the legs - is responsible for its alignment toward the displacement, here imposed manually, of the cylinder (see Supplementary Information for a quantitative measure of the self-alignment length l_a).

The active forces deform the elastic matrix and induce a strain field, which depends on the forces configuration. This strain tensor will in turn reorient the active forces. This generic nonlinear elasto-active feedback, a typical realization of which is the contact inhibition of locomotion (CIL) of cells [14, 19], opens the path towards spontaneous collective excitations of the solid, which we shall call collective actuation. In this work we propose a minimal experimental setting and numerical model, in which we unveil the modal selectivity of collective actuation and its underlying principles. We consider crystalline lattices with, at the center of each node, an active particle with a fluctuating orientation (Fig. 1-b and Methods). Each node has a well-defined reference position, but will be displaced by the active particles (Fig. 1-c). In contrast, the polarization of each particle is free to rotate and reorients towards its displacement (Fig. 1-d, Supplementary Information section 2.2.2 and Movie 1). This nonlinear feedback between deformations and polarizations is characterized by two length scales: (i) the typical elastic deformation caused by active forces, l_e (Fig. 1-c), and (ii) the self-alignment length l_a (Fig. 1-d). We complement the experiments with numerical simulations of elastically coupled self-aligning active particles [40] (Methods). In the over-damped, harmonic and noiseless limit, the model reads u̇_i = π n̂_i − M u_i, supplemented by the self-alignment dynamics of the polarities, where the ratio of the elasto-active and self-alignment lengths, π = l_e/l_a, which we refer to as the elasto-active feedback, is the unique control parameter. The n̂_i's are the polarization unit vectors, u_i is the displacement field with respect to the reference configuration and M is the dynamical matrix (Supplementary Information section 3.1).
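As a minimal illustration of the objects entering the model, the sketch below builds the dynamical matrix M for one of the simplest stable geometries: a chain of nodes connected by zero-rest-length unit springs and pinned at both ends (the same geometry used later for the exact analysis). The chain is an assumption chosen for simplicity; the experimental triangular and kagome lattices are richer, but the pairwise degeneracy of the modes is already visible here.

```python
import numpy as np

# Dynamical matrix of a pinned chain with zero-rest-length unit springs.
# In this limit the x and y displacements decouple, so M is the chain
# graph Laplacian duplicated once per axis (an illustrative stand-in for
# the experimental lattices).
N = 7
Lap = (np.diag(2.0 * np.ones(N))
       - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1))
M = np.kron(Lap, np.eye(2))            # 2N x 2N dynamical matrix

w2, modes = np.linalg.eigh(M)          # squared eigenfrequencies, normal modes
print("squared eigenfrequencies:", np.round(w2, 4))
# Every eigenvalue appears twice: the modes come in degenerate pairs
# (x- and y-polarized), i.e. pairs of locally orthogonal modes.
```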
If not held, such an active solid adopts the translational and/or rotational rigid body motion dictated by the presence of zero modes (Movies 2, 3), as reported in other theoretical models [3, 5, 16]. Here we are interested in stable elastic solids, with no zero modes. We therefore explore the emergence of collective dynamics in elastic lattices pinned at their edges. For both the triangular (Fig. 2-top) and the kagome (Fig. 2-bottom) lattice, we observe a regime where all the lattice nodes spontaneously break chiral symmetry and rotate around their equilibrium position in a collective steady state (Fig. 2-a and Movies 5, 8). This dynamical and chiral phase, which is reminiscent of oscillations in biological tissues [15, 24], is clearly different from collective dynamics in active fluids [35, 36] and rigid body motion in active solids [3, 5, 9]. The dynamics are best described when projected on the normal modes of the elastic structure, sorted by order of growing energies. The dynamics condense mostly on two modes (Fig. 2-b), and describe a limit cycle driven by the misalignment of the polarity and the displacements (Fig. 2-d). In the case of the triangular lattice, the selected modes are the two lowest energy ones. Interestingly, in the case of the kagome lattice, these are the fourth and fifth modes, not the lowest energy ones. For both lattices, the selected pair of degenerate modes are strongly polarized along one spatial direction; they are extended, and the polarizations of the modes in each pair are locally quasi-orthogonal (Fig. 2-c). The numerical simulations confirm the experimental observations, indicating that collective actuation is already present in the harmonic approximation and is not of inertial origin. They also allow for the observation of additional peaks in the spectrum, which belong to the same symmetry class as the two most actuated modes (Fig. 2-b, Supplementary Information section 7). As we shall see below, these properties are at the root of the selection principle of the actuated modes.

The transition to the collective actuation regime (Fig. 2-e) is controlled by the elasto-active feedback. The larger it is, the more the particles reorient upon elastic deformations. Below a first threshold π_FD, the active solid freezes in a disordered state, with random polarizations and angular diffusion (Movies 4, 7). Beyond a second threshold π_CA, collective actuation sets in: synchronized oscillations take place and the noiseless dynamics follow a limit cycle, composed of several frequencies in rational ratio (Supplementary Information, section 10.2). In between, the system is heterogeneous (Fig. 2-f and Movie 6), with the oscillating dynamics being favored close to the center, while the frozen disordered regime invades the system layer by layer, from the edges towards the center, as π decreases (Movie 11). Simulations with increasing values of N, while keeping the physical size L constant (Methods), indicate that collective actuation subsists at large N (Fig. 3). The successive de-actuation steps converge toward a regular variation of the fraction of nodes actuated in the center of the system, f_CA (Fig. 3-c-d and Movies 12, 13). At the transition to the frozen disordered state, when π = π_FD, the fraction of actuated nodes drops discontinuously to zero from a finite value f*_CA, which decreases with N but saturates at large N (Fig. 3-d).
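To see this condensation on a few modes in the simplest setting, one can integrate the noiseless agent model on the pinned chain introduced above and project the oscillating part of the displacements on the normal modes. Everything below is an assumption for illustration: the self-alignment rule (polarity aligning toward the node velocity, one standard 2D form), and the value of π, taken well above the actuation threshold for this geometry.

```python
import numpy as np

# Noiseless agent dynamics on the pinned chain: du/dt = pi*n - Lap u per
# axis, with each polarity self-aligning toward its node velocity.
# Parameters are placeholders.
N, pi_fb, dt, steps = 7, 8.0, 1e-2, 60_000
Lap = np.diag(2.0*np.ones(N)) - np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)
w2, phi = np.linalg.eigh(Lap)          # per-axis normal modes

rng = np.random.default_rng(1)
u = 0.01 * rng.normal(size=(N, 2))
th = rng.uniform(0.0, 2.0*np.pi, size=N)
rec = []
for s in range(steps):
    n = np.stack([np.cos(th), np.sin(th)], axis=1)
    udot = pi_fb * n - Lap @ u
    th += dt * (n[:, 0]*udot[:, 1] - n[:, 1]*udot[:, 0])  # align n toward udot
    u += dt * udot
    if s > steps // 2:
        rec.append(u.copy())

U = np.array(rec)
U -= U.mean(axis=0)                    # keep only the oscillating part
amp2 = np.einsum("tni,nk->tki", U, phi) ** 2
power = amp2.sum(axis=(0, 2)) / amp2.sum()
print("fraction of power per mode:", np.round(power, 3))
```

If the chosen π lies in the collective actuation regime for this chain, most of the power concentrates on a single degenerate pair; if it is too small, the system instead freezes and the oscillating power is negligible.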
In the case of the triangular lattices, the collective oscillation frequency Ω, measured in the region of collective actuation, decreases continuously to zero (Fig. 3-d-top). This is, however, non-generic: in the case of the kagome lattices, very close to the transition, the dynamics condense on a different set of modes, pointing at the possible multiplicity of periodic solutions. The transition is essentially discontinuous. Most importantly, the spectrum demonstrates that, inside the collective actuation regime, the symmetry class of the modes that are selected is independent of the system size (Fig. 3-e). The selection of the most actuated modes is again dictated by the geometry of the modes, and not only by their energies. In all cases the condensation level remains large, with a large condensation fraction λ_1/2 (see Methods) for a wide range of values of π (Fig. 3-d-inset).

Altogether, our experimental and numerical findings demonstrate the existence of a selective and collective actuation in active solids. This new kind of collective behaviour specifically takes place because of the elasto-active feedback, that is, the reorientation of the active units by the displacement field. The salient features of collective actuation are three-fold: (i) the transition from the disordered phase leads to a chiral phase with spontaneously broken symmetry; (ii) the actuated dynamics are not of inertial origin, take place on a few modes, not always the lowest energy ones, and therefore obey non-trivial selection rules; (iii) the transition follows a coexistence scenario, where the fraction of actuated nodes discontinuously falls to zero. In the remainder of the paper, we unveil the physical origins of these three attributes.

At large scales, the dynamics of the displacement and polarization fields, U(r, t) and m(r, t), the local averages of, respectively, the microscopic displacements u_i and the polarizations n̂_i, are obtained from a coarse-graining procedure (see Supplementary Information, section 6); the elastic force F_e[U] is given by the choice of a constitutive relation and the relaxation term −D_r m results from the noise. Assuming linear elasticity, the frozen phase, in which the local random polarities and displacements average to U = 0 and m = 0, is stable for small elasto-active feedback. It becomes linearly unstable for π > π_c^cg = 2ω²_min, where ω²_min is the smallest eigenvalue of the linear elastic operator (Supplementary Information, section 6.5). We then look for homogeneous solutions, assuming a condensation on two degenerate and spatially homogeneous modes, such that F_e = −ω₀² U, with ω₀² the eigenfrequency of such modes. For π > ω₀², we find a polarized chiral phase oscillating at a finite frequency; in the limiting case D_r = 0, |m| = 1 (Supplementary Information, section 6.6). The resulting mean-field phase diagram (Fig. 4-a) thus captures the existence of the frozen and chiral phases and their phase-space coexistence for a finite range of the elasto-active feedback. However, since the disordered (m = 0) and the polarized chiral oscillating solutions are disconnected, the nature of the transition is controlled by inhomogeneous solutions, which cannot be investigated within perturbative approaches. Alternatively, we turn to simpler geometries in which exact results can be obtained.
A first important hint at the nature of the transition towards the chiral phase concerns the structure of the phase space, and is best understood by considering the dynamics of a single particle (Fig. 4-b, Movies 9, 10 and Supplementary Information section 5). Below π_c = ω₀², the phase space for the displacements contains an infinite set of marginal fixed points, organized along a circle of radius R = π/ω₀². At π_c, the escape rate of the polarity, away from its frozen orientation, becomes faster than the restoring dynamics of the displacement. As a result, the latter permanently chases the polarity, and the stable rotation sets in. All fixed points become unstable at once, and a limit cycle of radius R = (π/ω₀²)^(1/2) and oscillation frequency Ω = ω₀(π − ω₀²)^(1/2), identical to the one obtained from the mean-field approach, branches off continuously (Fig. 4-c-d). Note that the oscillating dynamics does not arise from a Hopf bifurcation, but from the global bifurcation of a continuous set of fixed points into a limit cycle.

Understanding how the nonlinear coupling of N such elementary units leads to the selection mechanism of the actuated modes requires a more involved analysis. The linear stability of the frozen states is controlled by a function c(·, ·), which only depends on the eigenvectors of M, {|φ_i⟩}. It is bounded between 0 and 1 and is maximal when the modes |φ_i⟩ and |φ_j⟩ are extended and locally orthogonal. More specifically, the pair of modes which dominates the dynamics, {|φ₁⟩, |φ₂⟩} for the triangular and {|φ₄⟩, |φ₅⟩} for the kagome lattice, are precisely the ones that optimize the bound. The construction of this bound is very general. It demonstrates that for any stable elastic structure, there is a strength of the elasto-active feedback above which the frozen dynamics is unstable and a dynamical regime must set in. It also captures the mode selection in the strongly condensed regime. Our findings about the linear stability of the fixed points for the triangular and kagome lattices are summarized in Extended Data Fig. 1. That some fixed points lose stability does not imply that collective actuation sets in: from these fixed points, the system can either slide to a neighboring stable fixed point or condense on some dynamical attractor.

An exact theory to describe this condensation process is still missing in the general case, but can be formulated in the simpler, yet rich enough, case of a linear chain of N active particles, fixed at both ends. In the zero-rest-length limit of the springs, the rotational invariance of the dynamical equations ensures that the eigenvalues and eigenvectors of the dynamical matrix are degenerate by pairs of locally orthogonal modes. In such a situation, the limit cycle solution, corresponding to the collective actuation regime, is found analytically (Supplementary Information, section 9.2), leading to a precise transition diagram, illustrated here for N = 7 (Fig. 4-e). When π exceeds the threshold value π_CA, the limit cycle is stable. We have checked that it is the only stable periodic solution, up to N = 20. If π_CA ≤ π ≤ π_c^max, it coexists with an infinite number of stable fixed points. The evolution of their respective basins of attraction can be largely understood by studying the N = 2 case (Supplementary Information section 9.3.1, Fig. S9). For π < π_CA, the dynamics leave the limit cycle and become heterogeneous.
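The single-particle scenario above can be checked directly. The sketch below integrates a node in a harmonic well of stiffness ω₀², with the polarity self-aligning toward the velocity (again one standard 2D form of the rule, used here as an assumption), and compares the late-time radius and rotation frequency with R = (π/ω₀²)^(1/2) and Ω = ω₀(π − ω₀²)^(1/2).

```python
import numpy as np

# Single node in a harmonic well: du/dt = pi*n - w0^2 u, with the polarity
# angle self-aligning toward du/dt. Above pi_c = w0^2, the run should land
# on the rotating limit cycle and reproduce the radius and frequency
# quoted in the text.
w0sq, pi_fb = 1.0, 2.0
dt, steps = 1e-3, 300_000

u, th = np.array([0.5, 0.0]), 0.9
phase = []
for _ in range(steps):
    n = np.array([np.cos(th), np.sin(th)])
    udot = pi_fb * n - w0sq * u
    th += dt * (n[0] * udot[1] - n[1] * udot[0])   # perpendicular part of udot
    u += dt * udot
    phase.append(np.arctan2(u[1], u[0]))

R = np.linalg.norm(u)
late = np.unwrap(phase)[-100_000:]                  # late-time phase of u
Omega = abs(late[-1] - late[0]) / (100_000 * dt)
print(f"R = {R:.3f}  (theory {np.sqrt(pi_fb / w0sq):.3f})")
print(f"Omega = {Omega:.3f}  (theory {(w0sq * (pi_fb - w0sq)) ** 0.5:.3f})")
```

For these values (π = 2, ω₀² = 1), a short derivation of the rotating ansatz gives R = √2 and Ω = 1, which the run reproduces.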
The physical origin of the spatial coexistence lies in the normalization constraint of the polarity field, |n̂_i| = 1, which translates into a strong constraint over the radii of rotation, namely R_i ≥ 1 (Supplementary Information, sections 9.2.3 and 9.2.4). Whenever R_i becomes unity, the polarity and displacement vectors become parallel, freezing the dynamics. The spatial distribution of the R_i is set by that of the modes selected by the collective actuation, with particles closer to the boundaries typically having a smaller radius of rotation than the ones at the center. The threshold value π_CA, below which the dynamics leave the limit cycle, is precisely met when the particles at the boundary reach a radius of rotation R = 1. For π < π_CA, the competition between the outer particles, which want to freeze, and the central particles, which want to cycle, leads to the sequential layer-by-layer de-actuation, illustrated in Fig. 4-f for a linear chain with N = 7 and observed experimentally and numerically. The threshold value π_FD is reached when, eventually, the remaining particles at the center freeze and the system discontinuously falls into the frozen disordered state.

Altogether, we have shown that (i) the chiral phase takes its origin in the one-particle dynamics; (ii) the selection of modes results from the nonlinear elasto-active feedback, which connects the linear destabilization of the fixed points to the spatial extension and local orthogonality of pairs of modes; (iii) the spatial coexistence emerges from the normalization constraint of the polarity fields.

The role of noise, which was not considered in the numerical and theoretical analysis, is another matter of interest. In the frozen disordered regime, the noise is responsible for the angular diffusion of the polarities amongst the fixed points. In the collective actuation regime, the noise level present in the experiment does not alter the dynamics significantly. Numerical simulations confirm that there is a sharp transition at a finite noise amplitude D_c, below which collective actuation is sustained (Extended Data Fig. 2-a). For noise amplitudes much lower than D_c, the noise merely reduces the mean angular frequency Ω (Extended Data Fig. 2-b). Closer to the transition, the noise allows for stochastic inversions of the direction of rotation, restoring the chiral symmetry (Extended Data Fig. 2-c).

Finally, it has been shown very recently that non-symmetric interactions, together with non-conservative dynamics, generically lead to chiral phases [41]. Here, the polarity and displacement vectors of a single particle do experience non-symmetric interactions, the phase of the displacement chasing that of the polarity. Mapping the coarse-grained equations to the most general equations one can write for rotationally symmetric vectorial order parameters [41], we find that the macroscopic displacement and polarity fields also couple non-symmetrically (Methods). This suggests a possible description of the transition to collective actuation in terms of non-reciprocal phase transitions. If this were to be confirmed by a more involved analysis of the large-scale dynamics, it would motivate the study of the disordered-to-chiral phase transition in active solids, which has not been addressed theoretically yet. In the same vein, one may ask whether the coarse-grained system should obey standard or odd elasticity [13].
More generally, the recent miniaturization of autonomous active units [42] opens the path towards the extension of our design principle to the scale of materials science. In this context, exploring the relation between the structural design of active materials - including the geometry and topology of the lattice, the presence of disorder, the inclusion of doping agents - and their spontaneous actuation offers a wide range of perspectives.
Strategic environmental policies: Electric vehicles vs internal combustion engine vehicles

In a world collapsing under pollution and environmental degradation, the choice of policies in favour of the environment is affected by the dilemma between economic efficiency and environmental protection. The objective of this article is to analyse this dilemma by constructing a theoretical two-stage game model in which an environmental tax policy is chosen by the government and local firms may produce differentiated vehicles: Electric Vehicles, Hybrid Vehicles, or Internal Combustion Engine Vehicles. At the first stage, the government determines the tax level on pollution taking the firms' output levels as given. At the second stage, firms, competing in an oligopolistic market, choose their output and emission levels observing the tax level set by the government. It is found that a high perception of pollution damage encourages the setting of a pollution tax despite the fall in the consumer surplus and in the profits of Internal Combustion Engine Vehicle producers; this policy encourages the production of Electric Vehicles and Hybrid Vehicles. Otherwise, the government is not willing to set a severe pollution policy. This work is relevant because the level of environmental policy can be determined from the perception that people have of the environmental damage caused by the production of cars.

Introduction

The first-ever Global Sustainable Transport Outlook Report made by the United Nations (UN), addressing all modes of transport in developing and developed countries, defines sustainable transport as the provision of services and infrastructure for the mobility of people and goods - advancing economic and social development to benefit today's and future generations - in a manner that is safe, affordable, accessible, efficient, and resilient, while minimizing carbon and other emissions and environmental impacts (UN 2016). Sustainable transport is also believed to be an essential ground for progress in realizing the promise of the 2030 Agenda for Sustainable Development and in achieving the 17 Sustainable Development Goals. Sustainable transport requires important emission reductions for fossil fuel vehicles (buses, trucks, and cars). According to the International Energy Agency (IEA), globally, transport accounted for one-fourth of total emissions in 2016, 71% larger than in 1990. Road transportation takes the lead with the highest absolute emissions: overall, the share of road transport emissions increased by two percentage points to 74%, while air and water transport remained unchanged (IEA 2018a). That is, business as usual cannot be sustained for long. In fact, according to the World Health Organisation (WHO), nine out of ten people breathe polluted air, including from transport sources, and 7 million people die every year (WHO 2018). These data encourage the use of new clean technologies such as those used by electric and hybrid vehicles. In fact, one of the ten recommendations by the High-Level Advisory Group on Sustainable Transport is to promote sustainable transport technologies through outcome-oriented government investment and policies that encourage private sector investment and action through various incentive structures (UN 2016).
As nicely stated by Altenburg (2014), negotiations between national governments and industries shape low-carbon policies, and these are closely tied to levels of environmental ambition, technological preferences (e.g., attitudes towards nuclear energy, shale gas, etc.), the degree of market imperfection, and the expected co-benefits (e.g., green jobs or energy security). Efforts to reduce the negative impact of pollution caused by gasoline vehicles depend on a set of public policies that foster the acquisition and use of electric and hybrid cars. To make this possible, public policy has to guarantee the profitability of companies, the willingness and purchasing capacity of consumers, and the fiscal support of the government.

Volkswagen (2019) argues that drivers who have been happily driving an Internal Combustion Engine Vehicle (ICEV) need more than just a little persuasion to give an Electric Vehicle (EV) a chance. Even though the EV is perceived as an environmentally friendly transport option independently of its type (Ding et al. 2017), EVs are not as widely adopted as expected, since many restrictions appear in the market. The purchase of an electric or hybrid vehicle depends, among many other factors, on consumers' attitudes formed by environmental concerns (Heyvaert et al. 2015, and Bennett 2015) and on consumers' purchasing capacity. The approaches to achieving this goal vary greatly from market to market: regulatory, tax, or financial incentives, or a combination of all three. Public policies are highly required to create a profitable market for EVs and to meet consumer requirements. Volkswagen (2019) provides a useful survey of the countries and policies used to promote electric vehicles.

According to ACEA (2023), the market share of electric cars expanded to 12.1% in 2022, an improvement of 3.0 percentage points compared to 2021. Hybrid and plug-in hybrid cars now register market shares of 22.6% and 9.4%, respectively. By contrast, traditional gasoline and diesel fuels continued to lose ground; combined, however, they still accounted for more than half (52%) of European Union car sales in 2022.

However, the reality in some developing countries, and specifically in Latin America, is far from the optimism of European countries. Even though the market for electric and hybrid cars has grown significantly, this growth is very heterogeneous. Although in recent years the sales of electric vehicles have experienced significant growth worldwide, in some developing economies this growth has not been as dynamic as expected. Although there are incentives for the entry of electric vehicles, the subsidy for fossil fuels in the Latin American region continues to be a factor to take into account, since economic activity in the region is based on the use of this dirty fuel. According to estimates by the International Monetary Fund (IMF 2021), in Latin America and the Caribbean 159,400 million dollars were paid in subsidies for fossil fuels in 2020. Despite its accelerated increase, the segment of electric or hybrid cars remains a minority in the countries of the Latin American region: for example, according to Statista (2022), in the Mexican and Chilean markets these vehicles represented less than 10% of total purchases of private cars in 2022. Among the reasons that hinder the adoption of electric vehicles in the Latin American market is the high price of the models, which are mostly imported. According to Briseño et al.
(2021), for buyers in Mexico the affordability of these vehicles is more important than their energy efficiency. It is a relatively expensive segment. Other relevant reasons are the lack of clear and sound financial incentives and, above all, the lack of an adequate charging infrastructure (Statista 2022). Thus, despite the enormous optimism about hybrid and electric cars and the enthusiastic response in the global market, in some developing economies the growth in the sale of electric cars does not translate into a greater proportion of electric and hybrid cars in the vehicle fleet.

Our paper tailors a model in which the government decides on a public policy (tax/subsidy) to be levied on a vehicle industry producing differentiated vehicles: an EV at one extreme, an ICEV at the other extreme, or a Hybrid Vehicle (HV) in between the two extremes. The government's policy decision, which also affects the decision of the firms, basically depends on the price of the EV or HV (Zhang et al. 2016; Mersky et al. 2016; Boren and Ny 2016; and IEA 2018b), the income that the government could obtain from establishing a tax on gasoline car production, and, relevantly, the perceived disutility generated by the pollution from internal combustion cars. These three variables are important in low- and middle-income countries, where the largest proportion of the population does not have access to an electric car. In addition, the perception of the disutility generated by pollution implies a certain level of environmental awareness, which in most developing countries is heterogeneous (Michel & Rotillon 1995; Espinosa Ramirez & Cruz Barba 2022). Moreover, we do not intend to delve into the economic evaluation or life-cycle assessment of EVs, nor into the growing literature comparing EVs with ICEVs, as in Mersky et al. (2016), Holdway et al. (2010), and Samaras and Meisterling (2008). Our focus is rather on the outcomes of the strategic actions of governments and producers in a context in which the prices of the cars differ according to the income of the people and the perceived disutility of the pollution emitted.

How quickly to switch production from ICEVs to HVs and EVs is a central question, one that is driving divergent strategies. If automakers get ahead of consumers in introducing electric vehicles, that could inflate their costs and hurt sales of gas-powered vehicles, jeopardizing investment in EVs. Falling behind rivals in EV offerings, on the other hand, could cost them a chance to establish themselves in a key growth area for decades to come. This model, unlike the rest of the literature, tries to capture this stylized fact in developing countries, where environmental criteria and the producer-government strategic interaction are different from those in developed economies. It is important to say that we follow the three principles set by Lundberg and Marklund (2018): (1) the policies should be effective, (2) there should be one objective per instrument, and (3) multiple objectives and multiple policy instruments must be mutually independent of each other. Our model setup covers almost all the aforementioned determinants in a game-theoretic framework, which will be discussed in detail in the following sections. Our starting assumption is that the EV sector has higher private costs and lower social costs.
By implementing an encouraging public policy, governments may promote EV production and sales and create a stimulus for market creation and expansion. There are several measures to achieve these goals. For example, governments may promote public procurement programs for EVs, reduce the purchase price of EVs, reduce the costs for EV producers using subsidies, increase the costs for ICEV producers using tax regulations, reduce the operating costs for EV owners (e.g., discounted parking), and provide infrastructure or promote the market for infrastructure investments. In this study, we propose a tax imposed on pollution because developing countries generally face fiscal difficulties, and establishing taxes on the production of gasoline cars both generates incentives for the adoption of clean technologies and yields tax revenues.

We assume, as in Fanti and Buccella (2017), the existence of a heterogeneous Cournot duopoly, and we use a game-theoretic approach in which the results depend on the degree of product differentiation. We consider a model in which local firms produce differentiated goods. These firms compete in an oligopolistic market. The government chooses the level of public policy (tax) imposed on pollution to maximize welfare. The model is set up as a two-stage game. At the first stage, the government determines the tax level on pollution taking the firms' output levels as given. In the second stage, firms choose their output and emission levels observing the tax level set by the government. As usual, the problem is solved using backward induction. Although Carbone et al. (2022) is an excellent reference on the management of environmental taxes to formulate some sub-optimal alternatives, we focus on a partial equilibrium model because we are interested in the strategic interaction between government and companies.

In the next section we review the literature. We then set out the basic economic model and carry out a comparative statics analysis. Subsequently, the optimal policy is analysed. Finally, some concluding remarks are made.

Literature review

The strategic use of environmental policy instruments may be effective when production is the main source of pollution (Koska et al. 2021). The strategic role of production taxes as an instrument of environmental policy in the presence of disutility from environmental pollution, and the implications of the strategic use of this instrument for the production decisions of firms, have not been studied sufficiently in the recent literature; only Shao et al. (2019) develop a game-theoretic model to investigate which vehicle types should be produced from both the private firms' and the social perspective. However, they focus on the consumers' decision rather than on the strategic interaction between government and firms considering the consumer perspective. This is the main contribution of this article. In international trade models, Fujiwara (2012), Kayalica and Kayalica (2005), and Kayalica and Yilmaz (2006) analyze the relationship between import tariffs, export subsidies, and emissions taxes under consumption externalities (disutility), but they do not consider the strategic interaction between environmental policies and the production decisions of firms.

The literature has focused on policies designed to encourage the use of EVs as an alternative to reduce pollution. These policies are subsidies, tax incentives and financial incentives.
For example, Gallagher and Muehlegger (2011) analyse the efficacy of fiscal and non-fiscal policies that induce consumer preferences for hybrid vehicles and EVs. Beresteanu and Li (2011) found that fuel prices and income tax incentive programmes significantly affected the demand for HVs and EVs in the USA. Shepherd et al. (2015) measured consumer preferences for EVs and examined the subsidy impact on EV purchasing. Bjerkan et al. (2016) identified that exemptions from purchase tax and VAT are important incentives for EV acquisition in Norway. According to Moore II (2021), the environmental costs of producing and using ICEVs are real and significant, but subsidizing the purchase and operation of electric vehicles is very costly (mainly for some developing economies). Instead, one can tax internal combustion engines at a level that places the estimated cost of the environmental damage they do inside the price of such vehicles, and let the automotive market operate. The electric vehicle market will then thrive.

This paper considers the use of a pollution tax in the context of the production and use of EVs, HVs and ICEVs. The question is: what is the optimal tax that a government could establish on the emission of pollutants due to the production of cars, considering that its decision will affect the production decision of the companies and the consumer surplus? First, we consider that the acquisition of an electric car by consumers depends entirely on the price. This is a sufficient assumption if we model this problem in the context of developing countries, where charging stations together with limited range (He et al., 2013; Avci et al., 2015; Chen et al., 2016; Li et al., 2016; Liao et al., 2016) and consumer attitude (Larson et al., 2014; Bilotkach & Mills, 2012; Haustein & Jensen, 2018) are secondary variables.

Taxes on internal combustion vehicles have been a topic of interest in the literature on environmental and energy policy. One of the main reasons for these taxes is to address the environmental externalities associated with vehicle use, including air pollution and greenhouse gas emissions. Taxes on ICEVs can be effective in reducing demand for such vehicles. Goulder and Parry (2008) found that a gas tax can reduce vehicle miles travelled significantly and reduce demand for new vehicles by up to 25%. Studies have also shown that taxes on ICEVs can be effective in reducing carbon emissions. Proost and Van Dender (2012) found that a 10% increase in the tax on gasoline could lead to a 5% reduction in carbon emissions in the United States. Similarly, a study by Stavins et al. (2007) found that a tax on ICEVs could be more effective in reducing carbon emissions than other policy instruments, such as fuel economy standards. Schmalensee and Stavins (2017) discuss the use of cap-and-trade systems and taxes to reduce emissions from the transportation sector, including taxes on producers of internal combustion vehicles. Even when the main objective of taxing ICEVs is to reduce or eliminate the environmental externality, this suggests that these taxes are a powerful tool to incentivize the purchase of electric vehicles. Towoju and Ishola (2020) argue for a tax on producers of these vehicles to reduce emissions and promote the development of cleaner technologies. Zhang et al. (2013) discuss the role of taxes and other incentives in promoting the adoption of electric vehicles, and the potential impact on producers of internal combustion vehicles.
Yan (2018) models the potential impacts of a tax on producers of vehicles with low fuel economy and discusses the potential economic and environmental benefits. Some taxing policies against ICEVs are even directly designed to encourage the adoption of EVs (Ji et al. 2022; Jenkins 2014). On the other hand, there is also concern that such taxes may disproportionately affect low-income households, who may be more reliant on combustion vehicles for transportation, as the cost of electric vehicles is prohibitive. Li et al. (2020) found that taxes on ICEVs in China led to a significant reduction in sales of these vehicles and an increase in sales of electric vehicles. However, the study also found that the tax disproportionately impacted low-income households, who were less able to afford electric vehicles. Klier and Linn (2014) found that taxes on ICEVs could lead to a reduction in consumer welfare, particularly for low-income households, who spend a higher proportion of their income on transportation.

One problem that arises with taxing ICEVs is the change in tax revenue. An increase in taxes on ICEVs initially increases tax revenue, but can later lead to a reduction in tax revenue as EV adoption becomes more widespread. This dilemma is considered in this article: although the government initially collects revenue from the tax, the tax base erodes as the adoption of clean vehicles spreads. This problem of falling tax revenue is more relevant in developing countries, and specifically in Latin American economies. For example, in the case of Mexico, Bonilla et al. (2022), in a scenario up to 2050, predict a drop in revenue of 21.6% per year with the adoption of EVs. In the case of Ecuador, Bedoya Jara et al. (2017) show that, based on a basic principle of environmental taxation (whoever pollutes, pays), the collection effect has allowed the government to implement important environmental measures. In the case of Chile, Martinez (2020) makes a careful analysis of the use of green taxes, in which the amount and efficiency must be watched when implementing these taxes, because exceeding them could break the fiscal balance. In short, despite the growing optimism about the adoption of EVs in developed countries, in developing countries, and especially in Latin American countries, this optimism is relative. The cost of EVs, the fiscal limitations in the adoption of tax schemes, and the impact of these policies on the welfare of both consumers and producers make the rates of adoption in these economies slower (Rajper & Albrecht, 2020; Adhikari et al., 2020). Establishing a tax on the production of ICEVs implies, in many of the Latin American economies, considering not only the environmental impact, but also the welfare of consumers, who face high prices for EVs, and of producers, who interact with these tax policies by adapting their productive transition from ICEVs to EVs. In this paper, we develop a theoretical model of this stylized fact, which has not been considered in the recent literature.

Model framework

We use the simplest possible structure capable of bringing out the main points. In this model there are two firms, a and b; both firms produce a good differentiated in terms of cost and demand. Firm a produces ICEVs, and firm b may produce ICEVs, HVs, or EVs. The choice of firm b will depend on the cost structure. On the other hand, we assume linear demands, derived from quasilinear preferences with a numeraire commodity.
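Since the demand system of Eqs. (1) and (2) is not reproduced here, the sketch below adopts the textbook linear differentiated-demand form p_i = a − x_i − γx_j (with γ ∈ [0, 1] the substitution parameter), derived from a quasilinear quadratic utility, and solves the Cournot-Nash first-order conditions; the functional form and parameter values are assumptions for illustration.

```python
import numpy as np

# Assumed linear inverse demands p_i = a - x_i - gamma*x_j from a
# quasilinear quadratic utility; gamma in [0, 1] is the substitution
# (differentiation) parameter. Cournot profits: pi_i = (p_i - k_i)*x_i.
def cournot(a, k_a, k_b, gamma):
    """Solve the first-order conditions a - 2*x_i - gamma*x_j - k_i = 0."""
    A = np.array([[2.0, gamma], [gamma, 2.0]])
    b = np.array([a - k_a, a - k_b])
    return np.linalg.solve(A, b)

x_a, x_b = cournot(a=10.0, k_a=2.0, k_b=3.0, gamma=0.5)
print(f"x_a = {x_a:.3f}, x_b = {x_b:.3f}")  # the cheaper (dirtier) firm produces more
```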
This assumption is convenient because the vehicle demanded (as a good) is strictly different from other goods, for which the income effect is highly relevant. If we used general preferences, the results of the model would be the same but more complicated in terms of mathematical expressions. We use linear demands for simplicity because other kinds of demand do not change the qualitative results. Let π_a and π_b denote the profits of firms a and b. Firms compete in an oligopolistic setting, where k_i (i = a, b) is the constant marginal (and hence average) cost, and they are assumed to behave in a Cournot-Nash fashion. Profit maximization then yields the first-order conditions of (5) and (6), namely (7) and (8). It can easily be verified that, with the linearity of demand, the second-order conditions are always satisfied. Solving (7) and (8), we obtain the profit-maximizing equilibrium outputs (9) and (10) for both types of firms. Substituting (9) and (10) into (7) and (8), we find the optimal profits (11) and (12).

We consider that the firms are differentiated by the environmental technology adopted by firm b. In the unit cost of production k_i, the first term in (13) and (14) is c_i, which is the part of the unit cost determined by technological and factor-market conditions, and it is taken to be constant. On the other hand, the amount of pollution generated (before any abatement) by each firm is θ_i x_i, where θ_i is the amount of pollution per unit of output emitted by the production process, and it is constant. A small θ_i means that the environmental production technology adopted by a firm is more efficient, so that less pollution is emitted by the firm. However, this technology in firm b depends on the degree of differentiation as well: the technology would be the same if both goods were homogeneous, such that θ_b(1) = θ_a, but with completely differentiated goods the production technology of firm b is less polluting than that of firm a, such that θ_b(0) = 0. So we have θ_a ≥ θ_b ≥ 0. We assume that the abatement technology is such that it costs each firm a constant amount λ to abate one unit of pollution.

From (13) and (14), it is clear that the unit cost of firm b is larger than the unit cost of firm a: adopting environmental technology is more expensive than normal non-environmental technology. From here we can deduce that, with no pollution policy, and given the cost difference, the output of firm a is larger than that of firm b. In other words, firm a produces more vehicles than firm b. Finally, from (11) to (14) we get that π_a − π_b ≥ 0.

We now ask how pollution may affect the health of the people in the country through environmental degradation. Pollution here is considered a negative externality, which implies some cost to abate it. This negative externality calls for a policy effort to reduce the emission of pollution. For this to be the case, we assume a government that is considering applying an environmental policy, say a tax, to control the emission of pollution and avoid environmental degradation. Following Lahiri and Ono (2000), we consider a pollution tax, which may affect the car production decision and, therefore, the amount of pollution emitted into the atmosphere. The cost structure is then rewritten from (13) and (14) as (15) and (16). A part of k_i, the first term, is given by technological and factor-market conditions, and the remaining parts are policy-induced. A pollution tax has two associated costs for the firms: (i) the tax paid, and (ii) the cost of pollution abatement.
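The tax-inclusive unit cost can be sketched directly; the snippet below evaluates c_i + λ(θ_i − z_i) + t z_i under the two candidate abatement choices, anticipating the corner solution discussed around Eq. (31) further below. The parameter values are placeholders.

```python
# Per-unit cost under the tax, k_i = c_i + lam*(theta_i - z_i) + t*z_i,
# minimized over the emitted pollution z_i in [0, theta_i]. This is the
# corner solution discussed around Eq. (31); numbers are placeholders.
def unit_cost(c, theta, lam, t):
    z = theta if t < lam else 0.0   # pay the tax below t = lam, abate fully above
    return c + lam * (theta - z) + t * z, z

for t in (0.5, 1.5):                # tax below and above the abatement cost lam = 1
    k, z = unit_cost(c=2.0, theta=1.0, lam=1.0, t=t)
    print(f"t = {t}: unit cost k = {k:.2f}, emitted pollution per unit z = {z}")
```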
Denoting by z_i the post-abatement pollution level per unit of output, λ(θ_i − z_i) is the unit abatement cost and t z_i the unit tax paid. To set an optimal policy, the government is willing to set a pollution policy taking into account the benefit for the health of the people and the tax revenue, as well as the reduction in consumer and producer surplus given by the increase in production costs. The government maximizes a welfare function of the form (17), where the first two terms are the producer surpluses, the third term is the consumer surplus, the fourth term is the tax revenue, where t is the pollution tax, and the fifth term is the pollution disutility, where ψ is the marginal pollution disutility and R is the amount of pollution emitted into the atmosphere. The consumer surplus is defined as CS = CS_a + CS_b, following from (1) and (2). The total amount of pollution is defined as R = z_a x_a + z_b x_b.

Once we have set the basic framework of the model, we determine some comparative statics in order to derive the optimal pollution tax t*. The model is set up as a two-stage game. At the first stage, the government determines the tax level on pollution taking the firms' output levels as given. In the second stage, firms choose their output and emission levels observing the tax level set by the government. As usual, the problem is solved using backward induction. With these equations and this game-theoretic structure, we complete the model specification and turn to its analysis in the following sections.

Comparative statics

The setting of a pollution tax affects primarily the cost of the firms: any tax affects the firms' cost structure negatively. From (15) and (16), an increase in the pollution tax increases the cost of the firms, as shown in (21). Given (21), the impact of the tax on costs affects the optimal output, that is, the number of vehicles produced. From (9) and (10), the impact of a pollution tax on the optimal output depends on the degree of differentiation. When γ = 1 the effect is negative: since the goods are homogeneous, the amount of pollution emitted by the two firms would be the same, and an increase in the pollution tax reduces the output of both firms. When both goods are completely differentiated (γ = 0), an increase in the pollution tax reduces the number of ICEVs produced. From (11) and (12), the intuition for profits is similar: when γ = 1, an increase in the pollution tax reduces the profits of both firms, while when both firms are completely differentiated, a pollution tax reduces the profits of the firm producing ICEVs.

For the comparative statics of consumer surplus, we have from (16) and (17) that ∂CS/∂t < 0: independently of the level of differentiation, the consumer surplus is reduced by an increase in the pollution tax, although with fully differentiated goods the consumer surplus from electric cars does not change. The impact of a pollution tax on tax revenue combines a positive effect, given by the increase in the pollution tax, and a negative effect, given by the impact of the tax on the optimal outputs. An increase in the pollution tax means more income for the government, but this tax discourages the production of polluting goods. Independently of the degree of differentiation, the overall effect is ambiguous.
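Under the assumed linear-demand Cournot setup sketched earlier, these comparative statics can be checked by finite differences. In the run below, θ_b(γ) is taken as the linear interpolation γθ_a between θ_b(0) = 0 and θ_b(1) = θ_a, which is an assumption, and the tax stays in the no-abatement regime t < λ where z_i = θ_i.

```python
import numpy as np

# Finite-difference comparative statics in t (no-abatement regime, z_i = theta_i),
# using the assumed demand p_i = a - x_i - gamma*x_j and placeholder parameters.
a, gamma, lam = 10.0, 0.5, 2.0
c = {"a": 1.0, "b": 1.5}
theta = {"a": 1.0, "b": gamma * 1.0}   # assumed linear theta_b(gamma)

def equilibrium(t):
    k = {i: c[i] + t * theta[i] for i in ("a", "b")}
    A = np.array([[2.0, gamma], [gamma, 2.0]])
    x = np.linalg.solve(A, np.array([a - k["a"], a - k["b"]]))
    cs = 0.5 * (x[0]**2 + 2.0*gamma*x[0]*x[1] + x[1]**2)  # quasilinear CS
    rev = t * (theta["a"]*x[0] + theta["b"]*x[1])          # tax revenue t*R
    return x, cs, rev

h = 1e-6
(x0, cs0, r0), (x1, cs1, r1) = equilibrium(1.0), equilibrium(1.0 + h)
print("dx_a/dt =", round((x1[0]-x0[0])/h, 4), " dx_b/dt =", round((x1[1]-x0[1])/h, 4))
print("dCS/dt  =", round((cs1-cs0)/h, 4), " dRev/dt =", round((r1-r0)/h, 4))
# dCS/dt comes out negative, as in the text; the sign of dRev/dt depends on t.
```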
When both goods are completely homogeneous, the amount of tax collected and the reduction in the optimal output are the largest possible. With completely differentiated goods, the amount of tax collected and the reduction in output are the smallest. Finally, the impact of a pollution tax on people's health is positive: the tax reduces the pollution disutility through the reduction of the pollution generated by the production process, independently of the degree of differentiation. This effect is driven by the reduction in output, and it is larger the more homogeneous the goods are.

Optimal pollution tax

Having established the comparative statics, we derive the optimal policy. Differentiating (17) with respect to the pollution tax, and considering (21) to (29), we obtain the first-order condition (30). The result seems ambiguous, and we need to consider some restrictions. According to (15) and (16), the firms decide on z_i and x_i. The firms' optimal behavior on pollution emission gives the corner solutions in (31). As Lahiri and Ono (2000) argue, the firms do not abate pollution at all when the tax rate is smaller than the private marginal cost of abatement; they simply prefer to pay the tax. On the other hand, when the tax rate is larger than the marginal cost of abatement, the firms emit only the harmless level of pollution. Substituting (31) into (15) and (16), and using the total amount of pollution (20), we obtain (33). From (33), when t ≥ λ, the amount of pollution is zero independently of the pollution tax. When t < λ, all firms pay the pollution tax, as none of them abates any pollution. Since in the first case no pollution is emitted independently of the pollution tax, we focus on the second case in order to determine how the differentiation level affects the optimal pollution tax.

Considering this case, and given that θ_a ≥ θ_b ≥ 0, we substitute (31) into the comparative statics (22) to (25) and (29), and we obtain the following intuition. From (34) to (37), an increase in the pollution tax reduces the output and the profit of firm a, independently of the degree of differentiation. In other words, the pollution tax decreases the production of ICEVs. On the other hand, the impact of a pollution tax on the output and profit of firm b depends on the degree of differentiation: with a larger degree of differentiation, an increase in the pollution tax increases the output and profit of firm b; that is, more EVs or HVs are produced. With a small degree of differentiation, the opposite intuition holds. From (38), an increase in the pollution tax reduces the pollution disutility affecting people's health, because there is a reduction in the total pollution emitted into the atmosphere. The impact of a pollution tax on consumer surplus and tax revenue is the same as described above.

To obtain the optimal policy, we rewrite (30) as (39). The next step is to determine the optimal pollution tax, which is easier considering that θ_a ≥ θ_b ≥ 0. From (39), and provided the second-order condition holds, we can obtain the optimal tax t* in (40), where the three coefficients appearing in its expression are all negative. The optimal tax depends on the marginal pollution disutility, such that a sufficiently large marginal pollution disutility promotes the setting of a pollution tax (t* > 0). Intuitively, the government is willing to set a strict pollution policy because the reduction in the harmfulness of pollution for people's health is larger than the loss in consumer and producer surplus given by the increase in costs.
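The trade-off behind Eq. (40) can be mimicked numerically by grid-searching the welfare function over t ∈ [0, λ), using the same assumed demand and cost structure as above (all parameter values are placeholders). For a small ψ the maximum sits at t* = 0; for a sufficiently large ψ it moves to t* > 0 (in this run it reaches the corner t → λ, where firms would switch to full abatement).

```python
import numpy as np

# Grid search of the welfare function over the tax (placeholder parameters,
# no-abatement regime t < lam so that z_i = theta_i and R = sum theta_i x_i).
a, gamma, lam = 10.0, 0.5, 2.0
c, theta = {"a": 1.0, "b": 1.5}, {"a": 1.0, "b": 0.5}

def welfare(t, psi):
    k = {i: c[i] + t * theta[i] for i in ("a", "b")}
    A = np.array([[2.0, gamma], [gamma, 2.0]])
    x = np.linalg.solve(A, np.array([a - k["a"], a - k["b"]]))
    p_a = a - x[0] - gamma * x[1]
    p_b = a - x[1] - gamma * x[0]
    profits = (p_a - k["a"]) * x[0] + (p_b - k["b"]) * x[1]
    cs = 0.5 * (x[0]**2 + 2.0*gamma*x[0]*x[1] + x[1]**2)
    R = theta["a"]*x[0] + theta["b"]*x[1]   # total emissions
    return profits + cs + t * R - psi * R   # the components listed for Eq. (17)

ts = np.linspace(0.0, lam - 1e-3, 2001)
for psi in (0.5, 6.0):
    t_star = ts[int(np.argmax([welfare(t, psi) for t in ts]))]
    print(f"psi = {psi}: optimal tax t* = {t_star:.3f}")
```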
A sufficiently small marginal pollution disutility leads instead to no pollution policy (t* = 0): the government effectively disregards people's health. Both results are independent of the degree of differentiation. Considering the optimal tax under different differentiation scenarios yields remarkable results. We take the extreme cases in which there is total differentiation or no differentiation at all (γ = 0 and γ = 1). Evaluating (40) in the case of complete differentiation, such that γ = 0, θ_b = 0 and x_b < x_a, we obtain the optimal tax t*|_{γ=0} in (41). In the case of completely homogeneous goods we assume, as above, that γ = 1, θ_b = θ_a and x_b = x_a; evaluating (40) in this case gives t*|_{γ=1} in (42). Taking the difference between (41) and (42) we have

t*|_{γ=0} − t*|_{γ=1} > 0. (43)

The optimal tax set by the government under complete differentiation is larger than the tax set with completely homogeneous goods. From here, and considering the linearity of the tax, we can deduce that a reduction in the level of differentiation (an increase in γ) reduces the optimal tax, that is,

∂t*/∂γ < 0.

This result holds whenever the tax is positive, that is, with a sufficiently large pollution disutility. When the marginal pollution disutility is small and the tax is zero, an increase in the degree of differentiation does not affect this policy unless a subsidy to pollution is considered; in that case both firms produce ICEVs, since their production cost is smaller than that of HVs or EVs according to (24) and (25). This result is surprising in many respects. In the case of a large marginal pollution disutility, a reduction in the level of differentiation (both firms producing polluting vehicles, ICEVs and HVs) means that the environmentally friendly firm becomes more polluting. Given the large marginal pollution disutility, one would expect a large pollution tax. However, this is not the case: with a small degree of differentiation (more homogeneous firms), the pollution tax is reduced, because consumer and producer surplus become relevant in the policy decision. Under total differentiation, the producer surplus of the environmentally friendly firm b producing EVs, together with the consumption benefit this firm generates, compensates the loss in producer surplus of firm a producing ICEVs and the loss in consumption of ICEVs. With completely homogeneous firms producing ICEVs, the impact of a tax on consumer and producer surplus is larger than with completely differentiated goods, where firm a produces ICEVs and firm b produces EVs, so the government sets a smaller pollution tax. In our setting, when firm b decides to produce a vehicle closer to ICEVs than to EVs, the tax levied by the government is reduced: consumers would face higher prices because of the tax on polluting vehicles, and consumer surplus becomes important in the government's policy decision. Reducing the pollution tax on vehicle producers lowers production costs, and therefore consumer prices, despite the pollution disutility. When both producers converge toward producing polluting cars (ICEVs), the pollution policy is discouraged.

Conclusions

Globally speaking, transport was responsible for one-fourth of total emissions in 2016, with road transportation alone taking the lead with a share of 74%.
This means that road transportation alone is responsible for almost 19% of overall air pollution in the world. EVs, particularly if tied to renewable energy grids, would significantly help mitigate climate change. To encourage the industry and hence boost EV production, increasingly ambitious public policies must be implemented. However, this decision should weigh the loss in consumer and producer surplus caused by the reduction in ICEVs. Despite the optimism about the growth of the EV market in developed countries, the scenario in developing countries is not so encouraging. Growth in the electric car market is low, and the reason for this slowdown is the reluctance to implement more aggressive policies against the use of ICEVs, for several reasons. First, the market cost of these cars is high compared to the low income of the majority of the population, so discouraging the consumption of ICEVs in favour of EVs affects consumer surplus. Second, the drop in ICEV production affects producer surplus. Third, the perception of environmental damage in developing countries is limited. Promoting EVs reduces the benefits offered by the ICEV industry, and in developing economies this seems to be crucial. In developing economies there is an interaction between the government policy aimed at promoting the use of EVs through taxes on ICEVs (tax collection being more profitable than subsidizing EVs) and the strategies of car manufacturers regarding their production of EVs and ICEVs. In this article this stylized fact is modelled: the interaction between companies and government is analysed in the face of the dilemma of encouraging the use of EVs by imposing environmental taxes on the production of ICEVs. This stylized fact, very consistent with the reality of developing countries in general, and of Latin American economies in particular, is analysed using a game-theoretic framework. It has not been analysed in the literature so far, which makes this work relevant in theoretical terms as a first step. We consider a model in which local firms produce differentiated goods and compete in an oligopolistic market. The government chooses the level of public policy (a tax) imposed on pollution to maximize welfare. The model is a two-stage game: in the first stage, the government determines the tax level on pollution taking the firms' output levels as given; in the second stage, firms choose their output and emission levels after observing the tax level set by the government. As usual, the problem is solved by backward induction. After exploring some comparative statics, we solve for the optimal pollution tax. Producing ICEVs is cheaper than producing EVs and HVs, so firms have no incentive to produce ecologically efficient vehicles unless environmental policies are strong enough. According to (15), (16), (24) and (25), under a severe pollution policy the vehicle producers find it profitable, in terms of cost, to produce EVs and HVs. In the model we developed, the number of EVs is at most equal to that of ICEVs; there is no empirical evidence suggesting the opposite with current technology. Our results in (40) suggest that a pollution tax is desirable when there is a high perception of the damage caused by pollution; otherwise, the government is not willing to set a severe pollution policy. Additionally, from (41), (42) and (43), when the firms tend to be homogeneous, the pollution tax is smaller.
Even when the pollution tax is positive, owing to a relatively high pollution disutility, as firms become homogeneous, in other words, as both firms produce ICEVs, the government sets a smaller pollution tax, because market considerations, in the form of consumer and producer surplus, become important. This model shows that the application of environmental policies is only possible when there is a clear perception of the damage that vehicle pollution can inflict on the environment. According to Volkswagen (2019), the countries with the highest demand for electric vehicles are those whose public policies are driven by strong environmental awareness; in addition, most of the countries mentioned are developed. A developing country will hardly sacrifice economic efficiency for a greener but more expensive option. The cost of an electric car in developing countries, where there are not enough tax incentives in favor of clean technologies, is prohibitive. The value of this work is to highlight that, without a lower cost of producing electric cars and better environmental awareness, the use of electric cars to improve environmental conditions is only a gesture of goodwill in most countries of the world. Among the many limitations of this work, being theoretical, its results need to be supported by econometric work; as a model of a stylized fact it is a first approximation, and future work should be econometrically oriented. On the other hand, this model assumes that the cars produced are consumed within the same economy, which is not the general reality of developing countries: only countries like Brazil, Argentina and Mexico have car manufacturers. Although in a strict sense this stylized fact is only partially true in countries with car manufacturers, the economic intuition is valuable when one considers not only the automotive market but also other markets with some impact on the environment. A very interesting extension of this model would be to parameterize it: parameterization would extend and enrich the analysis, and a follow-up article performing numerical simulations and analyzing various parameterization schemes for this model would be desirable.
Comparison of three cognitive assessment methods in post-stroke aphasia patients Background The cognitive level of post-stroke aphasia (PSA) patients is generally lower than that of non-aphasia patients, and cognitive impairment (CI) affects the outcome of stroke. However, it is not completely clear which cognitive assessment methods should be chosen for the different types of PSA. We investigated the Montreal Cognitive Assessment (MoCA), the Mini-Mental State Examination (MMSE), and the Non-language-based Cognitive Assessment (NLCA) to observe how they evaluate CI in patients with fluent aphasia (FA) and non-fluent aphasia (NFA). Methods 92 stroke patients were included in this study. Demographic and clinical data of the stroke group were documented. Language and cognition were evaluated by the Western Aphasia Battery (WAB), MoCA, MMSE, and NLCA. PSA patients were divided into FA and NFA according to the Chinese aphasia fluency characteristics scale. Pearson's product-moment correlation coefficient test and multiple linear regression analysis were performed to explore the relationship between the sub-items of the WAB and the cognitive scores. The classification rate of CI was tested by Pearson's Chi-square test or Fisher's exact test. Results The scores of aphasia quotient (AQ), MoCA, MMSE, and NLCA in NFA were lower than in FA. AQ was positively correlated with the MoCA, MMSE, and NLCA scores. Stepwise multiple linear regression analysis suggested that naming explained 70.7% of the variance of MoCA and 79.9% of the variance of MMSE; comprehension explained 46.7% of the variance of NLCA. Within the same type of PSA, there was no significant difference in the classification rate. The classification rate of CI in NFA by MoCA and MMSE was higher than that in FA. There was no significant difference in the classification rate of CI between FA and NFA by NLCA. Conclusion MoCA, MMSE, and NLCA can be applied in FA. NLCA is recommended for NFA. Introduction Stroke is a common clinical cerebrovascular disease. Research data show that there were 80.1 million stroke cases worldwide in 2016 (GBD 2016 Stroke Collaborators, 2019). In China, the incidence of stroke in 2030 is estimated to be 1.5 times that in 2010. Post-stroke aphasia (PSA) is an acquired functional defect mainly manifested as a disorder of the language output and reception processes after damage to the central nervous system (Stefaniak et al., 2020), accounting for about one-third of the total stroke population (Flowers et al., 2016). With the increasing number of patients experiencing stroke events, PSA also shows an increasing trend. Language and cognitive function interact (Ardila and Rubio-Bruno, 2018). Some studies have shown that cognitive function after stroke is related to the impairment of language function, and that the cognitive level of PSA patients is generally lower than that of non-aphasia patients (Kang et al., 2016; Yao et al., 2020). As an independent predictor, cognitive function can also indicate the prognosis of stroke and affects the rehabilitation outcome of stroke (Kwon et al., 2020). Cognition is the general name for the processes of recognizing and knowing things, including perception, recognition, attention, memory, concept formation, thinking, reasoning, and imagery; it belongs to the high-level activities of the cerebral cortex. After brain injury such as stroke, the function of the cerebral cortex is affected to varying degrees, resulting in cognitive impairment (CI; Norris et al., 2016).
An objective and comprehensive evaluation of cognitive function is helpful to formulate targeted treatment plans and effectively improve patients' cognitive function (Cicerone et al., 2019). Patients with PSA often have CI, and more than half of patients with aphasia also have nonverbal CI (Fonseca et al., 2019). Previous studies have confirmed that cognitive function, including nonverbal CI, is associated with language impairment (Bonini and Radanovic, 2015; Ardila and Rubio-Bruno, 2018; Yao et al., 2020). However, there is no clear conclusion about the degree of correlation between CI and language damage, or about which language factors affect it. In the previous literature, studies have focused on the evaluation of particular dimensions of cognitive function, such as execution, attention, memory, and so on (Murray, 2012; Pompon et al., 2015; Thompson et al., 2018; Simic et al., 2019). Studies comparing the effectiveness of measures such as the Montreal Cognitive Assessment (MoCA) and the Mini-Mental State Examination (MMSE) in the evaluation of cognitive function after stroke show that MoCA may be more valid for post-stroke cognitive screening (Dong et al., 2010; Burton and Tyson, 2015). However, because cognitive assessment tools such as MoCA and MMSE contain a large number of language components, although the evaluation results show that PSA patients have CI, it is still difficult to distinguish between real cognitive problems and language communication problems. To better measure the cognitive level of patients with language dysfunction, reflect their true functional state more accurately, and formulate more suitable treatment plans, it is necessary to study the application of nonverbal cognitive assessment in PSA. In recent years, some scholars have made efforts regarding the neural basis, clinical characteristics, evaluation, and treatment of nonverbal cognition (Peach, 2017; Wu et al., 2017; Schumacher et al., 2019; Yao et al., 2020). This study aims to compare three cognitive assessment methods, namely the Non-language-based Cognitive Assessment (NLCA), MoCA, and MMSE, and to provide suggestions for the evaluation of CI in patients with PSA. Study population A total of 92 stroke inpatients from Huashan Hospital, Fudan University, were recruited between May 2021 and June 2022. In this study, subjects were divided into non-aphasia (NA) and aphasia groups, and aphasia patients were subdivided into fluent aphasia (FA) and non-fluent aphasia (NFA). Combining symptoms, medical history, and imaging data, stroke patients with an aphasia quotient (AQ) of the Western Aphasia Battery (WAB) lower than 93.8 points were judged as having PSA (Kertesz and Poole, 2004). Among patients with PSA, those who scored 21 to 27 on the Chinese aphasia fluency characteristics scale were judged as FA, and those with a score of 9 to 13 were judged as NFA (Gao, 2006). The inclusion criteria for PSA were as follows: (a) right-handedness, (b) Chinese as the first language, (c) first stroke, (d) left hemisphere lesions, (e) AQ < 93.8, (f) able to cooperate to complete all assessments and sign informed consent. The inclusion criteria for NA were AQ ≥ 98.4, with the other conditions the same as above. Among PSA patients, according to the score of the Chinese aphasia fluency characteristics scale, 21 to 27 points was the inclusion criterion for FA, and 9 to 13 points the inclusion criterion for NFA.
The exclusion criteria were as follows: (a) recurrent stroke, (b) CI caused by other causes or existing before stroke, such as Alzheimer's disease, (c) cerebellar or brainstem lesions or severe dysarthria, (d) severe audiovisual impairment, and (e) other serious medical diseases or unstable conditions. This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of Huashan Hospital, Fudan University [ethical approval no. (2021) Linshen No. (503)]. Signed written informed consent was given by all participants or their legal representatives. Measurement methods In this study, the WAB was used to judge the presence and severity of PSA, and the Chinese aphasia fluency characteristics scale was used to judge fluency. The MoCA, MMSE, and NLCA were selected to screen and evaluate cognitive function. The WAB was published by Kertesz et al. in 1974. The severity of language impairment was assessed by the AQ calculated from the sub-item scores of spontaneous speech, comprehension, repetition, and naming. The full score of the AQ is 100 points, and < 93.8 points indicates aphasia; the lower the score, the more serious the language impairment (Kertesz and Poole, 2004). The Chinese aphasia fluency characteristics scale evaluates 9 spoken features, including vocabulary, intonation, pronunciation, length of phrase, laborsome speech, press of speech, substantive words, grammar, and paraphasia. The possible scores of each item are 1, 2, and 3, and the total score of the scale ranges from 9 to 27. According to the total score, we judged whether aphasia patients were FA (21-27) or NFA (9-13; Gao, 2006). The MoCA was developed by Dr. Nasreddine in 1996 and officially published in English and French in 2005. The scale takes about 10 min and has reliable results; it can sensitively screen mild CI and mild Alzheimer's disease (Nasreddine et al., 2005). The MoCA Beijing edition was used in this study. The evaluation contents include visuospatial and executive function, naming, memory, attention, language, abstraction, delayed recall, and orientation. The full score is 30 points, and ≥ 26 points is normal. The MMSE was developed by Folstein et al. in 1975, and Galasko et al. developed a simplified version in 1990. It is mainly used for the evaluation of CI in patients with dementia and takes about 5-15 min. The evaluation contents include orientation, memory, attention and calculation, and language ability. The full score is 30, and ≥ 27 is normal (Folstein et al., 1975). The NLCA was developed by Xiaojia Liu and others in 2013. After the NLCA was used to evaluate aphasia patients, patients with mild CI, and normal people, the scale's reliability, validity, and practicability were confirmed. The scale relies mainly on visual materials to evaluate five nonverbal cognitive dimensions, including visuospatial function, attention, memory, logical reasoning ability, and executive function, with a full score of 80; ≥ 75 points is normal (Wu et al., 2017). All evaluations were completed by two uniformly trained speech-language therapists within 3 days after recruitment. Statistical analysis Statistical analyses were conducted using SPSS 25.0 (IBM Corporation, Armonk, NY, United States). For two groups of continuous numerical variables conforming to the normal distribution, Student's t-test was used, with values expressed as mean ± standard deviation.
The mean values of three groups of continuous numerical variables were compared using one-way ANOVA; if the data did not conform to the normal distribution, a nonparametric test was adopted. Categorical variables were expressed as rates, using Pearson's Chi-square test or Fisher's exact test. Among the general characteristics, age, years of education, and course of disease were analyzed by one-way ANOVA, and gender was analyzed by Pearson's Chi-square test. In PSA, the scores of AQ, spontaneous speech, comprehension, repetition, and naming were analyzed by the Mann-Whitney U test, and the scores of MoCA, MMSE, and NLCA were analyzed by t-test. The classification rate of CI was analyzed by Pearson's Chi-square test or Fisher's exact test. Pearson's product-moment correlation coefficient test was used to explore the correlations between variables. All variables demonstrating significant moderate or higher correlations (r > 0.3, p < 0.01) were entered into stepwise multiple linear regression analysis to evaluate their potential impacts on the evaluation of cognitive function in PSA. A two-sided p < 0.05 was considered statistically significant in this study. General characteristics This study screened 344 patients, of whom 252 were excluded and 92 entered the analysis procedure (Figure 1). The characteristics of the subjects are presented in Table 1. There were no significant statistical differences in age, gender, education, or course of disease between groups (p = 0.487, 0.474, 0.511, and 0.571, respectively). Language and cognitive assessments All subjects received the WAB, MoCA, MMSE, and NLCA tests. The WAB assesses the severity of aphasia, while MoCA, MMSE, and NLCA assess CI. The scores of AQ, spontaneous speech, comprehension, repetition, naming, MoCA, MMSE and NLCA in FA were significantly higher than those in NFA (p < 0.001; p < 0.001; p < 0.001; p < 0.001; p < 0.001; p < 0.001; p < 0.001; p = 0.001, respectively; Table 2). Compared with FA, NFA appears to involve more serious impairment of language and cognitive function. As shown by the results in Table 3, MoCA and MMSE are positively correlated with all aspects of the language assessment, while NLCA is not correlated with repetition. Multiple linear regression analysis showed that naming explained 70.7% of the variance of MoCA and 79.9% of the variance of MMSE; comprehension explained 46.7% of the variance of NLCA. Language dysfunction thus affects the three CI screening tools to varying degrees. The classification rate of CI For NA, there was no statistical difference in the classification rate of CI obtained with the three cognitive measurement methods. Within the same type of PSA, there was no significant difference in the classification rate of CI between the three methods (p = 0.153, 0.546; Table 4). The classification rate of CI in NFA using MoCA and MMSE was higher than that in FA (p = 0.015, p = 0.001). Nevertheless, there was no significant difference in the classification rate of CI between FA and NFA using NLCA (p = 0.182; Table 5). Discussion In this study, three representative cognitive assessment methods were selected to study whether language impairment in stroke patients has an impact on cognitive screening, and to provide reference suggestions for the cognitive assessment of post-stroke aphasia patients.
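The correlation-screening-then-stepwise-regression pipeline described in the Statistical analysis section above can be sketched in a few lines. The synthetic data below only stand in for the (non-public) patient scores, and the 0.05 entry threshold is an assumption, since the exact stepwise criteria used in SPSS are not stated.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical per-patient WAB sub-scores and a MoCA total; column names
# and data are illustrative, not the study's raw data.
rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "spontaneous": rng.uniform(0, 20, n),
    "comprehension": rng.uniform(0, 10, n),
    "repetition": rng.uniform(0, 10, n),
    "naming": rng.uniform(0, 10, n),
})
df["MoCA"] = 2.0 * df["naming"] + 0.5 * df["comprehension"] + rng.normal(0, 2, n)

# Step 1: keep predictors with moderate-or-higher correlation (r > 0.3, p < 0.01).
candidates = []
for c in ["spontaneous", "comprehension", "repetition", "naming"]:
    r, p = stats.pearsonr(df[c], df["MoCA"])
    if r > 0.3 and p < 0.01:
        candidates.append(c)

# Step 2: forward stepwise OLS, adding the predictor with the lowest p-value (< 0.05).
selected = []
while True:
    pvals = {}
    for c in set(candidates) - set(selected):
        X = sm.add_constant(df[selected + [c]])
        pvals[c] = sm.OLS(df["MoCA"], X).fit().pvalues[c]
    if not pvals or min(pvals.values()) >= 0.05:
        break
    selected.append(min(pvals, key=pvals.get))

final = sm.OLS(df["MoCA"], sm.add_constant(df[selected])).fit()
print(selected, f"adjusted R^2 = {final.rsquared_adj:.3f}")
```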
As for the relationship between language and cognition, there may be overlapping areas at the neural level, such as the frontal, temporal, and parietal lobes, so that language damage leads to shared damage of cognition-related brain networks (Schumacher et al., 2019). The behavioral results of this study showed that the total scores of the three cognitive assessment methods were positively correlated with AQ, which indicates that language impairment affects the different cognitive assessment methods. Previous studies have also shown a link between language impairment and cognitive impairment (Bonini and Radanovic, 2015; Ardila and Rubio-Bruno, 2018; Fonseca et al., 2019; Yao et al., 2020). The results showed that language and cognitive impairment were more serious in NFA within the stroke population; therefore, the choice of cognitive assessment for such patients should be more cautious. We preliminarily analyzed the components of the scales. Except for the visuospatial and executive function items in MoCA, the other test items rely on language expression ability: of the total score of 30 points, 24 points in MoCA and 25 points in MMSE require language expression. The NLCA evaluation process can be completed with the help of audio-visual perception, without oral expression. However, previous behavioral research also supports an association between nonverbal CI and comprehension impairment (Caplan et al., 2013; Thompson et al., 2018; LaCroix et al., 2021). This shows that nonverbal cognitive assessment methods reduce the impact of language impairment on the evaluation of cognitive function, although they may still be unable to completely escape the interaction between language and cognition. For non-aphasia patients after stroke, MoCA can be recommended for cognitive screening, which is consistent with previous research results (Dong et al., 2010; Burton and Tyson, 2015). In PSA, there was no significant difference in the classification rate of CI among MoCA, MMSE, and NLCA; however, compared with NLCA, MoCA and MMSE yielded higher classification rates of CI in NFA than in FA. In addition, we also analyzed the Barthel Index. The post-hoc results of the ANOVA showed a statistical difference between NA and NFA. This may indicate that NFA patients also have great obstacles in activities of daily living, beyond language and cognition, which may be a direction for future research. The current research has some limitations. First, the classification of PSA is not comprehensive enough: in this study, PSA was only classified into two categories, not eight. In the future, the sample size can be expanded on the basis of previous studies for a more detailed classification of aphasia. Second, the evaluation result of the NLCA has only a cut-off value without severity classification; therefore, classifying the severity of CI requires the help of other tools. Third, this study focuses on the results of the behavioral evaluation and does not involve imaging, such as MRI. In the future, imaging can be used to explore the neural mechanisms linking language and cognitive impairments, to provide a basis for the integrated rehabilitation of language and cognition. Conclusion The impairments of language and cognitive function in patients with NFA are more serious than those in patients with FA. The results of the cognitive assessments were positively correlated with language impairment. MoCA, MMSE, and NLCA can be applied to FA, and NLCA is more strongly recommended for use in NFA.
Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by the Ethics Committee of Huashan Hospital, Fudan University [ethical approval no. (2021) Linshen No. (503)]. The patients/participants provided their written informed consent to participate in this study. Author contributions ZY, XL, and JJ conceived and designed the analysis. ZY, SX, and JZ collected the data. ZY, SX, DW, XH, CL, YZ, MC, and QY contributed data or analysis tools. ZY and XL performed the analysis. ZY and JJ wrote the original draft. All authors discussed the results, contributed to the final manuscript, and took part in writing, review, and editing. ZY and SX contributed equally to this work and are the first authors. JJ is the corresponding author of this article. QY is the co-corresponding author. All authors contributed to the article and approved the submitted version.
$M_F$-dependent Hyperfine Induced Transition Rates in an External Magnetic Field for Be-like $^{47}$Ti$^{18+}$ Hyperfine induced $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ transition rates in an external magnetic field for Be-like $^{47}$Ti were calculated based on the multiconfiguration Dirac-Fock method. It was found that the transition probability depends on the magnetic quantum number $M_F$ of the excited state, even in a weak field. The present investigation clarifies that the difference in the hyperfine induced transition rate of Be-like Ti ions between experiment [Schippers et al., Phys. Rev. Lett. 98, 033001 (2007)] and theory does not result from the influence of the external magnetic field.

Introduction

The hyperfine induced transition (HIT) rate of the 2s2p 3P0 level for Be-like 47Ti ions has been measured with high accuracy by means of resonant electron-ion recombination in the heavy-ion storage-ring TSR of the Max-Planck Institute for Nuclear Physics, Heidelberg, Germany [1]. However, the measured transition rate A_HIT = 0.56(3) s−1 differs from all present theoretical results, A_HIT ≈ 0.67 s−1 [2,3,4], by about 20%. In the theoretical calculations, the major part of the electron correlation, which always causes the dominant uncertainty, has been taken into account very elaborately. As a result, it is desirable to find other reasons for the difference. In this letter, we focus on the influence of the magnetic field present in the heavy-ion storage-ring on the HIT rate. The HIT rate in an external magnetic field depends on the magnetic quantum number M_F of the excited state, even in a relatively weak field. This effect, combined with a non-statistical distribution of the magnetic sublevel population of the excited level, might lead to the difference in transition rate mentioned above.

Theory

In the presence of the magnetic field, the Hamiltonian of an atom with non-zero nuclear spin is

H = H_fs + H_hfs + H_m, (1)

where H_fs is the relativistic fine-structure Hamiltonian that includes the Breit interaction. H_hfs is the hyperfine interaction Hamiltonian, which can be written as a multipole expansion

H_hfs = Σ_k T^(k) · M^(k), (2)

where T^(k) and M^(k) are spherical tensor operators in the electronic and nuclear space, respectively [5]. H_m is the interaction Hamiltonian with the external homogeneous magnetic field B,

H_m = (N^(1) + ΔN^(1)) · B, (3)

where N^(1) is a first-order tensor of a form similar to T^(1), and ΔN^(1) is the so-called Schwinger QED correction [6]. We choose the direction of the magnetic field as the z-direction, so that only M_F is a good quantum number. The wavefunction of the atomic system can thus be written as an expansion

|M_F⟩ = Σ_{ΓJF} d_{ΓJF} |ΥI, ΓJ; F M_F⟩. (4)

The total angular momentum F is coupled from the nuclear (I) and electronic (J) angular momenta; Υ and Γ are the other quantum numbers labeling the nuclear and electronic states, respectively. The coefficients d_{ΓJF} in Eq. (4) are obtained by solving the eigenvalue equation, using the HFSZEEMAN package [7],

Hd = Ed, (5)

where H is the interaction matrix with elements ⟨ΥI, ΓJ; F M_F | H_fs + H_hfs + H_m | ΥI, Γ′J′; F′ M_F⟩ (6). The reader is referred to Refs. [6,7] for a detailed derivation of the different matrix elements. For the present problem, the wavefunction of the 3P0 state can be written as a superposition of the dominant 2s2p 3P0 component and the perturbing 2s2p 3P1, 2s2p 1P1 and higher Rydberg states (Eq. (7)). The quotation marks in the left-hand wavefunction label emphasize the fact that the notation is just a label indicating the dominant character of the eigenvector. Remaining interactions between 2s2p 3P0 and higher members of the Rydberg series can be neglected due to the large energy separations and comparatively weak hyperfine couplings [8].
Furthermore, those perturbative states with a different total angular momentum F can be neglected because of the relatively weak magnetic interaction. As a result, Eq. (7) is simplified to

|"2s2p 3P0 I M_F"⟩ = d_1 |2s2p 3P0, I; F=I, M_F⟩ + d_2 |2s2p 3P1, I; F=I, M_F⟩ + d_3 |2s2p 1P1, I; F=I, M_F⟩. (8)

Similarly, the wavefunction of the ground state is approximately written as

|"2s² 1S0 I M_F"⟩ = |2s² 1S0, I; F=I, M_F⟩, (9)

where all perturbative states were neglected for the same reasons as mentioned above. The one-photon 2s2p 3P0 → 2s² 1S0 E1 transition becomes allowed via mixing with the perturbative states 2s2p 3P1 and 2s2p 1P1 (see Eq. (8)), induced by both the off-diagonal hyperfine interaction and the interaction with the magnetic field. The decay rate a(M_F^e)_HIT from the excited state |"2s2p 3P0 I M_F^e"⟩ to the ground state |"2s² 1S0 I M_F^g"⟩, in s−1, is given by the E1 transition rate between these two mixed states (Eq. (10)). Substituting Eqs. (8) and (9) into this formula gives Eq. (11); applying standard tensor algebra, Eq. (11) is further simplified to Eq. (12), in which λ is the wavelength of the transition in Å and ⟨2s² 1S0 ||P^(1)|| 2s2p 3,1P1⟩ are the reduced electronic transition matrix elements in a.u. From Eq. (12) we can obtain the Einstein spontaneous emission transition probability A(M_F^e)_HIT by summing over the magnetic sublevels of the ground state (Eq. (13)). It should be noticed that in the present weak-field approximation, i.e., neglecting the perturbative states with a different total angular quantum number F, the formula for the transition rate (see Eq. (13)) is similar to the one for a transition induced by the hyperfine interaction alone [2,3]. However, a significant difference exists in the mixing coefficients d_S, by virtue of incorporating the magnetic interaction into the Hamiltonian in the present work. The electronic wavefunctions are computed using the GRASP2K program package [10]. Here the wavefunction for a state labeled γJ is approximated by an expansion over jj-coupled configuration state functions (CSFs),

Ψ(γJ) = Σ_i c_i Φ(γ_i J). (14)

In the multi-configuration self-consistent field (SCF) procedure, both the radial parts of the orbitals and the expansion coefficients c_i are optimized to self-consistency. In the present work a Dirac-Coulomb Hamiltonian is used, and the nucleus is described by an extended Fermi charge distribution [11]. The multi-configuration SCF calculations are followed by relativistic CI calculations including the Breit interaction and leading QED effects. In addition, a biorthogonal transformation technique introduced by Malmqvist [12,13] is used to compute reduced transition matrix elements in which the even- and odd-parity wave functions are built from independently optimized orbital sets.

Results and discussion

As a starting point, SCF calculations were done for the configurations belonging to the even and odd complexes of n = 2, respectively, and valence correlation was taken into account. The measured lifetime can, however, be affected by the magnetic field present in the storage ring. Actually, the magnetic field effect had already been noticed and discussed in a previous experiment measuring the lifetime of the hyperfine state of the metastable level 5d 4D7/2 of Xe+ using the ion storage ring CRYRING at the Manne Siegbahn Laboratory (Stockholm) [16]. Returning to the present problem, the experiment was conducted in the heavy-ion storage-ring TSR, where the rigidity of the ion beam is given as B × ρ = 0.8533 T m [1] and the bending radius of the storage-ring dipole magnets is ρ = 1.15 m [17]. As a result, the magnetic field in the experiment is about 0.742 T; calculated transition rates for B = 0.5 T, 0.742 T and 1 T are listed in Table 1. As can be seen from this table, the transition rates A(M_F^e)_HIT for the individual excited states "2s2p 3P0 I M_F^e" clearly differ, because the mixing coefficients d_S in Eq. (13) depend on the magnetic quantum number M_F^e of the excited state.
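The mixing-and-rate machinery of Eqs. (5)-(13) can be illustrated with a toy numerical model: build a small interaction matrix on the 3P0, 3P1, 1P1 basis, diagonalize it, pick the eigenvector dominated by 3P0, and combine its perturber amplitudes with reduced E1 matrix elements. All numbers below (energies, couplings, E1 elements) are placeholders rather than the GRASP2K/HFSZEEMAN values, and the angular (3j) and statistical factors are omitted, so only the structure of the calculation is meaningful.

```python
import numpy as np

# Toy version of Eqs. (5)-(13): 3x3 interaction matrix mixing 2s2p 3P0
# with the 3P1 and 1P1 perturbers through off-diagonal hyperfine + Zeeman
# couplings; all numerical inputs are PLACEHOLDERS.
E = np.array([0.0, 0.006, 0.08])      # 3P0, 3P1, 1P1 level energies (a.u., placeholder)
W01 = 1.0e-6                          # hyperfine + Zeeman coupling 3P0-3P1 (placeholder, M_F dependent)
W02 = 3.0e-7                          # coupling 3P0-1P1 (placeholder)

H = np.diag(E)
H[0, 1] = H[1, 0] = W01
H[0, 2] = H[2, 0] = W02

_, vecs = np.linalg.eigh(H)
idx = np.argmax(np.abs(vecs[0, :]))   # eigenvector dominated by the 3P0 basis state
d = vecs[:, idx] / np.sign(vecs[0, idx])   # mixing coefficients d_1, d_2, d_3 of Eq. (8)

P = np.array([0.0, 0.05, 0.9])        # reduced E1 elements <1S0||P(1)||k> in a.u.; the 3P0 one vanishes
lam = 346.99                          # experimental transition wavelength in Angstrom
# Standard E1 prefactor for lam in Angstrom and line strength in a.u.;
# statistical weight and 3j factors omitted in this sketch.
A_HIT = 2.02613e18 / lam**3 * np.dot(d, P) ** 2
print(f"d = {d}, A_HIT ~ {A_HIT:.2e} s^-1")
```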
As can be found from Table 1, the lifetime of the 3P0 level is nevertheless not sensitive to the sublevel-specific lifetimes if the magnetic sublevels are populated statistically: the averaged lifetimes τ̄ = Σ_{M_F^e} τ(M_F^e)/(2I + 1) are 1.52 s, 1.52 s and 1.53 s in external magnetic fields B = 0.5 T, 0.742 T and 1 T, respectively. In this case the zero-field lifetime can be obtained within the experimental error, as was done in Ref. [1], by fitting a single exponential decay curve instead of 6 exponential decay curves with slightly different decay constants. In contrast, in the experiment measuring the HIT rate of the 2s2p 3P0 level of the Be-like Ti ion, the level concerned was produced through beam-foil excitation [18]. As is known, the magnetic-sublevel cross sections for ion-atom collisions differ [19,20], and the magnetic sublevel population is in general not statistically distributed. Combining this fact with the M_F-dependent HIT rate in an external field, the transition probability of the 3P0 level cannot be obtained by a statistical average over all magnetic sublevels. However, we also noticed that an external magnetic field can lower the transition rate only for those magnetic sublevels with M_F ≠ 0. In other words, only if these specific magnetic sublevels with M_F ≠ 0 were populated would it be possible to explain, or reduce, the discrepancy between the measured and theoretical HIT rates for Be-like 47Ti. In fact, such extreme orientation of the stored ions seems improbable with beam-foil excitation. Moreover, the experimental heavy-ion storage-ring is only partly covered by dipole magnets (this fraction amounts to 13%) [17], which further reduces the influence of the magnetic field on the lifetime of the level. Therefore, we still cannot clarify the disagreement between the experimental measurement and the theoretical calculations, even though the influence of an external magnetic field was taken into account.

Summary

To sum up, we have calculated the hyperfine induced 2s2p 3P0 → 2s² 1S0 E1 transition rate in an external magnetic field for each of the magnetic sub-hyperfine levels of 47Ti18+ ions based on the multiconfiguration Dirac-Fock method. It was found that the transition rate depends on the magnetic quantum number M_F^e of the excited state, even in relatively weak magnetic fields. Even considering the influence of an external magnetic field, we could not explain the difference in the HIT rate of the Be-like Ti ion between experiment and theory.

Table 1: Hyperfine induced 2s2p 3P0 → 2s² 1S0 E1 transition rates in the presence of magnetic fields B = 0.5 T, B = 0.742 T and B = 1 T for the Be-like 47Ti ion. a represents the transition probability from the excited state "2s2p 3P0 I M_F^e" to the ground state "2s² 1S0 I M_F^g"; A is the Einstein transition probability from the excited state "2s2p 3P0 I M_F^e"; τ is the lifetime of the excited state "2s2p 3P0 I M_F^e". The experimental wavelength λ = 346.99 Å [14] was used in these calculations, in which the influence of the hyperfine interaction and the magnetic field on the wavelength was neglected.
Chlorides Entrapment Capability of Various In-Situ Grown NiAl-LDHs: Structural and Corrosion Resistance Properties : In this work, various NiAl-LDH thin films, exhibiting specific surface morphologies, were developed directly on aluminum AA 6082 substrate to understand the two main characteristics of layered double hydroxide (LDH), i.e., ion-exchange behavior and barrier properties, which are found to have a significant influence on the LDH corrosion resistance properties. The as-prepared NiAl-LDH films were analyzed through the scanning electron microscope (SEM) and X-ray diffraction (XRD), while the corrosion behavior of the synthesized films was investigated by electrochemical impedance spectroscopy (EIS) and potentiodynamic curves. The results indicated that the NiAl-LDH microcrystals grow in various fashions, from porous, relatively flat domains to a well-developed platelet structure, with the variation of the nickel nitrate to ammonium nitrate salts molar ratios. The LDH structure is observed in all cases and is found to cover the aluminum surface uniformly in a lamellar order. All the developed NiAl-LDHs are found to enhance the corrosion resistance of the aluminum substrate; specifically, a well-developed platelet structure is found to be more effective in chloride adsorption and entrapment capabilities, which leads to higher corrosion resistance compared to the other developed NiAl-LDHs. A comparison of the synthesized NiAl-LDH morphologies in terms of their ion-exchange capabilities, barrier effect and their combined effect on the corrosion resistance properties is reported. Introduction Protection against corrosion of aluminum alloys is a widely investigated subject aimed at increasing the usage of aluminum in a variety of applications, owing to its high strength-to-weight ratio, thermal and electrical conductivities, abundance, and low price. In that scenario, numerous approaches have been studied to develop economic coating systems to protect aluminum alloys, for instance, magnetron sputtering [1], anodizing [2], self-assembly [3], polymeric coatings [4], and chemical conversion techniques [5]. Recently, layered double hydroxides (LDHs) have received prominent attention for potential applications in various fields, including adsorbents [6], drug delivery systems [7], environmental sciences [8], and biomedical science [9]. LDHs have also been found to be an efficient system to enhance corrosion resistance due to their ion-exchange capabilities, high surface area, wide range of available cationic salts, cost-effectiveness and other lucrative characteristics [10][11][12]. LDHs are a class of two-dimensional anionic clays composed of positively charged, brucite-like metal hydroxide layers with charge-balancing exchangeable anions and water in the interlayer galleries.
Fourier transform infrared (FTIR) spectra were recorded with a Varian 4100 FTIR Excalibur Series instrument (Agilent, Santa Clara, CA, USA) in attenuated total reflectance (ATR) mode to analyze the surface functional groups and the chemical bonding of the samples, in the range of 550 to 4000 cm−1, with a 4 cm−1 resolution and 32 scans, using a diamond crystal as the internal reflective element (IRE). A classic three-electrode cell configuration was used to measure the electrochemical properties of the NiAl-LDHs, in which a Pt plate served as the counter electrode, Ag/AgCl (+207 mV vs. SHE) as the reference electrode, and the prepared NiAl-LDH as the working electrode. The prepared coatings were sealed with epoxy resin, leaving the testing surface (3.14 cm²) exposed for the corrosion tests. The potentiodynamic measurements were performed at a scan rate of 2 mV/s versus OCP. EIS measurements were acquired from 100 kHz down to 10 mHz, using a 5 mV (rms) amplitude perturbation. Before the experiments, the LDH film was exposed to the electrochemical solution (0.1 M NaCl) for 30 min for system stabilization. Figure 1 shows the XRD patterns of the NiAl-LDHs, which exhibit distinct reflection peaks at 2θ ≈ 11.7°, 23.0° and 35°, corresponding to the (003), (006) and (012) reflections, respectively, i.e., the characteristic peaks of LDH formation [18,22]. The calculated cell parameters of the NiAl-LDHs are reported in Table 2. The (003) reflections of all synthesized NiAl-LDHs were observed at almost the same 2θ angle of ~11.7°, indicating a basal spacing of around 0.88 nm, which corresponds to the presence of NO₃⁻ inside the LDHs [23,24]. With the variation of the ammonium nitrate salt concentration, the intensity and broadness of the reflection peaks vary, and the (003) diffraction peaks of the NiAl-LDHs become slightly sharper, indicating enhanced crystallinity. The crystallite size decreases slightly from NiAl-LDH_a to NiAl-LDH_d: NiAl-LDH_d shows a crystallite size of 17.22 nm, compared to 19.94 nm for NiAl-LDH_a. The cell parameter "c" is further calculated through the relation c = 3d_003 = 6d_006, and a gradual reduction from NiAl-LDH_a to NiAl-LDH_d is observed. The basal spacing is also found to decrease slightly from NiAl-LDH_a to NiAl-LDH_d, indicating the strong intercalation of NO₃⁻ ions. The interlayer thickness, the lattice constant "c" and the crystallite size of the NiAl-LDHs are listed in Table 1. The selected sample, NiAl-LDH_d, was further investigated by FTIR analysis in attenuated reflection mode, as shown in Figure 2. The broad band around 3370 cm−1 is assigned to O-H stretching, while the absorption band around 1627-1633 cm−1 is caused by the bending vibration of interlayer water molecules [25]. Moreover, the absorption peaks around 1350 cm−1 are assigned to the asymmetric stretching of intercalated NO₃⁻ [26]. The bands at 655, 751 and 1202 cm−1 may be associated with Al-OH stretching [27].
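The cell-parameter arithmetic quoted above (Bragg's law for d_003, c = 3 d_003, and a Scherrer estimate of the crystallite size) can be sketched as follows. The 2θ position matches the reported ~11.7°, while the peak FWHM is an assumed value, since the measured widths are not given in the text.

```python
import math

# Back-of-envelope check of the XRD-derived LDH cell parameters, using
# Bragg's law and the Scherrer equation with Co K-alpha radiation.
wavelength = 1.79          # Co K-alpha, in Angstrom (as used in the XRD setup)

def d_spacing(two_theta_deg):
    # Bragg: lambda = 2 d sin(theta)
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

def scherrer_size(two_theta_deg, fwhm_deg, K=0.9):
    # Scherrer: D = K lambda / (beta cos(theta)), beta in radians; result in nm
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return K * wavelength / (beta * math.cos(theta)) / 10

d003 = d_spacing(11.7)     # (003) reflection at 2-theta ~ 11.7 deg
print(f"d(003) = {d003 / 10:.3f} nm, c = 3*d(003) = {3 * d003 / 10:.3f} nm")
print(f"crystallite size ~ {scherrer_size(11.7, 0.5):.1f} nm (assumed FWHM 0.5 deg)")
```

With these inputs the script reproduces the ~0.88 nm basal spacing quoted above, and an assumed FWHM of 0.5° gives a crystallite size of the same order as the reported 17-20 nm.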
Figure 3 shows the SEM images of the synthesized NiAl-LDHs: in all cases, the LDH microcrystals uniformly covered the entire aluminum substrate surface in lamellar form. Comparing the surface morphologies of the obtained LDHs from NiAl-LDH_a to NiAl-LDH_d, NiAl-LDH_a shows a less porous structure than NiAl-LDH_d, where a well-ordered LDH platelet structure is observed. This phenomenon is particularly evident in the high-resolution SEM micrographs (Figure 3b,d,f,h), where four distinct morphologies can clearly be distinguished, from a less porous amorphous structure to a well-formed platelet, flower-like structure. It can be concluded that, owing to its well-organized geometry, ion exchange is likely the basic attribute by which NiAl-LDH_d increases the corrosion resistance of Al AA6082, while NiAl-LDH_a, being less prone to exchange NO₃⁻ with Cl⁻, acts mainly as a barrier layer. The same trend is observed in the other developed NiAl-LDHs, i.e., NiAl-LDH_b and NiAl-LDH_c. Table 3 shows the weight % composition of the NiAl-LDHs, calculated by energy dispersive spectroscopy in plane-scanning mode. The NiAl-LDHs mainly consist of Ni, Al, O, and N. The Ni/Al ratio increases from 3.44 for the amorphous porous structure to 4 for the platelet structure, which reflects the NiAl-LDH assembly.
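As a side note, EDX weight percentages translate into an atomic Ni/Al ratio through the molar masses; a minimal sketch with hypothetical wt % inputs (Table 3 itself is not reproduced here):

```python
# Convert EDX weight % to an atomic Ni/Al ratio. The input weight
# percentages are placeholders; Table 3 is not reproduced in this excerpt.
M = {"Ni": 58.69, "Al": 26.98}   # molar masses, g/mol

def ni_al_atomic_ratio(wt_ni, wt_al):
    return (wt_ni / M["Ni"]) / (wt_al / M["Al"])

print(f"Ni/Al ~ {ni_al_atomic_ratio(wt_ni=45.0, wt_al=6.0):.2f}")  # hypothetical wt % values
```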
The effect of structural growth on the film thickness is reported in Figure 4b, with a cross-sectional image of NiAl-LDH_d in Figure 4a (reported as an example). The film thickness remains in the range of 30 to 35 µm, and the regular platelet NiAl-LDH_d structure shows a slightly higher film thickness (34.6 µm) than the amorphous NiAl-LDH_a structure (30.01 µm). The thickness of an LDH correlates with the amount of cations, pH, reaction temperature, aging time, alkali solution and so on [28]. However, with all these factors kept constant, the morphological quality of the LDH nanostructures increases with the nitrate concentration in the solution. In an aqueous solution containing metallic aluminum and nitrate anions, several electrochemical processes involving anodic dissolution of aluminum and cathodic reduction of nitrates and oxygen can occur. The cathodic processes generate hydroxyl ions and create a pH gradient. Although the reduction of nitrates to nitrite has been proven, there are other possible reactions involving nitrate and nitrite anions, producing nitrogen gas or ammonia, which may contribute to the overall reduction process [29] and can thus affect the film thickness as well as the surface morphology. In that view, the anion concentration in the solution may affect the growth rate of the LDH, but to what extent is not clear. The likely formation mechanism of NiAl-LDH is as follows: the aluminum surface dissolves in the basic reaction solution to form Al³⁺; the anodic regions result in large concentrations of OH⁻ groups on the aluminum surface and favor the formation of Al(OH)₃, which acts as a precursor for the formation of the LDH; the final step is the precipitation of Ni, OH and NO₃ on the surface of Al(OH)₃ to form the NiAl hydroxide mixture. Finally, divalent Ni²⁺ ions in Ni(OH)₂ are substituted by trivalent Al³⁺ ions, so that the coexisting Al(OH)₃ and Ni(OH)₂ form the precursor film of the hydrotalcite-like LDH structure [30][31][32].
Corrosion Behavior of the LDH Films

The polarization curves of the NiAl-LDHs developed on AA6082 are shown in Figure 5. The synthesized NiAl-LDH coatings show a decrease in both anodic and cathodic current density compared to bare AA6082. All the synthesized NiAl-LDH films on AA6082 show a lower corrosion current density, along with a shift of the corrosion potential to higher values, compared to bare AA6082; in particular, NiAl-LDH_d shows significantly reduced anodic and cathodic current densities compared both to the substrate and to the other developed NiAl-LDHs. It is also worth noting that NiAl-LDH_d has a relatively high film thickness of around 35 µm, which may provide a comparatively better barrier film against the aggressive media; this is well correlated with previous works [10][11][12][13], and for NiAl-LDH_d a reduction of about 3 orders of magnitude in corrosion current density was observed compared to the substrate. Furthermore, the open circuit potential (OCP) shifts toward nobler values with the structural variation from NiAl-LDH_a to NiAl-LDH_d, and a high corrosion potential of −0.18 V vs. Ag/AgCl was observed for NiAl-LDH_d, probably due to the formation of the well-ordered platelet structure. The EIS measurements of the as-prepared NiAl-LDHs after 1 day of immersion in 0.1 M NaCl solution are shown in Figure 6. A higher impedance value in the low-frequency domain (impedance modulus at 0.01 Hz, |Z|_0.01) roughly indicates higher corrosion resistance [33]. From Figure 6a, NiAl-LDH_d shows an impedance value of around 6.3 Ω cm² at |Z|_0.01, which is nearly 2 orders of magnitude higher than that of the bare AA6082 alloy. The higher impedance of NiAl-LDH_c and NiAl-LDH_d indicates the presence of a strongly dielectric protective film, which is well consistent with the anti-corrosion behavior obtained from the potentiodynamic curves and also explains the contribution of the ion-exchange effect to the increase in corrosion resistance. In fact, LDHs provide corrosion protection through: (1) a barrier effect, as they are dielectric materials that protect the metal surface by preventing interaction with the metal substrate; and (2) their ion-exchange capability, entrapping Cl⁻ ions by releasing nitrates [34].
That makes the LDH structures a compact system for entrapping chloride ions, preventing the aggressive media from interacting with the aluminum surface. Considering the EIS response of the samples, two relaxation processes can be observed in the phase angle spectrum (Figure 6b): the time constant in the high-frequency range (10³-10⁴ Hz) can be attributed to the properties of the LDH layer itself, while the time constants in the middle-frequency range (10⁰-10¹ Hz) are the overlap of the contributions of the aluminum oxide and of the faradaic process at the substrate-solution interphase. In the case of the aluminum AA6082 substrate, two time constants can also be observed, one related to the formation of the oxide in the middle-frequency range and the other due to corrosion reactions in the low-frequency range. The EIS results (Table 4) were further fitted using the "ZSimpWin" software to obtain more detail on the corrosion resistance properties, in an effort to understand in detail the effect of the surface morphologies on the corrosion resistance parameters. The synthesized coatings show two relaxation processes from the middle-high to the low-middle frequency range due to the coating system, while variations in LDH film thickness, as well as defects/porosity, can also be responsible for changes in the electrochemical response. The electrical equivalent circuit R_s(CPE_LDH(R_LDH(CPE_dl R_ct))) was used to analyze the EIS response of the NiAl-LDHs [35,36], where R_s is the electrolyte resistance, R_LDH describes the NiAl-LDH film resistance with a constant phase element accounting for the dielectric properties of the LDH film (CPE_LDH), and R_ct represents the charge transfer resistance in parallel with a constant phase element (CPE_dl). According to the mathematical representation of a CPE, i.e., Z_CPE = 1/(Q(jω)^α), the parameters Q and α have been employed to describe the response of the electrodes. The total resistance R_t = R_ct + R_LDH can be used to assess the protective ability of the deposited NiAl-LDHs, since R_t gives relative information on the corrosion rate: the higher the total resistance, the lower the corrosion rate. The total resistance gradually increases with the change in surface morphology (from porous domains to platelet structure); moreover, the well-formed platelet structure shows the highest total resistance. The relatively high R_LDH values indicate that the LDH coatings are compact, while the relatively high R_ct values indicate that they are also protective. This is well consistent with the polarization curves and the Bode plot analysis; it is, however, important to mention that CPE_LDH and CPE_dl have α values far from 1, so the film does not act as a pure capacitance, and it is difficult to interpret the real physical meaning of the EIS fitting parameters. From the electrochemical and physical characterization, we can conclude that the better the ion-exchange capability to hold the chlorides firmly inside the interlayers, the better the corrosion resistance properties. The equivalent circuit used to model the impedance results is shown in Figure 7, along with an example of fitting of the experimental results of NiAl-LDH_d.
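The nested circuit and the CPE definition above translate directly into an impedance function; the sketch below evaluates it over the measured frequency window (100 kHz to 10 mHz) with placeholder parameter values, not the fitted ones from Table 4.

```python
import numpy as np

# Equivalent circuit Rs(CPE_LDH(R_LDH(CPE_dl Rct))) with Z_CPE = 1/(Q (j*omega)^alpha).
def z_cpe(omega, Q, alpha):
    return 1.0 / (Q * (1j * omega) ** alpha)

def parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

def z_model(freq, Rs, Q_ldh, a_ldh, R_ldh, Q_dl, a_dl, R_ct):
    omega = 2 * np.pi * freq
    inner = R_ldh + parallel(z_cpe(omega, Q_dl, a_dl), R_ct)   # oxide/charge-transfer branch
    return Rs + parallel(z_cpe(omega, Q_ldh, a_ldh), inner)    # LDH layer in front of it

freq = np.logspace(5, -2, 60)   # 100 kHz down to 10 mHz, as in the measurements
Z = z_model(freq, Rs=30, Q_ldh=1e-6, a_ldh=0.85, R_ldh=5e4, Q_dl=1e-5, a_dl=0.8, R_ct=5e5)
print(f"|Z| at 0.01 Hz ~ {abs(Z[-1]):.3e} ohm cm^2")           # low-frequency modulus, the rough protection metric
```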
The well-organized geometry of NiAl-LDH is found to facilitate better ion exchange with Cl− and to hold the chlorides firmly between the LDH interlayers, so that the film acts as a strong protective layer on the aluminum alloy against corrosion. Owing to the well-formed platelet LDH structure, nitrate ions intercalate properly between the interlayers and increase the chloride uptake and holding capacity, leading to a stabilization of the layered structure that prevents chloride migration to the underlying metal. This makes the LDH structure a compact system for entrapping chloride ions and prevents the aggressive medium from interacting with the aluminum surface. The evolution with immersion time of the fitting parameters R_LDH, Q_LDH, α_LDH, R_ct, Q_dl, and α_dl for each sample is collected in Table 4. In Table 5 a comparison of the fitting parameters R_LDH and R_ct after 1 day of immersion in a sodium chloride electrolyte is reported. The value of R_LDH for the coatings developed in this study is remarkably higher than the data reported in the literature. However, one should consider that: (1) in this study the crystallization treatment was prolonged in order to obtain relatively thick coatings, while in the literature very often only thin conversion layers of LDHs are investigated; and (2) the electrolyte employed here is more diluted than 3.5 wt % NaCl (0.1 M ≈ 0.58 wt %), so higher resistance values are expected. To assess the chloride entrapment capability of the NiAl-LDHs, the direct Mohr method was used to measure the chloride adsorption behavior of the LDHs before and after contact with a 0.1 M chloride solution for one day. Silver nitrate is used as the reagent and potassium chromate as the indicator [39]: the silver nitrate solution is added slowly to the tested chloride solution, resulting in the formation of a silver chloride precipitate, and the endpoint of the titration is reached when all the chloride ions have precipitated and further silver nitrate reacts with the chromate indicator to form a red-brown precipitate of silver chromate (Figure 8). The chloride concentrations measured after contact with each NiAl-LDH, together with the chloride concentration of the reference 0.1 M NaCl solution, are listed in Table 6. The chloride uptake of NiAl-LDHd is much greater than that of the other prepared NiAl-LDHs. The mechanism behind the chloride removal from solution is likely the ion-exchange capability of the LDHs; indeed, the reduced amount of Cl− measured upon exposure to the NiAl-LDH is in agreement with anion uptake into the film. Among the investigated samples, NiAl-LDHd combines the best corrosion protection properties (as suggested by the polarization curves and EIS) with the highest chloride uptake capability. A plausible explanation is that chloride ions are entrapped inside the LDH structure through the anion-exchange mechanism, thus reducing the aggressiveness of the salt solution towards the metal substrate. Together with the higher thickness, this would help to increase the corrosion protection properties of the LDH coating.
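The chloride concentrations in Table 6 follow from the Mohr titration stoichiometry (one Ag⁺ per Cl⁻ at the endpoint). A minimal sketch of that calculation, with illustrative volumes and molarity rather than the values used in this study:

```python
def chloride_mg_per_L(v_agno3_mL, m_agno3, v_sample_mL):
    """Chloride concentration from a Mohr titration.

    At the endpoint, moles of Ag+ delivered = moles of Cl- in the sample:
        C_Cl [mol/L] = (V_AgNO3 * M_AgNO3) / V_sample
    Returned in mg/L using the molar mass of Cl (35.45 g/mol).
    """
    mol_per_L = (v_agno3_mL * m_agno3) / v_sample_mL
    return mol_per_L * 35.45 * 1000.0

# Example: 9.2 mL of 0.1 M AgNO3 to titrate a 10 mL aliquot
print(chloride_mg_per_L(9.2, 0.1, 10.0))
# ~3261 mg/L; for comparison, fresh 0.1 M NaCl contains ~3545 mg/L Cl-
```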
Figure 9 shows optical images of the NiAl-LDHs after the corrosion analysis; the LDH films remain visually intact and uniform, in agreement with the observed stability of the LDH system. Figure 8 caption: at the endpoint, all the Cl− ions have precipitated as silver chloride, and further silver nitrate precipitates with the chromate indicator, giving a slight red-brown coloration.
Conclusions

In this study, an in situ growth approach was used to prepare anticorrosive NiAl-LDHs of various morphologies on an aluminum AA6082 substrate, and the effect of the different LDH surface morphologies on their ion-exchange capability with Cl− and on the corresponding corrosion resistance is reported. The platelet NiAl-LDHd structure shows better ion-uptake behavior than the other analyzed morphologies: a chloride uptake of about 122 mg/L was observed from the 0.1 M NaCl electrolyte. In addition, it was found to remarkably reduce both the anodic and the cathodic current compared to the bare substrate. The EIS analysis further confirmed the ability of NiAl-LDHd to protect the underlying metal against corrosion. Together with the physical barrier effect, the capability of the developed LDH structure to entrap chloride ions, thus reducing the aggressiveness of the salt solution towards the metal substrate, is believed to be responsible for the observed increase in corrosion protection. As a general conclusion, an appropriate choice of the metal cation ratio and microstructure optimization appear to play a key role in the development of LDH coatings with enhanced corrosion protection properties.
Superconformal Field Theory with Boundary: Spin Model

The GSO-projected superconformal field theory (spin model) with boundary is considered. The boundary states are constructed, and for this model the one-point structure constants and the "bootstrap" equations for the boundary-bulk structure constants are derived.

Introduction

Superconformal field theory on a manifold with boundary plays an important role in open superstring theories and is a basic ingredient in the construction of open superstring theory. It may also be essential for some two-dimensional exactly solvable models and their critical phenomena. Here we recall the basic facts of superconformal field theory, adapted to our case, and establish our notation. The basic ideas of superconformal field theory can be found in refs. [1], [2]. Supersymmetric extensions of the Virasoro algebra are obtained by generalizing conformal transformations to superconformal transformations of the supercoordinates ẑ = (z, θ). The generators of the superconformal transformations

δz = u + θǫ, δθ = ǫ + (1/2)θu_z; δz̄ = ū + θ̄ǭ, δθ̄ = ǭ + (1/2)θ̄ū_z̄ (1)

are combined in the super stress-energy tensor G(z, θ) = (1/2)S(z) + θT(z). The operators L_n and S_r (the Laurent coefficients of T and S) generate analytic coordinate and supersymmetry transformations, respectively, and obey the superconformal algebra (2). The algebra has a Z_2 symmetry, so there are two possible modings for the fermionic generator S_r: either half-integer (r ∈ 1/2 + Z) or integer (r ∈ Z), giving the Neveu-Schwarz (NS) and Ramond (R) algebras, respectively. Highest weight states |h⟩ of the NS and R algebras satisfy (3). Representations are built up by applying the raising operators L_n, S_r with n, r > 0 to the highest weight state |h⟩. In the Ramond sector the superconformal current has a zero mode, which forms a two-dimensional Clifford algebra together with the fermion number operator Γ = (−1)^F, commuting with L_0. As a result, the ground state is doubly degenerate [1]. In this space we can choose the orthogonal basis |h⟩_± of (4), where |h⟩_+ and |h⟩_− are eigenvectors of the operator (−1)^F with eigenvalues +1 and −1, respectively, having the same conformal weight h. Using the commutation relations (2) one obtains (5). Thus, if one normalizes |h⟩_+ so that ⟨h_+|h_+⟩ = 1, then it follows from (5) that the basis (4) is orthonormal. In what follows we use the basis (4). Note that for h = c/16, |h⟩_− becomes a null vector and decouples from the representation of the algebra. Hence the chiral symmetry of the ground state is destroyed and the global supersymmetry is restored. In a general superconformal theory, the full operator algebra of NS superfields and R_± spin fields is non-local [1]. There are two possibilities for projecting onto a local set of fields. The first is to keep only the NS sector, giving the usual algebra of superfields, a fermionic model. The second is to obtain a local field theory, the "spin model", by restricting the superconformal field theory to the Γ = 1 sector. In this paper we consider the spin model with boundary, defined on the upper half plane (the fermionic model was studied in ref. [3]). It is easy to see that the requirement of preserving this geometry places strong restrictions on the parameters of the superconformal transformations: the expansion coefficients of the parameters must be real, so the holomorphic and anti-holomorphic transformations are not independent. We therefore analytically continue T and S onto the lower half plane.
This means that we now have only one copy of the algebra (2), in contrast to the "bulk" theory, where there are two (holomorphic and anti-holomorphic) algebras; this is consistent with the fact that in a theory with boundary there is only one set of coefficients in the expansion of the parameters. Then for X = R_±(z_1, z̄_1) ... R_±(z_n, z̄_n), it follows from (6) (using the bulk OPE) that, in contrast to the bulk Ward identity, where T(z) and S(z) act only on (z_1, ..., z_n), in the theory with boundary the action of T(z) and S(z) is extended to (z_1, z̄_1, ..., z_n, z̄_n). Hence in the boundary Ward identity the terms on the right-hand side are doubled, owing to the terms with z′_i = z̄_i. Thus the correlation function of Ramond fields ⟨X(z_1, z̄_1, ..., z_n, z̄_n)⟩_B in our geometry satisfies the same differential equation as the bulk correlation function ⟨X(z_1, z̄_1, ..., z_2n, z̄_2n)⟩.

Boundary States

We now construct the boundary states of theories defined on the upper half plane or on a strip, which one can also interpret as the world sheet of an open superstring. The mapping of the upper half plane onto the strip is given by the conformal transformation z = e^{t+iσ}, where (t, σ) are coordinates on the strip, σ ∈ (0, π). In a general superconformal field theory with boundary, the only requirement on the boundary condition is superconformal invariance, eq. (7). If one compactifies t modulo 2π Im τ (with τ purely imaginary), one obtains a theory defined on a cylinder of radius Im τ. The partition functions with boundary conditions α, β at the ends of the cylinder can then be written (for antiperiodic and periodic boundary conditions in the time direction) as in (8). The bulk superconformal algebra is the tensor product of two algebras, so the natural chirality operator is Γ = (−1)^{F_tot}, where F_tot = F + F̄ is the fermion number of the full algebra. The projection of the boundary SCFT is analogous to the GSO projection of the bulk SCFT, with the difference that in the boundary theory only one chirality operator Γ = (−1)^F is defined, since in the boundary case there is just one algebra. The projection onto the local theory in the NS and R sectors is given by Γ = 1. Note that summing the partition functions over the sectors α′β′ simply projects onto the subspace of even fermion number. On the other hand, the same partition function can be viewed as the propagation of a closed superstring in the σ direction between boundary states ⟨α|, |β⟩, where H_cyl is the Hamiltonian of the closed superstring, L_0^cyl and L̄_0^cyl are the Virasoro generators, and |α⟩, |β⟩ satisfy the conditions (7), which can be rewritten as (10), where ζ = e^{−i(t+iσ)}. One can rewrite the conditions (10) in the form (11), with r ∈ Z or r ∈ Z + 1/2. It is easy to see from (10) and (11) that one should choose "+" boundary states (or "−") at both ends of the cylinder for the propagation of Neveu-Schwarz states, and "+−" (or "−+") for the propagation of Ramond states in the open string channel. Of course the "+" and "−" states are not essentially different. For our purposes we fix "+" boundary states at the σ = 0 end of the cylinder and vary "+" and "−" at the other end. One of the main aims of this paper is to find solutions of (11) in each irreducible representation of the superconformal algebra. The solution of the conditions (11) in the NS sector is given by the ansatz (12) [4], where U_±^NS is an anti-unitary operator satisfying the conditions (13). One can see that equations (13) yield (14). It is easy to show that (12) satisfies the conditions (11).
For this purpose one just has to check that, for any basis vector ⟨i| ⊗ ⟨j|, the relations (15) hold. The Ramond sector is more interesting. To begin, let us consider the case h ≠ c/16. We can use the same ansatz (12) to solve (11), where U_±^R is an anti-unitary operator satisfying the conditions (17). Since the ground state is now non-trivial, we have some freedom in defining the action of U_±^R on this space, with the only restriction being (18). In a representation where S_0 and (−1)^F are represented as in (19), with σ_x and σ_z the Pauli matrices, we obtain, using (18) and the representation (19), the result (20), where a and c satisfy the anti-unitarity conditions aa* + cc* = 1 and ac* + a*c = 0. According to the latter equations there are two independent choices for U_±^R, eq. (21). It is interesting to note that for h = c/16 the uniqueness of U_±^R is recovered. The nature of this degeneracy is interesting, but we shall not analyze it here; we only note that it is sufficient to restrict to the first choice of U_±^R. The partition functions (8) of the theory defined on the compactified cylinder can be expressed as linear combinations of characters, since instead of the holomorphic and antiholomorphic algebras of the bulk there is now just one algebra (22), where the χ's are the characters of the superconformal algebra in the NS and R sectors, respectively. For the last character, note that the R fermion has zero energy on the cylinder in the supersymmetric ground state (h = ĉ/16). The non-negative integers n^i_αβ, m^i_αβ denote the number of times the representation i occurs in the spectrum of H^open_αβ. The character formulas for the NS and R algebras were derived by Goddard, Kent and Olive [7] and by Kac and Wakimoto [8], and under the modular transformation τ → −1/τ the characters of the "spin model" transform linearly [9], where q̃ = e^{−2πi/τ}. In order to have a complete set of boundary states defined by equation (12), we have to consider the diagonal bulk theory. Following Cappelli and Kastor [9], there are different superconformal theories corresponding to different modular-invariant combinations of characters (25); here the factor F equals 2 for the non-supersymmetric R highest weight states, which are twofold degenerate, and equals 1 otherwise. N_{i,j} is the number of highest weight states (h_i, h̄_j) in the bulk theory, and it obeys the sum rules (26) required by the modular invariance Z_NS(q) = Z_NS(q̃). There are at least two series of solutions to the above sum rules. One of these, the diagonal (or scalar) solution of the superconformal sum rules, is given by N_{nm,kl} = δ_{nk} δ_{ml} in the NS and R sectors. For the diagonal theory, the constructed states |j⟩_{R,NS}^± form a complete set in the space of all boundary states, and we can therefore expand any boundary state in this basis. Using these representations we can rewrite (9). For such theories, in which each representation occurs just once in the spectrum of the bulk Hamiltonian, the different characters are linearly independent; therefore, comparing the last relations with (24), namely Z_{α+β+}(q) = Z^{NS}_{αβ}(q) + Z^{(−)NS}_{αβ}(q), and solving the equations for the coefficients of the boundary states |α⟩_+ (and in the same way for |α⟩_−), we finally obtain, in particular for |α⟩_+, the expression (30). These states have the property that n^i_{0̃k} = δ^i_k, which means that the representation k appears in the spectrum of H_{0̃k}.
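The two-dimensional Clifford structure of the Ramond ground state used in (19) and (20) can be illustrated with a short numerical check. This is only a sketch of the 2x2 representation quoted above (zero mode proportional to σ_x, fermion parity Γ = σ_z); normalization factors are omitted.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])   # zero-mode action (up to normalization)
sz = np.array([[1, 0], [0, -1]])  # fermion parity Gamma = (-1)^F

# Gamma eigenstates |h>_+ and |h>_- with eigenvalues +1 and -1
h_plus, h_minus = np.array([1, 0]), np.array([0, 1])

# The zero mode anticommutes with Gamma ...
assert np.allclose(sx @ sz + sz @ sx, 0)
# ... and therefore exchanges the two degenerate parity eigenstates
assert np.allclose(sx @ h_plus, h_minus)
assert np.allclose(sx @ h_minus, h_plus)
print("zero mode exchanges the two degenerate Ramond ground states")
```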
We now consider the expansion of bulk operators near a boundary [3]. There are two types of bulk fields: Ramond spin fields R(z, z̄) and Neveu-Schwarz superfields Φ(ẑ, z̄̂) = φ(z, z̄) + θΨ(z, z̄) + θ̄Ψ̄(z, z̄) + θθ̄F(z, z̄). One can write the short-distance expansions of φ(z, z̄) and R(z, z̄) near the boundary as in (32), where [φ^B(x)] denotes the conformal class of the boundary vertex operator φ^B, and C^B_{φφ^B}, C^B_{Rφ^B} are the boundary OPE structure constants of the Neveu-Schwarz and Ramond fields, respectively. From (32) it is possible to obtain the corresponding relations for the Ψ and F fields. Let us now obtain these boundary structure constants. First of all, note that for the identity boundary operator the corresponding structure constant equals the constant factor of the one-point boundary correlation function. The one-point boundary correlation functions (with boundary conditions labelled by B) of the NS and R fields, consistent with superconformal invariance and the boundary Ward identity, can be written as in (33). Thus, according to the definition [5], [6], and using the superconformal physical boundary states (30), we find explicit expressions for the one-point structure constants. To determine the boundary structure constants C^B_{φφ^B}, C^B_{Rφ^B} we use the associativity of the boundary operator algebra, which imposes global constraints on the correlation functions. For this purpose, consider the 2-point functions (38) in two channels. The corresponding correlation functions for Ψ and F can, of course, be restored from (38) by supersymmetry. We can evaluate these correlation functions using the OPE in the different crossing channels. Associativity of the operator algebra implies that the correlation functions evaluated in the two channels must give the same result (crossing symmetry); here η = |z_1 − z_2|²/|z_1 − z̄_2|² is the cross-ratio, F^m_{ij;ij}(η) and F^m_{ρσ;ρσ}(η) are the conformal blocks, and C_{ijm}, C_{ρσm} are the bulk structure constants. Since the conformal blocks in the two bases obey different differential equations, the corresponding solutions are linearly related [10]. In this way, all boundary structure constants are expressed through well-known bulk quantities.
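As a numerical aside on the modular transformation τ → −1/τ under which the characters transform linearly, the analogous transformation law is easy to verify for the simplest free-fermion building block, the Jacobi theta function; this is only an analogy, not the GKO character formula itself. A sketch using mpmath:

```python
import mpmath as mp

def theta3(tau):
    """Jacobi theta_3(0|tau) via its nome q = exp(i*pi*tau)."""
    q = mp.e ** (1j * mp.pi * tau)
    return mp.jtheta(3, 0, q)

tau = 0.7j                        # purely imaginary modulus, as in the text
lhs = theta3(-1 / tau)            # value at the modular image -1/tau
rhs = mp.sqrt(-1j * tau) * theta3(tau)
print(mp.chop(lhs - rhs))         # ~0: theta_3 transforms with weight 1/2
```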
Diversity Outbred Mice at 21: Maintaining Allelic Variation in the Face of Selection

Multi-parent populations (MPPs) capture and maintain the genetic diversity from multiple inbred founder strains to provide a resource for high-resolution genetic mapping through the accumulation of recombination events over many generations. Breeding designs that maintain a large effective population size with randomized assignment of breeders at each generation can minimize the impact of selection, inbreeding, and genetic drift on allele frequencies. Small deviations from expected allele frequencies will have little effect on the power and precision of genetic analysis, but a major distortion could result in reduced power and loss of important functional alleles. We detected strong transmission ratio distortion in the Diversity Outbred (DO) mouse population on chromosome 2, caused by meiotic drive favoring transmission of the WSB/EiJ allele at the R2d2 locus. The distorted region harbors thousands of polymorphisms derived from the seven non-WSB founder strains, and many of these would be lost if the sweep were allowed to continue. To ensure the utility of the DO population for studying genetic variation on chromosome 2, we performed artificial selection against WSB/EiJ alleles at the R2d2 locus. Here, we report that we have purged the WSB/EiJ allele from the drive locus while preserving WSB/EiJ alleles in the flanking regions. We observed minimal disruption to allele frequencies across the rest of the autosomal genome. However, there was a shift in haplotype frequencies of the mitochondrial genome and an increase in the rate of an unusual sex chromosome aneuploidy. The DO population has been restored to genome-wide utility for genetic analysis, but our experience underscores that vigilant monitoring of similar genetic resource populations is needed to ensure their long-term utility.

The power of genetic mapping studies in model organism populations derives, in large part, from uniform and high allele frequencies at all variant loci across the genome. Multi-parent populations (MPPs) such as the Diversity Outbred (DO) mouse population provide high mapping precision due to the accumulation of recombination events across multiple breeding generations. It is important to maintain allelic balance during the breeding process, and this can be achieved by maintaining a large effective population size with randomized matings (Rockman and Kruglyak 2008). The founding generation (G0) of the DO population consisted of randomly chosen mice from the incipient Collaborative Cross (CC) breeding lines, which were derived from eight inbred founder strains (Collaborative Cross Consortium 2012; Svenson et al. 2012). Software-assisted breeding has facilitated adherence to the randomized breeding design of the DO. However, natural phenomena such as allelic incompatibility between loci or meiotic drive have the potential to directionally disrupt allelic balance more rapidly than expected from stochastic genetic drift, which would require hundreds of generations to substantially alter allele frequencies in this population. Genotype data contributed by the DO user community enabled us to monitor the autosomal and sex chromosomal haplotype distributions over time. In generations G8 and G9, we observed a substantial and growing distortion of allele frequencies on chromosome 2. An excess of WSB/EiJ alleles had been previously noted in the CC lines, where it appeared to have stabilized at a frequency of 0.20 (Aylor et al. 2011; Collaborative Cross Consortium 2012).
However, the frequency of WSB/EiJ alleles in the DO continued to rise and, by G12, exceeded 0.60 (nearly five times the expected value of 0.125). The cause of this rapid sweep was identified as a novel meiotic drive locus named R2d2 (responder to meiotic drive on chromosome 2; Didion et al. 2015). R2d2 is a copy number variant; in a permissive genetic background, alleles with high copy number (including WSB/EiJ) are subject to TRD through the maternal germline due to meiotic drive. It appeared likely that, without some intervention, the WSB/EiJ allele would sweep to fixation and, as a result, genetic variation in the DO would be depleted across a large portion of chromosome 2. Further support for this conclusion was provided by a similar distortion in allele frequency observed in the region spanning the R2d2 locus in a related outbred population known as the Heterogeneous Stock-Collaborative Cross (HS-CC), which has undergone many additional generations of outbreeding (Supplemental Material, Figure S1) (Iancu et al. 2010). The fixation of a single haplotype across a large genomic region in a mapping population would create a blind spot with no detectable genetic variation. To date, over 145 quantitative trait loci (QTL) have been mapped to genetic variants on chromosome 2 (http://www.informatics.jax.org, accessed 2/25/2016), and 730 features with phenotypic alleles (http://www.informatics.jax.org, accessed 2/25/2016) are localized to this chromosome. These include protein coding genes, chromosomal deletions, miRNA genes, antisense lncRNA, chromosomal duplications, chromosomal inversions, endogenous retroviral sequences, insertions, and lincRNA genes. Fixation of a single haplotype would eliminate our ability to detect the effects of variants in this region on complex traits, and would mask the region in expression QTL and other systems used in genetic analyses. Epistatic interactions involving loci on chromosome 2 would be limited to the WSB/EiJ haplotype, thereby either masking or exaggerating their effects in the DO population.

Remedial strategies

In order to maintain allelic variation on chromosome 2, we decided to intervene in the meiotic drive process. We sought an intervention that would deviate as little as possible from the original breeding strategy, in which two randomly selected progeny (one female and one male) from each of 175 DO lineages are assigned to new mating pairs at random with avoidance of paired siblings. Due to increasing demand for DO mice starting in generation G8, the randomized assignments were being made in duplicate, such that two males and two females were drawn from each litter and assigned to mating groups "A" and "B." Progeny from 175 mated pairs in the "A" group were selected to maintain the core DO population, and progeny from mated pairs in the "B" group were used primarily for distribution. The availability of two mated pairs per lineage expanded the available pool of matings and allowed us to reduce the frequency of WSB/EiJ alleles while avoiding a bottleneck in the core breeding population. Several remedial strategies could be employed to mitigate the impact of meiotic drive on genetic variation. One strategy was to fix the central region of chromosome 2 with the WSB/EiJ-derived haplotype, leaving the flanking regions segregating for all eight founder haplotypes. However, this strategy would leave a large "blind spot" in the middle of the chromosome due to reduced recombination in that region (Liu et al. 2014; Morgan et al. 2016b).
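Before turning to the alternative strategy, the pace of a sweep like the one described above can be illustrated with a deterministic one-locus model. This is a minimal sketch assuming random mating and drive acting only through heterozygous mothers, which transmit the driven allele with probability m (a population estimate of about 0.66 is cited below, with mating-dependent values from 0.5 to 1.0); the starting frequency and generation count are illustrative.

```python
def drive_trajectory(p0=0.125, m=0.66, generations=12):
    """Deterministic allele-frequency recursion under maternal meiotic drive.

    Mothers: WW transmit W with prob 1, Wa with prob m, aa with prob 0.
    Fathers transmit W at Mendelian rates, so their contribution is just p.
    Assumes random mating and equal fitness of all genotypes.
    """
    p, traj = p0, [p0]
    for _ in range(generations):
        maternal = p * p + 2 * p * (1 - p) * m   # P(W from mother)
        paternal = p                             # P(W from father), no drive
        p = (maternal + paternal) / 2
        traj.append(p)
    return traj

print(drive_trajectory()[-1])
# ~0.48 after 12 generations; background-dependent drive (up to 100%
# transmission) would push the sweep faster, toward the observed ~0.60
```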
An alternative strategy was to purge the WSB/EiJ allele, preserving genetic variation along the entire chromosome but with seven, rather than eight, founder haplotypes represented in the central region. We concluded that retaining as much segregating variation as possible was the preferred solution. We speculated that the latter strategy could be augmented by the reintroduction of a WSB/EiJ haplotype carrying a spontaneous copy number reduction at R2d2 that would be incapable of drive (Didion et al. 2015). The allelic distortion on chromosome 2 was first detected in genotypes from generations G8 and G9, but by the time these data were analyzed, the G12 matings had been established and the WSB/EiJ allele frequency had exceeded 0.60 (Didion et al. 2016). Based on the observed rate of change, we determined that it was still possible to purge the WSB/EiJ allele at R2d2 while maintaining the essential characteristics of the DO population (random assortment and balanced allele frequencies) across the uninvolved regions of the genome. The need to monitor the progress of the purge and its potential impact on other regions of the genome led us to the discovery of additional irregularities in the genetic makeup of the mitochondrial genome and sex chromosomes in the DO population, which we describe below.

DO production colony maintenance

The DO mouse production colony is maintained at The Jackson Laboratory in a standard barrier, specific pathogen-free facility. Breeder pairs were housed in individual pens of duplex cages (pen dimensions 12″ × 6″ × 5″, L × W × H) on pressurized, individually ventilated racks with hardwood chip/shaving bedding. The mice were fed Lab Diet 5K0Q (St. Louis, MO) ad libitum and were provided with filtered water in bottles acidified to pH 2.5-3.0. The room temperature was maintained at 70° (± 2°) and 50% (± 20%) humidity, with a light cycle of 14 hr on and 10 hr off. The pups were weaned at 3 wk of age and housed in duplex cages with up to five sex-segregated animals per duplex pen. The core colony is maintained in 175 lineages. At each new generation, two progeny (one female, one male) are selected at random from each lineage and assigned to mating pairs at random with avoidance of sib-mating. These progeny are selected from first litters when possible, and additional litters are used to provide mice for distribution. Due to an increase in demand at generation G8, we established a second mated pair within each lineage. Progeny from the "B" matings were primarily used for distribution. Breeding records are provided in File S1. Many users of DO mice provided their genotype data for haplotype inference using the DOQTL software. These samples were genotyped on one of the MUGA, MegaMUGA, or GigaMUGA (Neogen, Lincoln, NE) array platforms (Morgan et al. 2016a). Genotype data from DO mice are being archived at http://do.jax.org, and will also be accessible from the Mouse Phenome Database (http://mpd.jax.org). We strongly encourage users of DO mice to contact E.J.C. or G.A.C. to coordinate submission of genotyping data. The HS-CC population was bred at Oregon Health & Science University (OHSU) in the research colony of Robert Hitzemann. The HS-CC were formed from the eight CC founder strains using a pseudorandom breeding design (Iancu et al. 2010). The colony is maintained as 48 families using a rotational breeding strategy, i.e., a male from family one is bred to a female from family two, and so on. The colony is currently at G35.
The HS-CC mice are housed at the Portland Veterans Affairs Medical Center (VAMC) in a nonbarrier facility under standard conditions. At G25, 88 individuals were genotyped using the MegaMUGA (Morgan et al. 2016a).

Marker-assisted purge of WSB/EiJ alleles at the R2d2 locus

To perform the breeding intervention in a cost-effective manner, all mating pairs were genotyped for the presence of WSB/EiJ alleles on chromosome 2 at three SNPs: rs27943666, rs28048346, and rs28030588. Primer sequences are in File S2. Genotyping was performed by LGC Genomics (Beverly, MA). DNA was isolated using sodium hydroxide extraction followed by neutralization with Tris HCl. The genotype analysis enabled us to classify each potential breeder as a WSB/EiJ (W) carrier or as carrying only other alleles (a). Tracking of DO matings is matrilineal. Selected DO mice from generation N−1 are assigned to a mate pair with an identifier of the form GN0xxx, where N is the generation of the expected offspring and xxx is the lineage number of the dam. The sire is selected at random (with avoidance of siblings) and his maternal lineage is noted in the breeding records. Due to the nature of the production environment, there are some gaps and minor recording errors in the breeding records. Beginning with matings that produced the G8 generation, matings were set up in duplicate. The dam and sire of the A mating are full siblings of the dam and sire, respectively, in the B mating. Initially the A mating was designated for propagating the DO population and the B matings were used to expand production capacity. However, in the event that the A mating did not produce sufficient numbers of female and male offspring for the next generation, the B mating was available as a backup. In the event that neither the A nor the B mating produced a full set of offspring, another lineage was selected to provide either the dam, the sire, or both parents for the next generation matings. Ideally, these replacements would not occur and drift in the population would achieve its theoretical minimum value. In order to execute the purge of WSB/EiJ alleles at R2d2, we deviated from this mating scheme in two ways. First, both the A and B mate pairs were genotyped. Four of the nine mating types (Table 1) can produce at least some mice that do not carry a W allele: progeny of the aa × aa, Wa × Wa, Wa × aa, and aa × Wa matings. Of these, only offspring of crosses with a Wa parent need to be genotyped to identify aa progeny. Progeny of the aa × WW and WW × aa matings are all Wa and do not need to be typed. Progeny of the WW × Wa and Wa × WW crosses are either WW or Wa and can be identified by genotyping. Second, to populate the next generation, offspring of all but the WW × WW mating types were retained; progeny were typed as needed, and those with either one or zero W alleles were selected for the next generation. To minimize the W allele frequency in the subsequent generation, only a fraction of the Wa × WW / WW × Wa progeny were retained. By prioritizing those mated pairs that reduced or retained W alleles, and by selecting specifically those progeny that carried the minimum number of W alleles, we expected to bring the allele frequency from 62% to < 15% in one generation, and theoretically to purge the W allele completely in the next generation. The meiotic drive effect was originally reported to result in 66% transmission of W alleles from Wa matings based on population estimates.
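The genotyping triage described above (Table 1) can be captured in a few lines. A sketch assuming Mendelian segregation, used here only to enumerate the possible offspring genotypes per mating type; actual maternal transmission at R2d2 is distorted, as discussed next.

```python
from itertools import product

def offspring_genotypes(dam, sire):
    """Possible offspring genotypes at a biallelic W/a locus (no drive)."""
    gts = {''.join(sorted(m + p))            # 'W'+'a' -> 'Wa', etc.
           for m, p in product(list(dam), list(sire))}
    return gts

for dam, sire in product(['aa', 'Wa', 'WW'], repeat=2):
    gts = offspring_genotypes(dam, sire)
    can_give_aa = 'aa' in gts                # cross can yield non-carriers
    needs_typing = len(gts) > 1              # mixed litters must be typed
    print(dam, 'x', sire, sorted(gts),
          'aa possible:', can_give_aa, 'typing needed:', needs_typing)
```

Running this reproduces the logic in the text: only the four crosses with the possibility of aa offspring are useful for the purge, aa × WW and WW × aa litters are uniformly Wa and need no typing, and WW × Wa litters must be typed to separate WW from Wa progeny.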
Further work revealed that selection at R2d2 occurs only through the female germline and is background dependent, with the transmission ratio varying from 50 to 100% in favor of the W allele depending on the mating (Didion et al. 2015). Therefore, marker-assisted breeding and selection required multiple generations and progressed more slowly than our original estimates.

Haplotype reconstruction

Two litters are consistently produced in both the production and distribution colonies. Litter sizes and sex ratios were determined from the breeding records of these colonies. Analyses are based on data separated by litter. We used the allele calls from the MegaMUGA and GigaMUGA platforms (Morgan et al. 2016a) as inputs to a hidden Markov model (HMM) and performed haplotype reconstruction in each DO mouse. Briefly, we estimated the posterior probability that each mouse was in one of 36 possible unphased diplotype states, given the allele call data. The HMM produces 36 diplotype probabilities (which sum to one) for each mouse at each marker. We estimated the frequency of each founder allele along the genome by condensing the 36 diplotype probabilities to eight founder haplotype probabilities and summing these across samples.

Screen for R2d2 mutants and estimation of transmission ratio

The R2d2 copy number was estimated in 71 DO G16 females carrying the WSB/EiJ allele at R2d2 in heterozygosity using two copy number assays (Life Technologies, Carlsbad, CA, catalog numbers Mm00644079_cn and Mm00053048_cn). These assays target the Cwc22 gene that is present at the R2d2 locus (Didion et al. 2015). The female with the smallest copy number (DO-G16-107) was backcrossed for three generations to C57BL/6NJ males, selecting at each generation for heterozygous R2d2 females. TRD was tested in the progeny of nine females from this pedigree (the original DO G16 female, one F1, three N2, and four N3) by crossing them to C57BL/6NJ males and genotyping 18-60 pups. PCR-based genotyping was performed on crude whole genomic DNA extracted by heating tissue.

Sex chromosome abnormalities

DO mice were classified as males or females based on the mean hybridization intensity at markers located on the X and Y chromosomes; females have low signal from Y-linked probes and higher intensity at X-linked probes than males. XO females were identified by the lack of a Y chromosome, significantly reduced overall signal intensity for markers on the entire X chromosome, and the complete lack of heterozygosity on chromosome X. Males with partial duplication of the distal X chromosome were identified by the presence of a Y chromosome and the presence of heterozygous calls on the distal portion of the X chromosome. To rule out the possibility that heterozygous calls on the X chromosome in males were technical artifacts, we used only markers with robust performance in females. We also confirmed that males with putative duplications have female-like hybridization intensity at heterozygous markers on the distal X, consistent with the presence of two X chromosomes.

Data availability

Breeding records are provided in File S1. A complete sample list is provided in File S5. Genotype data are available at http://churchilllab.jax.org/website/Chesler_2016_DO.

RESULTS

Discovery and remediation of the chromosome 2 selfish sweep

Monitoring of DO mouse genotypes from multiple experiments revealed an increasing frequency of WSB/EiJ alleles in a region of chromosome 2 centered at 90 Mb (Figure 1A).
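The founder-frequency monitoring behind Figure 1 rests on the condensation step described under Haplotype reconstruction. A minimal numpy sketch of that linear map, with the ordering of the 36 unphased diplotype states (all pairs i ≤ j of eight founders) assumed for illustration:

```python
import numpy as np

FOUNDERS = list("ABCDEFGH")                              # eight DO founders
PAIRS = [(i, j) for i in range(8) for j in range(i, 8)]  # 36 unphased states

def founder_dosage(diplotype_probs):
    """Condense 36 diplotype probabilities into 8 founder haplotype dosages.

    diplotype_probs: array (..., 36) summing to 1 over the last axis.
    Returns dosages (..., 8) summing to 2 (two haplotypes per mouse);
    founder frequencies are dosages averaged over mice, divided by 2.
    """
    M = np.zeros((36, 8))
    for k, (i, j) in enumerate(PAIRS):
        M[k, i] += 1.0
        M[k, j] += 1.0
    return np.asarray(diplotype_probs) @ M

# Example: a certain A/B heterozygote gives dosage 1 for A and for B
p = np.zeros(36)
p[PAIRS.index((0, 1))] = 1.0
print(founder_dosage(p))   # [1. 1. 0. 0. 0. 0. 0. 0.]
```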
Although the effect was first noticed in data from generations G8 and G9, the G12 matings were already in place and the WSB/EiJ allele frequency had reached 60% before we could begin the intervention. Animals distributed from the DO colony through G19 retained a high but steadily decreasing frequency of WSB/EiJ alleles (Figure 1B). Marker-assisted selection of progeny was used to establish matings in the core DO colony beginning at generation G13, and the WSB/EiJ allele frequency declined rapidly for the next five generations (Figure 1, C and D). By G21, WSB/EiJ allele frequencies among breeder pairs dropped to zero, and the WSB/EiJ allele had been completely purged from the DO at generation G22. The finding of a G16 DO mouse with a spontaneous mutation on the WSB/EiJ haplotype that reduced copy number at R2d2 raised the possibility of reintroducing the WSB/EiJ haplotype to the selected region on chromosome 2. However, after comprehensive testing of the maternal transmission ratio at R2d2 in a pedigree segregating for the low copy-number allele, we concluded that the allele was still able to sustain meiotic drive (overall, 159 progeny inherited the WSB/EiJ allele while 108 inherited the non-WSB/EiJ allele, P = 0.0018). Therefore, we suspended the plan to reintroduce the WSB/EiJ haplotype with the mutated R2d2 allele.

Effect of selection on the genetic structure of the DO

In order to understand the impact of selection against the WSB/EiJ haplotype in the region spanning R2d2, we examined the frequency of all eight founder haplotypes across the genome, paying particular attention to chromosome 2 (Figure 2). We chose to contrast haplotype frequencies between distributed animals from generation G21 (which at the time of writing included genome-wide profiles of 504 mice) and generation G11 mice (one generation before the start of the purge, for which we have access to genome-wide profiling of 879 mice). In the targeted region of chromosome 2, the decrease in WSB/EiJ allele frequency was offset by increases in most of the remaining founder haplotypes. The highest frequencies are associated with the 129S1/SvImJ (24.4%) and C57BL/6J (29.6%) haplotypes. The wild-derived haplotypes increased from 9.8 to 13.9% for CAST and from 5.8 to 14.9% for PWK. There is an excess of the WSB haplotype throughout most of the nonselected portion of chromosome 2 due to linkage with R2d2, with a peak of 31.9% proximal to R2d2. The remaining seven founder haplotypes are not markedly changed, and there were no regions with complete loss of any founder haplotype outside of the R2d2 region. We expect that these imbalances will persist in the DO colony with some allowance for genetic drift. Genome-wide haplotype frequencies did not change substantially from generation G11 to G21 (File S3). The largest observed change in allele frequencies outside of chromosome 2 was an increase in the NOD haplotype from 13.6 to 33.5% over a region spanning 90 to 100 Mb on chromosome 15. The eight founder strains of the DO differ substantially in their relative contribution of unique sequence variants, with the largest contribution of variants coming from CAST/EiJ and PWK/PhJ, followed by WSB/EiJ. Thus, haplotype frequencies alone give an incomplete picture of the changes in standing genetic variation following the purge of the WSB/EiJ haplotype (Figure 3). The central region targeted by the marker-assisted purge contains 2772 SNP and 884 small indel variants that are private to WSB/EiJ.
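As an aside, the transmission-ratio test quoted above (159 vs. 108 progeny, P = 0.0018) is a two-sided binomial test against Mendelian 50:50 transmission and is easy to reproduce; a sketch using scipy:

```python
from scipy.stats import binomtest

# 159 of 267 progeny inherited the WSB/EiJ allele at R2d2
res = binomtest(k=159, n=159 + 108, p=0.5, alternative='two-sided')
print(round(res.pvalue, 4))   # ~0.002, consistent with the reported 0.0018
print(159 / 267)              # observed transmission ratio, ~0.60
```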
The flanking regions contain an additional 74,265 WSB-private SNPs and 22,618 WSB-private indels. These alleles are lost or nearly lost in the current DO population. The distribution of minor allele frequencies for the majority of SNPs on chromosome 2 not private to WSB/EiJ remained stable throughout the selection process (Figure 3 and File S4).

Chromosome Y and mitochondria

To determine whether there are significant changes in allele frequency in the mitochondria and the Y chromosome, and whether putative changes are related to the R2d2 purge, we compared the frequencies for G6 through G21. The initial description of the DO population (Svenson et al. 2012) did not report the haplotype frequencies of the mitochondria and the Y chromosome due to the lack of markers on the genotyping platform used at the time (MUGA). This shortcoming was addressed in the newer genotyping arrays (MegaMUGA and GigaMUGA) (Morgan et al. 2016a). In addition to pre- and post-purge generations, we also calculated the input frequencies based on the genotypes of incipient CC mice used to establish the DO population (Chesler et al. 2008; Collaborative Cross Consortium 2012; Svenson et al. 2012). There is a reduction in the mitochondrial haplogroup ABCD, corresponding to founder strains A/J, C57BL/6J, 129S1/SvImJ, and NOD/ShiLtJ, concomitant with the purge and strongest between G13 and G14 (Figure 4A). For the Y chromosome, there is no evidence of directional changes either before or after the purge (Figure 4B). We conclude that, despite the changes in mitochondrial haplotype frequency, the DO retains every type of Y chromosome and mitochondria. In fact, at G21, the frequencies of the four genetically distinct mitochondrial haplotypes are more evenly distributed than they were in the founding generations.

X chromosome abnormalities

Examination of the genotypes on the sex chromosomes at G11 and G21 led us to the unexpected finding of DO mice with two types of sex chromosome abnormalities (Figure 5 and Table 2). Seven XO females (out of 688 total females) are characterized by the absence of heterozygosity and reduced average hybridization intensity of the SNP probes on the X chromosome. In addition, we identified 65 DO males (out of 695 total males) with apparent duplication of the distal region of the X chromosome. The rate of XO females remains constant, but there is a significant increase, from 4.4% in generation G11 to 15% in G21 (P < 0.0001), in the frequency of males with duplications of the distal X chromosome. We cannot exclude the possibility that some of these duplications may represent translocations from the X to the Y chromosome. The Y chromosome of the CAST/EiJ strain already has an expanded pseudoautosomal region (PAR) due to an X-to-Y translocation (White et al. 2012). Duplications of the distal X are challenging to identify in the DO due to the presence of this extended PAR in CAST/EiJ (see male 0568 in Figure 5) and the fact that the length of the duplicated region appears to vary (compare males M382 and 1172R in Figure 5).

Litter size, sex ratio, and mating success in the DO

Analysis of breeding records for more than 5000 litters across 15 generations of DO breeders (File S1) shows little directional change in litter size and sex ratio. However, an effect on litter size becomes apparent when matings are partitioned according to sex and R2d2 genotype. As expected, litter size was not found to depend on sire genotype (Litter 1: P = 0.645 and Litter 2: P = 0.536).
In contrast, there is a highly significant reduction in litter size for R2d2-heterozygous females (Figure 6). The effect is present in both first (P = 1.22 × 10⁻²¹) and second (P = 2.7 × 10⁻¹⁰) litters. This result is consistent with previous reports that the level of meiotic drive observed in heterozygous females is positively correlated with a reduction in average litter size. We conclude that fixation of the non-WSB/EiJ allele has mitigated a potentially deleterious effect of heterozygosity at the R2d2 locus on DO maintenance and production. The proportion of failed matings among the DO breeders varied from 3 to 7% before and after the active intervention; however, higher rates of failure, reaching > 10% in G14, were observed during the peak of the selection process. Failed matings result in a disruption of the ideal randomized mating scheme. They reduce the effective population size and represent the only route for selection to affect mitochondrial or Y chromosome allele frequencies.

DISCUSSION

At 21 generations, the DO mouse population has proven to be a valuable resource for genetic mapping (Recla et al. 2014; Smallwood et al. 2014; Church et al. 2015; French et al. 2015; Gu et al. 2016) and systems genetics (Kelly et al. 2015; Chick et al. 2016). However, an ongoing selective sweep driven by the R2d2 locus occurred in the DO breeding colony and threatened to eliminate genetic variation across a large region of chromosome 2. We were able to rescue most of the standing allelic variation in this region using a marker-assisted breeding strategy to purge the WSB/EiJ allele that was responsible for the sweep. Our strategy had minimal impact on the independent assortment of unlinked loci and has maintained allelic variation throughout the genome. The allele purge reversed the naturally occurring process even though WSB/EiJ allele frequencies were quite high at the time marker-based selection was initiated. Five generations were required to completely remove the WSB/EiJ haplotype at R2d2 from the breeding colony. As of the current generation (G22), it appears that we have eliminated the meiotic drive allele, but we will monitor this locus for several more generations to ensure that the purge was complete. Although the decision to use a purge strategy retained much of the genetic variation on chromosome 2, some loss of variation was inevitable. In addition to the complete loss of private WSB/EiJ alleles in the central region flanked by the selection markers, there was a concomitant loss of WSB/EiJ in the nearby flanking regions of the genome. The other seven founder haplotypes, although retained in the region targeted by the purge, no longer occur in the expected ratios due to either drift or inadvertent selection. The most substantial distortion across this region appears to be a reduced frequency of private NOD/ShiLtJ alleles. Earlier detection and reversal of the segregation distortion in the DO population might have reduced this impact. Deviations from idealized scenarios in MPPs are not insurmountable. For example, variation attributable to private WSB/EiJ variants on chromosome 2 is still amenable to study in the context of the CC and DO resources. The availability of a second population of outbred animals derived from the same founder strains, the HS-CC, provides an opportunity to query the impact of WSB/EiJ alleles in the R2d2 region. It is interesting to note that the selective sweep in the HS-CC population was not complete by G25.
This could reflect the impact of differences in the breeding scheme or reduction in frequency of genetic background effects that amplify drive at R2d2. WSB/EiJ alleles are also present in the CC inbred strains . These resources will enable the identification of WSB/EiJ alleles with important phenotypic effects. Where questions of multi-locus interactions are concerned, strategies including CC · DO crosses, WSB/EiJ · DO crosses, and genome editing to reconstitute lost WSB/EiJ alleles in the DO population could be considered. We suspected that changes in the WSB/EiJ allele frequency at the R2d2 locus could affect litter size or sex ratio. We observed a decrease in litter size among R2d2 heterozygous dams and an increase in nonproductive matings during the purge. Given the breeding design of the DO, the haplogroup frequencies for the mitochondria and Y chromosome should not change due to drift. However, selection in favor of particular breeders during the purge as well as against unproductive breeding pairs could have an effect. For the Y chromosome, we observe little effect of the R2d2 purge. In contrast, there is a substantial effect on the mitochondria. We speculate that the strong reduction of the ABCD haplogroup frequency, in particular between G13 and G14, was caused by the overrepresentation of this haplotype in females that were either homozygous for the WSB allele at R2d2 (and thus excluded from the matings) or had the high TRD phenotypes (and thus contributed fewer breeders to the next generation). We believe that this association was due to chance, as we do not find any evidence for an association between mitochondria haplotype and presence or level of TRD in the DO (data not shown). We investigated other genomic features that could have been impacted by the purge, with particular focus on the sex chromosomes. It has been previously reported that the boundary of the pseudoautosomal region (PAR) of the sex chromosomes is located 430 kb proximal in the CAST/EiJ strain compared to other laboratory strains (White et al. 2012). In other words, this 430 kb interval is now present in both the X and Y chromosomes in CAST/EiJ, so the presence of heterozygosity in that region should be diagnostic for the presence of the CAST/EiJ Y chromosome. The X chromosome duplications in males described here extend further into the X chromosome beyond this "extended PAR" and are not exclusively associated with any specific Y chromosome haplogroup. Whether the duplications are X-linked, Y-linked, or pseudoautosomal, and whether they are associated with XO aneuploidy, is unknown. While the increased frequency of the duplication may be explained in part by the improved sensitivity to detect this type of abnormality with GigaMUGA (G21) as compared to MegaMUGA (G11), it is possible that biology is driving the accumulation of these duplications. The difference in the sizes of the duplicated segments and their prevalence among distributed DO mice ten generations apart suggest that these duplications are not rare in the DO population. Multi-parent crosses have become increasingly popular for QTL studies (Gnan et al. 2014;Huang et al. 2014;Tsaih et al. 2014;Dell'Acqua et al. 2015) due to the idiosyncratic histories of existing reference populations and the desire to meet ideal properties of uniform allele frequencies, randomized assortment, and recombination. Here, we demonstrate the need for proactive monitoring and potential corrective measures to maintain the utility of populations like the DO. 
Although we tried to eliminate biases that are typically introduced in conventional breeding programs, this was not entirely possible, and an unexpected, naturally occurring selection drove the population structure away from these ideals. In a randomized breeding scheme where synchronization of matings is required, mild selection against late reproductive maturity will also occur. Biological factors including meiotic drive, recombination hot spots, and other phenomena will ultimately dictate the uniformity of allele frequencies and the mapping precision of the population. Despite our best efforts to eliminate the most obvious sources of selection bias in breeding, such events are unavoidable. Our experience has provided several lessons for future efforts to construct multi-parent reference populations. Population monitoring is a crucial aspect of the development of genetic populations. It is essential for designers of MPPs to clearly prioritize resource development relative to the interesting research made possible by the study of an undisrupted breeding history. Prior definition of quality metrics by stakeholders, e.g., an acceptable range of segregation distortion, long-range linkage disequilibrium, and attrition due to reproductive variability, can facilitate rapid remedial measures to preserve essential population characteristics. In the case of the DO, remediation of the segregation distortion to restore the population to its utility as a resource required a deviation from the original breeding plan and from the noninterventionist ethos of its stakeholders. All multi-parent genetic reference populations deviate from the ideal in some sense. Selection and TRD are inevitable. However, with careful monitoring and selective interventions, the essential character of the population (balanced allele frequencies and low average kinship) can be retained. What is required for any population is a set of reliable lineage-specific genotyping primers for each cross progenitor, routine cost-effective monitoring of cross-sectional genome-wide deviation from expected allele frequencies, and a sufficiently rapid genotyping and breeding decision process to enable marker-assisted interventions. It is possible that other distortions could emerge as DO production continues. We have instituted a routine check on allele frequencies across the population, and will be able to rapidly detect and evaluate the need for future interventions. By carefully monitoring allele frequencies we can detect and reverse unexpected changes and preserve the value of the DO as a premier platform for systems genetics in mammals; we recommend proactive monitoring and intervention in the construction of multi-parent resources.

Figure 6. Litter size distribution among the DO breeders by allele at the R2d2 locus. Wa alleles in dams lead to smaller litter sizes, but not in sires. The three alleles are "aa" for homozygous non-WSB/EiJ, "Wa" for heterozygous WSB/EiJ, and "WW" for homozygous WSB/EiJ. Red dots are the median of each group and black boxes are the interquartile range. P-values are from a one-way ANOVA of litter size vs. genotype. ANOVA, analysis of variance; DO, Diversity Outbred.
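The P-values in the Figure 6 caption come from a one-way ANOVA of litter size against genotype. A minimal sketch of that computation on synthetic (made-up) litter sizes, purely to show the mechanics rather than the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic first-litter sizes by dam genotype (illustrative only):
aa = rng.poisson(8.0, 300)   # homozygous non-WSB dams
Wa = rng.poisson(6.5, 300)   # heterozygous dams: smaller litters on average
WW = rng.poisson(8.0, 100)

F, p = f_oneway(aa, Wa, WW)
print(F, p)   # a small p-value flags a genotype effect on litter size
```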
Low-energy resonances and bound states of aligned bosonic and fermionic dipoles

The low-energy scattering properties of two aligned identical bosonic and identical fermionic dipoles are analyzed. Generalized scattering lengths are determined as functions of the dipole moment and the scattering energy. Near resonance, where a new bound state is being pulled in, all non-vanishing generalized scattering lengths diverge, with the $a_{00}$ and $a_{11}$ scattering lengths being dominant for identical bosons and identical fermions, respectively, near both broad and narrow resonances. Implications for the energy spectrum and the eigenfunctions of trapped two-dipole systems and for pseudo-potential treatments are discussed.

Currently, the creation of ultracold heteronuclear ground state molecules poses one of the major experimental challenges in the field of ultracold physics [1]. The trapping of ultracold ground state molecules with large phase space density promises to allow an exciting array of novel research lines to be studied. Although the largest phase space density of ultracold ground state molecules achieved to date is still fairly small, a number of promising cooling schemes have been demonstrated [2]. Thus, it is expected that degenerate molecular gases with large electric dipole moments will be created in the laboratory in the near future. Polar molecules are a candidate for qubits in quantum computing [3] and may be used in high-precision measurements that aim at placing yet stricter limits on the electric dipole moment of the electron [4]. Furthermore, dipolar gases are predicted to show roton-like features [5] and to exhibit rich stability diagrams whose details depend on the trapping geometry [6]. The stability of dipolar atomic Cr condensates has recently been investigated experimentally. To enhance the anisotropic effects, which are due to Cr's magnetic dipole moment, the s-wave scattering length was tuned to zero by applying an external field in the vicinity of a Fano-Feshbach resonance [7]. To create and then utilize ultracold molecules, it is mandatory to develop a detailed understanding of the scattering properties of two interacting dipoles in free space and in a trap. Unlike the interaction between s-wave alkali atoms, the interaction between two dipoles is long-range and angle-dependent. A two-dipole system can, e.g., be realized experimentally by loading an optical lattice with either two or zero dipoles per site. If the optical lattice is sufficiently deep and if the interactions between nearest and next-to-nearest neighbors are absent or negligible, then each optical lattice site can be treated as an independent, approximately harmonic trap. This paper determines the scattering properties of two aligned dipoles, either identical bosons or identical fermions, as functions of the dipole moment and the scattering energy. In general, the dipoles can be either magnetic or electric; for concreteness, we restrict the following discussion to the scattering between molecular electric dipoles. Sequences of scattering resonances are found, which can be classified as "broad" and "narrow". For identical bosons, these resonances have previously been termed potential and shape resonances, respectively, and have been interpreted within the framework of adiabatic potential curves [8]. The resonance positions are correlated with the appearance of bound states in free space and "diving" states in the energy spectrum of two aligned dipoles under external confinement.
The nature of the broad and narrow resonances is further elucidated by analyzing the bound state wavefunctions. In addition, we show that the eigenequation of two aligned dipoles in a harmonic trap interacting through an anisotropic zero-range pseudo-potential reproduces much of the positive energy spectrum, but exhibits some peculiar unphysical behavior for small energies. The origin of this unphysical behavior is pointed out and a simple procedure that eliminates it is presented.

Neglecting hyperfine interactions and treating each dipole as a point particle, the interaction potential between two dipoles aligned along the z-axis is, for large interparticle distances $r$, given by $V_{dd}(\vec{r}) = d^2(1 - 3\cos^2\theta)/r^3$. Here, $d$ denotes the dipole moment and $\theta$ the angle between the relative distance vector $\vec{r}$ and the z-axis. We model the short-range interaction $V_{sr}(\vec{r})$ between the dipoles by a simplistic hard-wall potential, $V_{sr}(\vec{r}) = \infty$ for $r < r_c$ and $0$ for $r > r_c$, so that the full model potential is given by $V_m(\vec{r}) = V_{sr}(\vec{r})$ for $r < r_c$ and $V_{dd}(\vec{r})$ for $r > r_c$. The boundary condition imposed by $r_c$ can be thought of as introducing a short-range K-matrix, which is modified by the long-range dipole potential [9]. The characteristic length scale of $V_{sr}(\vec{r})$ is given by the hard-core radius $r_c$ and that of $V_{dd}(\vec{r})$ by the dipole length $D_*$, $D_* = \mu d^2/\hbar^2$, where $\mu$ denotes the reduced mass. The corresponding natural energy scales are given by $E_{r_c}$ and $E_{D_*}$, respectively [$E_{r_c} = \hbar^2/(\mu r_c^2)$ and $E_{D_*} = \hbar^2/(\mu D_*^2)$]. A straightforward scaling of the relative Schrödinger equation shows that $D_*$ and $r_c$ are not independent but that the properties of the system depend only on the ratio $D_*/r_c$ [10]. This ratio can be tuned experimentally by varying $D_*$ through the application of an electric field [9].

To obtain the K-matrix elements $K^{l',m_{l'}}_{l,m_l}$, where $l$ and $l'$ denote the orbital angular momentum quantum numbers of the incoming and outgoing partial waves, respectively, and $m_l$ and $m_{l'}$ the corresponding projection quantum numbers, we solve the relative Schrödinger equation for $V_m(\vec{r})$ for a fixed scattering energy $E_{sc}$ numerically. The azimuthal symmetry conserves the projection quantum number, and throughout we restrict our analysis to $m_l = 0$. The radial Schrödinger equation is propagated using the Johnson algorithm with adaptive step size [11]. The K-matrix elements $K^{l',0}_{l,0}(k) = \tan\delta_{l,l'}(k)$ are found by matching the log-derivative to the free-space solutions at sufficiently large $r$. Since the long-range part of $V_m(\vec{r})$ is proportional to the spherical harmonic $Y_{20}(\theta,\phi)$, the phase shifts $\delta_{l,l'}(k)$ are only non-zero if $|l - l'| \le 2$.

Figures 1 and 2 show the generalized scattering lengths $a_{l,l'}$ for two identical bosons and two identical fermions, respectively, for three different scattering energies $E_{sc}$ as a function of the dipole length $D_*$. The scattering lengths $a_{l,l'}(k)$, $a_{l,l'}(k) = -K_{l,l'}(k)/k$ (where $k$ denotes the wavevector, $k = \sqrt{2\mu E_{sc}/\hbar^2}$), are defined so that the $a_{l,l'}(k)$ approach a constant as $k \to 0$ [12,13]. The largest $D_*/r_c$ value considered in Fig. 1 is 40. If we choose $r_c \approx 10\,a_0$, then the largest dipole length considered in Figs. 1 and 2 is $D_*^{max} \approx 400\,a_0$, implying a minimum dipole energy $E_{D_*}^{min}$ of $1.27 \times 10^{-4}$ K. For the polar molecule OH, this corresponds to a maximum dipole moment of 0.404 Debye, a value that should be attainable experimentally.
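As a quick check on these scales, the sketch below evaluates the dipole length and the associated energy scales for the OH numbers quoted above, using the SI form $D_* = \mu d^2/(4\pi\epsilon_0\hbar^2)$ of the Gaussian-units expression. The constants and the OH reduced mass are our own inputs, not values taken from the paper.

```python
# Physical constants (SI); these numerical values are our own inputs.
hbar = 1.054571817e-34         # J s
amu = 1.66053906660e-27        # kg, atomic mass unit
a0 = 5.29177210903e-11         # m, Bohr radius
kB = 1.380649e-23              # J/K
debye = 3.33564e-30            # C m
four_pi_eps0 = 1.11265006e-10  # C^2 J^-1 m^-1

def dipole_length(mu, d_debye):
    """D* = mu d^2 / (4 pi eps0 hbar^2) for an electric dipole, in metres."""
    d = d_debye * debye
    return mu * d**2 / (four_pi_eps0 * hbar**2)

def energy_scale_K(mu, length):
    """E_L = hbar^2 / (mu L^2), converted to Kelvin."""
    return hbar**2 / (mu * length**2) / kB

mu_OH = 0.5 * 17.007 * amu               # assumed reduced mass of two OH molecules
print(dipole_length(mu_OH, 0.404) / a0)  # ~390 a0, close to the quoted 400 a0
print(energy_scale_K(mu_OH, 400 * a0))   # ~1.27e-4 K, the quoted E_D*^min
print(energy_scale_K(mu_OH, 10 * a0))    # E_rc ~ 0.20 K for r_c = 10 a0
```

Running this reproduces the quoted minimum dipole energy and confirms that the scattering energies quoted next sit deep in the threshold regime, $E_{sc} \ll E_{r_c}$.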
The scattering energies in Figs. 1 and 2 range from $9.36 \times 10^{-8}\,E_{r_c}$ to $9.36 \times 10^{-5}\,E_{r_c}$, or, using as before $r_c = 10\,a_0$, from $1.91 \times 10^{-8}$ K to $1.91 \times 10^{-5}$ K. Thus, the largest $E_{sc}/E_{D_*}$ value considered in Figs. 1 and 2 is 0.15. This places the present study in the regime where the minimum value of the cross section has been predicted to behave universally [14], but where the parameters of the two-body potential and the s-wave scattering length it results in, especially near resonance, are important [15].

The generalized scattering lengths obtained from the full coupled-channel calculation show deviations from the Born approximation (BA) for certain $D_*$ values. The positions of the "spikes" coincide with the resonance positions of $a_{00}$. Notably, the widths of the spikes decrease with increasing $l + l'$. The top panel of Fig. 1 shows the resonance positions as predicted by the WKB phase accumulated in different adiabatic potential curves [8]. The crosses, obtained by analyzing the WKB phase of the lowest adiabatic potential curve, predict the positions of the broad resonances very accurately. The squares, obtained by summing the WKB phases of all other adiabatic potential curves, predict the number of narrow resonances semi-quantitatively but do not predict their positions accurately [8].

Figures 2(a) and 2(b) show the generalized scattering lengths $a_{11}$ and $a_{31}$ for two aligned identical fermions interacting through $V_m(\vec{r})$ as a function of $D_*$. Away from resonance, $a_{11}$ and $a_{31}$ vary approximately linearly with $D_*$. The spikes in Fig. 2 are interpreted as resonances, which we term, as in the boson case, broad and narrow [22]. Figure 2 shows five broad and two narrow resonances (located at $D_* \approx 23.5\,r_c$ and $37.5\,r_c$). A key difference between dipole scattering of identical bosons and identical fermions is that the lowest non-vanishing scattering length for bosons (i.e., $a_{00}$) cannot be approximated by applying the BA to $V_{dd}(\vec{r})$ (the BA for $V_{dd}(\vec{r})$ gives $a_{00} = 0$), while the lowest non-vanishing scattering length for fermions (i.e., $a_{11}$) can be, away from resonance, approximated by the BA for $V_{dd}(\vec{r})$ (the BA for $V_{dd}(\vec{r})$ gives $a_{11} = -2D_*/5$) [13,21]. The crosses and squares shown in the top panel of Fig. 2 predict the resonance positions less accurately than in the case of identical bosons. Figures 1 and 2 show that the widths of broad and narrow resonances increase with increasing $E_{sc}$ for fixed $D_*/r_c$ and with increasing $D_*/r_c$ for fixed $E_{sc}$. Putting this together, we find that the resonance widths of both broad and narrow resonances increase with increasing $E_{sc}/E_{D_*}$.

To better understand the resonance structure in Figs. 1 and 2, we determine the bound state energies of the two interacting dipoles in free space. The Schrödinger equation for the relative coordinate is solved using two-dimensional B-splines. The two-dipole system supports a new bound state at those $D_*/r_c$ values where the scattering lengths $a_{00}$ and $a_{11}$ for two identical bosons and fermions, respectively, diverge. Solid lines in Figs. 3(a) and 3(b) show the bound state energy for two identical bosons in the vicinity of a broad and a narrow resonance, respectively, while solid lines in Figs. 3(c) and 3(d) show the bound state energy for two identical fermions in the vicinity of a broad and a narrow resonance, respectively. The two-body energy $E_b$ of weakly-bound s-wave interacting systems is well described by the s-wave scattering length $a_{00}$, $E_b = -\hbar^2/(2\mu a_{00}^2)$. To test if this simple pseudopotential expression holds for dipolar systems, we analytically continue the scattering lengths for $V_m(\vec{r})$ to negative energies.
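With an energy-dependent scattering length, the relation above becomes a self-consistency condition, $E_b = -\hbar^2/(2\mu\,a_{00}(E_b)^2)$. A minimal sketch of solving it, in units $\hbar = \mu = 1$ and with a hypothetical linear model for the analytically continued $a_{00}(E)$ (in practice the coupled-channel $a_{00}(E)$ described next would be supplied):

```python
from scipy.optimize import brentq

def a00(E, a_bar=50.0, slope=-2.0e2):
    # Hypothetical energy-dependent s-wave scattering length, analytically
    # continued to negative energies; a stand-in for the coupled-channel input.
    return a_bar + slope * E

def residual(E):
    # Self-consistency condition E_b = -1 / (2 * a00(E_b)^2) in hbar = mu = 1 units.
    return E + 1.0 / (2.0 * a00(E) ** 2)

# Bracket a root at small negative energy and solve.
E_b = brentq(residual, -1e-2, -1e-8)
print(E_b, -1.0 / (2.0 * a00(E_b) ** 2))  # the two values agree at convergence
```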
We obtain stable $a_{00}(E)$ and $a_{11}(E)$ for negative scattering energies by matching the coupled-channel solutions to the free-space solutions at relatively small $r$ values ($r_{max} \approx |k|^{-1}$). The bound state energies for two identical bosons in free space, determined self-consistently [16,17] from $E_b = -\hbar^2/(2\mu\,a_{00}(E_b)^2)$, are shown by circles in Figs. 3(a) and 3(b). Similarly, we determine the bound state energies for two identical fermions in free space by self-consistently solving the corresponding equation for $a_{11}$ [18,19] [circles in Figs. 3(c) and 3(d)]. Somewhat surprisingly, the bound state energies in the vicinity of both broad and narrow resonances are very well described by a single-channel expression for identical bosons and identical fermions (see below for further discussion).

In addition to the free-space system, we consider the trapped system. Dashed lines in Fig. 3 show the energies for two dipoles interacting through $V_m(\vec{r})$ under external harmonic confinement $V_{trap}$, $V_{trap} = \mu\omega^2 r^2/2$. Near resonance, the lowest state with positive energy changes rapidly and turns into a negative energy state with molecular-like character. The energy of this "diving" state is slightly higher than the energy of the free-space system (the trap pushes the energy up). Figures 4(a) and 4(b) show the scaled eigenfunctions $r\psi(r,\theta)$ for two identical bosons with $E \approx -0.88\,\hbar\omega$ as a function of $r$ for different $\theta$ near a broad and a narrow resonance, respectively. Similarly, Figs. 4(c) and 4(d) show the scaled eigenfunctions for two identical fermions with $E \approx -0.88\,\hbar\omega$ near a broad and a narrow resonance, respectively. In all panels, the wavefunction cut for $\theta = 0^\circ$ has the largest amplitude, reflecting the fact that the dipole-dipole potential is most attractive for $\theta = 0^\circ$. Interestingly, the nodal structure of the wavefunction for two identical bosons near a broad resonance [Fig. 4(a)] has a similar structure to that of the wavefunction of two identical fermions near a broad resonance [Fig. 4(c)]: both nodal surfaces show approximately spherical symmetry. On the other hand, the nodal structure of the wavefunction for two identical bosons near a narrow resonance [Fig. 4(b)] has a similar structure to that of the wavefunction for two identical fermions near a narrow resonance [Fig. 4(d)]: the nodal surfaces depend on both $r$ and $\theta$.

To quantify the higher partial wave contributions, we project the wave functions shown in Fig. 4 onto spherical harmonics. The s-wave contribution of the boson states near the broad and narrow resonances is about 95%, while the p-wave contribution of the fermion states near the broad and narrow resonances is about 95% and 80%, respectively. The gas-like states near resonance, in contrast, are dominated by a single partial wave (for bosons, e.g., the s-wave contribution of the energetically lowest-lying gas-like state is about 99%, while the d-wave contribution of the energetically next higher-lying state is about 99%). In the future, it will be interesting to investigate how the higher partial wave contributions of the weakly-bound anisotropic molecules affect the scattering properties between two such composite particles and, more generally, the BEC-BCS crossover-type physics. Lastly, we show that the entire energy spectrum of two aligned dipoles under external harmonic confinement interacting through $V_m(\vec{r})$ can be reproduced by a zero-range pseudopotential framework.
Since the anisotropy of the dipole-dipole interaction leads to a coupling of different partial waves, the pseudo-potential $V_{ps}(\vec{r})$ contains an infinite number of terms [20], $V_{ps}(\vec{r}) = \sum_{l,l'=0}^{\infty} g_{l,l'}(k)\,\Theta_{l,l'}(\vec{r})$, where the coupling strength $g_{l,l'}(k)$ is proportional to $-\tan\delta_{l,l'}(k)/k^{l+l'+1}$ and $\Theta_{l,l'}(\vec{r})$ denotes an operator [20]. Assuming the $a_{l,l'}$ vanish for $|l - l'| > 2$, as is the case for two interacting dipoles, the eigenequation for two particles under spherically symmetric external harmonic confinement can be elegantly written in terms of a continued fraction [21]. To obtain the eigenenergies for two aligned dipoles under external harmonic confinement interacting through $V_{ps}(\vec{r})$, we solve the eigenequation self-consistently, using the energy-dependent $a_{l,l'}(k)$ obtained for $V_m(\vec{r})$ as input parameters. The resulting energies, shown by crosses in Fig. 3, agree well with those obtained for $V_m(\vec{r})$ (dashed lines). However, for small $|E|$ the eigenequation for the pseudo-potential results in an unphysical eigenenergy [not shown in Figs. 3(a)-(d)]. For two identical bosons, e.g., the eigenequation for $V_{ps}(\vec{r})$ permits a solution with $E \approx 0.05\,\hbar\omega$, which is absent in the eigenspectrum of two identical bosons under external harmonic confinement interacting through $V_m(\vec{r})$. Importantly, if we restrict the pseudo-potential to the $V_{00}$ and $V_{11}$ terms for two identical bosons and fermions, respectively, the eigenspectra for $V_{ps}(\vec{r})$ and $V_m(\vec{r})$ agree very well for $E \lesssim 0.5\,\hbar\omega$ (two identical bosons) and $E \lesssim 1.5\,\hbar\omega$ (two identical fermions), and the unphysical eigenenergies are absent. This shows (i) that the scattering lengths $a_{00}$ (identical bosons) and $a_{11}$ (identical fermions) are dominant in this regime, and (ii) that the unphysical eigenenergies are due to the higher partial wave contributions of $V_{ps}(\vec{r})$. The latter can be understood as follows: the coupling strengths $g_{l,l'}(k)$ for two interacting dipoles are proportional to $a_{l,l'}(k)/k^{l+l'}$; since the $a_{l,l'}(k)$ are defined so that they approach a constant in the $k \to 0$ limit, the coupling strengths diverge as $k$ goes to zero for $l + l' > 0$. A detailed analysis of the eigenequation for $V_{ps}(\vec{r})$ shows that these divergences give rise to the unphysical eigenenergies for small $|k|$. No unphysical eigenenergies arise for larger $|k|$; in this regime, the $1/k^{l+l'}$ factor in $g_{l,l'}(k)$ can be thought of as a simple "rescaling". Furthermore, the unphysical eigenenergies do not arise if the phase shifts are obtained for a short-range model potential whose scattering lengths are defined by $-\tan\delta_{l,l'}(k)/k^{l+l'+1}$. Although the pseudo-potential reproduces the eigenenergies well, we note that the single-parameter description fails to describe the higher partial wave admixtures discussed in the context of Fig. 4.
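To illustrate the single-channel ($V_{00}$-only) limit of this self-consistent procedure, the sketch below solves the well-known transcendental eigenequation of Busch et al. for two particles with a regularized s-wave contact interaction in an isotropic harmonic trap, $\sqrt{2}\,\Gamma(3/4 - E/2)/\Gamma(1/4 - E/2) = a_{ho}/a_{00}(E)$ (with $E$ in units of $\hbar\omega$ and $a_{ho}$ the relative-coordinate oscillator length). This is not the full continued-fraction eigenequation of Ref. [21], and the energy-dependent $a_{00}(E)$ model is a hypothetical stand-in for the coupled-channel input.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

# Oscillator units: energies in hbar*omega, lengths in a_ho = sqrt(hbar/(mu*omega)).
def a00(E, a_bar=0.2, slope=0.05):
    # Hypothetical energy-dependent s-wave scattering length (units of a_ho).
    return a_bar + slope * E

def eigen_eq(E):
    # Busch et al. relation, written so that its roots are the eigenenergies:
    # sqrt(2) Gamma(3/4 - E/2) / Gamma(1/4 - E/2) - a_ho / a00(E) = 0
    return np.sqrt(2.0) * gamma(0.75 - 0.5 * E) / gamma(0.25 - 0.5 * E) - 1.0 / a00(E)

energies = []
grid = np.linspace(-1.0, 8.0, 2001)
vals = [eigen_eq(E) for E in grid]
for E1, E2, f1, f2 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if np.isfinite(f1) and np.isfinite(f2) and f1 * f2 < 0:
        root = brentq(eigen_eq, E1, E2)
        if abs(eigen_eq(root)) < 1e-6:  # reject spurious sign flips at Gamma poles
            energies.append(root)
print(energies)  # lowest relative eigenenergies, in units of hbar*omega
```

In the non-interacting limit $a_{00} \to 0$ the roots collapse onto the unperturbed s-wave trap levels $E = 3/2, 7/2, \dots$, which provides a simple sanity check of the implementation.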
In summary, this paper considers the scattering and bound state properties of two interacting dipoles near resonance. Although our analysis has been performed for a simple model potential, we believe that the main conclusions hold more generally. Near resonance, the magnitude of all non-vanishing scattering lengths becomes large, with $|a_{00}|$ being largest for identical bosons and $|a_{11}|$ for two identical fermions. We have found that the wave function of weakly-bound two-dipole systems contains higher partial wave contributions, raising interesting perspectives for studying BEC-BCS crossover-type physics. Despite the admixture of higher partial waves, a single-parameter pseudo-potential treatment reproduces the eigenenergy of the two-dipole system very accurately.
2008-06-24T21:17:21.000Z
2008-06-24T00:00:00.000
{ "year": 2008, "sha1": "1f59214f652dfb38f1d2eb2c1e3e96bc0a826eeb", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0806.3991", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1f59214f652dfb38f1d2eb2c1e3e96bc0a826eeb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
260979910
pes2o/s2orc
v3-fos-license
Case Report: Durable therapy response to Osimertinib in rare EGFR Exon 18 mutated NSCLC Up to 20% of all non-small cell lung cancer patients harbor tumor-specific driver mutations that are effectively treated with tyrosine kinase inhibitors. However, for the rare EGFR deletion-insertion mutation of exon 18, there is very little evidence regarding the effectiveness of tyrosine kinase inhibitors. A particular challenge for clinicians in applying tyrosine kinase inhibitors is not only diagnosing a mutation but also interpreting rare mutations with unclear therapeutic significance. Thus, we present the case of a 65-year-old Caucasian male lung adenocarcinoma patient with an EGFR Exon 18 p.Glu709_Thr710delinsAsp mutation of uncertain therapeutic relevance. This patient initially received two cycles of standard platinum-based chemotherapy without any therapeutic response. After administration of Osimertinib as second line therapy, the patient showed a lasting partial remission for 12 months. Therapy-related toxicities were limited to mild thrombocytopenia, which ceased after dose reduction of Osimertinib. To our knowledge, this is the first report of effective treatment of this particular mutation with Osimertinib. Hence, we would like to discuss Osimertinib as a viable treatment option in EGFR Exon 18 p.Glu709_Thr710delinsAsp mutated lung adenocarcinoma.

Introduction
Lung cancer is a global health problem as it is the most common cause of cancer-related deaths worldwide. The prognosis of lung cancer remains poor, as most patients initially present with distant metastases. During the past decades, therapeutic options for lung cancer have improved. Modern treatment of lung cancer relies on multimodal therapeutic concepts and includes radiation, surgery, chemotherapy, immunotherapy and targeted therapies with kinase inhibitors. Patients with driver mutations such as mutations in EGFR, BRAF, ALK, RET, KRAS Gly12Cys, ROS1 and NTRK1/2/3 fusions have benefited most notably from the development of targeted therapies. Yet, these patients represent only a minority (~20%) of the entire lung cancer patient population (1). A major challenge for oncologists in day-to-day clinical routine is to determine whether a rare mutational pattern in a non-small cell lung cancer (NSCLC) patient might be responsive to an unapproved tyrosine kinase inhibitor (TKI) therapy (2-4). The scarce clinical evidence available does, however, show that TKIs such as Afatinib and Osimertinib indeed have clinical efficacy in rare EGFR mutations (5-8). Furthermore, for many mutations it is still unclear whether they even have an activating character or are mere incidental findings.
Case presentation
We present the case of a 65-year-old Caucasian male who was diagnosed with stage IV NSCLC in September 2021. The patient initially presented with symptoms of progressive dyspnea, exercise intolerability and recurring thorax pain. As the patient had a known history of cardiovascular disease, cardiac magnetic resonance imaging (MRI) was performed, revealing an incidental nodule of the left posterior inferior lung lobe. He was then referred to our lung cancer center. The patient teaches law as a professor at a university and there was no known family history of cancer. However, the patient had a smoking history of 40 pack years. The patient did not show any further risk factors for lung cancer such as exposure to asbestos, radiation or other potential hazards. Initial workup included a bronchoscopy with endobronchial ultrasound and transbronchial needle aspiration (EBUS-TBNA). However, two consecutive bronchoscopies failed to deliver a malignant cytology sample of the tumor for further workup. A subsequent fluorodeoxyglucose positron emission tomography-computed tomography (FDG-PET-CT) revealed a hypermetabolic tumor of the left lung, various bone metastases of the spine and a singular metastasis of the left adrenal gland (Figure 1). Finally, the histology of the tumor was obtained through drainage of a pleural effusion of the left lung. Pathological examination revealed an adenocarcinoma of the lung and the initial staging of the patient resulted in cT1 cN1 cM1c, UICC IVB. Comprehensive molecular diagnostics fulfilling the standards of the national Network for Genomic Medicine (Germany) were performed. Targeted next generation sequencing (NGS) with a TSO500 panel (Illumina) was performed to detect single nucleotide variants and small insertions or deletions in 523 genes recurrently affected by mutations in various cancer types. This analysis further evaluated copy number variants of 59 genes, microsatellite instability and tumor mutation burden. Additionally, the Archer FusionPlex Lung panel was used to detect fusion transcripts of 17 genes including ALK, ROS1, RET and NTRK1-3. Fluorescence in situ hybridization (FISH) was performed to detect MET amplifications. These studies revealed an EGFR Exon 18 mutation (p.Glu709_Thr710delinsAsp), a neomorphic U2AF1 mutation and a likely inactivating mutation in PPC6, a negative regulator of MEK. Further, likely and known inactivating mutations in ATM, AR and DDX41 and variants of unknown significance in six further genes were detected (Table 1).
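The delins notation above follows standard HGVS protein nomenclature. As a convenience for readers who encounter such strings in molecular reports, the sketch below parses the single pattern used in this case into its components; it is a minimal regex, not a general HGVS parser, and the function and variable names are our own.

```python
import re

# Minimal parser for HGVS protein deletion-insertion strings of the form
# p.Glu709_Thr710delinsAsp (a two-residue range deleted, one residue inserted).
DELINS = re.compile(
    r"p\.([A-Z][a-z]{2})(\d+)_([A-Z][a-z]{2})(\d+)delins([A-Z][a-z]{2})"
)

def parse_delins(hgvs: str) -> dict:
    m = DELINS.fullmatch(hgvs)
    if m is None:
        raise ValueError(f"not a simple delins string: {hgvs}")
    return {
        "deleted": [(m.group(1), int(m.group(2))), (m.group(3), int(m.group(4)))],
        "inserted": m.group(5),
    }

print(parse_delins("p.Glu709_Thr710delinsAsp"))
# {'deleted': [('Glu', 709), ('Thr', 710)], 'inserted': 'Asp'}
```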
No ALK, ROS1, RET or NTRK1/2/3 fusion transcripts and no MET amplifications were found. Tumor mutation burden was 8.6 variants/megabase pair (Mbp). At the same time, there was no expression of programmed death-ligand 1 (PD-L1) on tumor cells. After primary diagnosis of the NSCLC in September 2021, we initiated standard-of-care first-line treatment. The initial regimen was Cisplatin (75 mg/m²) and Pemetrexed (500 mg/m²) administered every three weeks, starting in mid-October 2021. The patient received two cycles of therapy in total without any major side effects. The bone metastases were additionally treated with intravenous infusions of zoledronic acid every other month, commencing in October 2021. The decision to waive radiation therapy in this patient was based on the absence of a significant symptom burden associated with the bone metastases, such as pain or hypercalcemia. Furthermore, no osteolytic lesions at risk of fracture were detectable. The patient exhibited good tolerance to zoledronic acid, which was utilized as an adjunctive therapy alongside all systemic treatments thereafter. To monitor therapeutic success, we conducted a computed tomography (CT) scan in December 2021. Unfortunately, this follow-up scan revealed a progression of the primary tumor according to Response Evaluation Criteria in Solid Tumors (RECIST). Although none of the distant metastases progressed, the patient's pleural effusion required more frequent drainage. As the patient furthermore suffered from severe nausea and vomiting from cisplatin, we decided to end chemotherapy and initiate TKI therapy with Osimertinib. This decision was based on case reports previously describing the use of TKIs for this particular EGFR mutation with variable success (9-12). We began treatment with Osimertinib at the beginning of December 2021, starting with 80 mg taken orally once daily. The patient tolerated the administration of Osimertinib well and did not have any clinical signs of side effects or toxicities at first follow-up. Nevertheless, it was necessary to reduce the Osimertinib dose to 40 mg daily as the patient developed worsening thrombocytopenia (nadir of 114 giga/l) three weeks into his TKI treatment. After dose reduction, the thrombocyte count remained stable at >120 giga/l. We conducted a short-term CT follow-up examination in January 2022, which revealed a comprehensive therapeutic response of the NSCLC to Osimertinib therapy. The various bone metastases displayed increasing sclerosis compatible with a notable therapeutic response. The aforementioned pleural effusion likewise regressed. Additionally, the patient continued to tolerate Osimertinib without any further notable toxicities. Follow-up CT scans were conducted in March and August of 2022, which showed stable disease based on RECIST. However, the patient again developed a progressive pleural effusion in August 2022. The effusion was initially solely monitored using ultrasound. Regrettably, tumor progression was eventually noted on a further follow-up CT scan in November 2022 and Osimertinib therapy was discontinued. The pleural effusion was now treated with pleurodesis. Again, malignant NSCLC cells were detectable in the pleural fluid and we repeated a comprehensive pathological and molecular workup using NGS (TSO500). Here, PD-L1 status was assessed at 5% on tumor cells in the newly acquired sample, but otherwise the mutation pattern was identical to the initial analysis we conducted.
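The response calls in this report (progression, partial remission, stable disease) follow RECIST. As a rough illustration of how such calls are derived from target-lesion measurements, the sketch below applies the published RECIST 1.1 sum-of-diameters thresholds; the numbers and function names are illustrative and not taken from this case.

```python
def recist_target_response(baseline_mm: float, nadir_mm: float, current_mm: float) -> str:
    """Classify target-lesion response per RECIST 1.1 sum-of-diameters rules."""
    if current_mm == 0:
        return "CR"  # complete response: disappearance of all target lesions
    # PD: >=20% increase from the nadir AND >=5 mm absolute increase
    if current_mm - nadir_mm >= 5 and (current_mm - nadir_mm) / nadir_mm >= 0.20:
        return "PD"
    # PR: >=30% decrease from the baseline sum of diameters
    if (baseline_mm - current_mm) / baseline_mm >= 0.30:
        return "PR"
    return "SD"

# Illustrative numbers only:
print(recist_target_response(baseline_mm=62, nadir_mm=40, current_mm=44))  # SD
print(recist_target_response(baseline_mm=62, nadir_mm=40, current_mm=50))  # PD
```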
As the patient reported a history of 40 pack years and no prior treatment with immunotherapy, the decision was made for a chemo-immunotherapy re-induction third-line therapy regimen. The therapy was initiated in late November 2022 and consisted of Carboplatin AUC 5 (550 mg absolute dose), Pemetrexed 500 mg/m² and Pembrolizumab 200 mg administered every three weeks (Figure 2). The patient received two cycles of this treatment and tolerated it well. In January 2023, a follow-up CT scan revealed a mixed response to the applied cycles of chemo-immunotherapy. The patient again exhibited progressive pleural effusions on both sides, while the primary tumor in the left lung remained constant. The size of mediastinal lymph nodes was decreasing, but some lymph nodes in the retroperitoneal and mediastinal regions showed minor progression. At the same time, bone lesions remained stable compared to previous CT scans and no further distant metastases were detectable. The CT examination was classified as stable disease based on RECIST. As the patient consistently showed good therapy tolerance, two additional cycles of chemo-immunotherapy were administered with unchanged dosage. In February 2023, another follow-up CT scan yet again showed a stable disease state based on RECIST. Both the primary lung tumor and lymph nodes displayed no significant changes in size. Notably, the bone metastases demonstrated progressive sclerosis, further indicating therapy response. After completing four cycles of chemo-immunotherapy, Carboplatin and Pemetrexed were discontinued, while Pembrolizumab monotherapy was continued every three weeks. As of May 2023, the patient underwent another CT follow-up examination, which once more showed a stable disease state with a minor reduction of primary lung tumor size. No new distant metastases or other irregularities were observed. As of June 2023, the patient is continuing Pembrolizumab monotherapy.

Discussion and conclusion
For patients with classical EGFR mutations in exons 19 and 21, TKI therapies have had a remarkable impact on progression-free and overall survival. The optimal therapy for rare EGFR Exon 18 p.Glu709_Thr710delinsAsp mutated NSCLC patients has not yet been determined. This case report emphasizes the importance of sophisticated genetic testing via NGS in NSCLC patients. Unfortunately, not all lung cancer patients receive NGS prior to therapy initiation. At the same time, growing knowledge and therapeutic possibilities with newly developed TKIs are becoming more challenging for clinicians, as it becomes increasingly complex to make ideal treatment decisions. This is especially true for applying targeted therapies in rare mutations with unclear or yet unknown clinical implication. For this reason, various databases have been developed during recent years to catalog available knowledge on rare mutations and support clinicians in making appropriate treatment decisions. However, it is also imperative for clinicians to share new insights in applying targeted therapies. Numbers of patients with uncommon mutations will continue to be considerably low and structured clinical trials for such mutations will likely remain rare. Therefore, case reports may offer valuable insight for such mutations. Additionally, structured large-scale national or international investigations, such as by the national network for genomic medicine in Germany or the French ERMETIC-IFCT network (13, 14), are even more important. Regarding this clinical case, it is notable that we conducted a second NGS examination with pleural fluid obtained at tumor progression in November 2022. This examination revealed an identical mutation pattern in comparison to the assay we conducted at first diagnosis. The only marker differing from the initial workup was PD-L1. Nevertheless, the tumor progressed regardless of Osimertinib therapy. Several previous case reports have been published describing the use of TKI therapy in NSCLC patients with rare EGFR mutations. However, the effectiveness of these treatments has often been limited. Moreover, all patients in these reports received either first- or second-generation EGFR-TKIs, and their clinical characteristics and demographics differed significantly from the patient described in this report. Ackermann et al. (15) presented the case of an 88-year-old female non-smoker who received Erlotinib and exhibited a partial response lasting for 4 months. Sousa et al. (10) described the case of a 66-year-old female with a smoking history who was treated with Gefitinib and showed a progression-free survival of 4 months and an overall survival of 24 months.
Furthermore, Xu et al. (4) conducted an analysis of Chinese patients with various rare EGFR mutations, comparing the effectiveness of first- or second-generation EGFR-TKIs to chemotherapy or a combination of chemotherapy and TKIs. This study suggested that a combination of first-generation TKIs and chemotherapy could be equally effective as treatment with Afatinib, a second-generation TKI, alone. Additionally, Wei et al. (12) reported successful treatment of EGFR Exon 18 insertion p.Glu709_Thr710delinsAsp mutated NSCLC with Afatinib, followed by Almonertinib after tumor progression. The progression-free survival for Afatinib was 23 months, which was nearly twice as long as in our reported case. However, the patient in this report had different demographics and clinical characteristics, including Asian ethnicity and a different tumor stage. Previously, both Osimertinib and Afatinib have shown efficacy in clinical trials with rare EGFR mutations. During the LUX-Lung trials, patients treated with Afatinib showed an overall response rate (ORR) of up to 70%, whereas patients in the UNICORN study treated with Osimertinib showed an ORR of 60% (5, 6, 8). However, none of the patients included carried an EGFR Exon 18 insertion p.Glu709_Thr710delinsAsp mutation, making it unclear which TKI provides the greatest therapeutic benefit for this particular mutation. However, as NSCLC commonly metastasizes to the brain, we decided to implement Osimertinib instead of Afatinib due to its superior intracerebral efficacy. Our decision was furthermore based on its more favorable profile regarding adverse effects. Additionally, the use of immunotherapy as initial treatment for this patient is similarly debatable. Our decision was to refrain from administering immunotherapy as first-line treatment due to the identified EGFR mutation of uncertain clinical significance. In addition, the absence of PD-L1 expression in the tumor cells of the pleural effusion likewise influenced our decision. However, the extent to which these tumor cells from the pleural fluid resemble the primary NSCLC lung tumor remains likewise debatable. Nonetheless, initiating immunotherapy upfront would have been justifiable in this case, given the patient's smoking history of 40 pack years. Another point of discussion revolves around the re-induction therapy regimen following treatment failure of Osimertinib. Applying the IMpower150 regimen, comprising Carboplatin, Paclitaxel, Bevacizumab and Atezolizumab, would have also been a viable therapeutic option. In conclusion, the optimal treatment approach for this particular mutation remains undecided and might also depend on individual patient characteristics. To our knowledge, this is the first description of a successful therapeutic response to Osimertinib treatment in an EGFR Exon 18 p.Glu709_Thr710delinsAsp mutated NSCLC patient. This case report contributes to the understanding of this rare mutation and we would like to propose Osimertinib as a feasible treatment option.

FIGURE 1 PET-CT scans from initial presentation in October 2021. (A, B) The patient presents with an 18F-fluorodeoxyglucose (FDG)-positive lesion in the left superior lobe and an ipsilateral pleural effusion. (C) Ipsilateral FDG-positive lymph node in the aortopulmonary window. (D, E) FDG-positive bone lesions in the 10th thoracic vertebra (D) and the 2nd lumbar vertebra (E), resulting in a clinical classification of T1 N1 M1c, Stage IVB, according to the 8th UICC edition.
2023-08-19T15:13:45.761Z
2023-08-16T00:00:00.000
{ "year": 2023, "sha1": "45f44018c6253ba5bd5532fb066cb874acebdd57", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2023.1182391/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c94f3a4f045e8a6bab53f9720d06dcfd0c2b614d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1942235
pes2o/s2orc
v3-fos-license
A Mouse with an N-Ethyl-N-Nitrosourea (ENU) Induced Trp589Arg Galnt3 Mutation Represents a Model for Hyperphosphataemic Familial Tumoural Calcinosis Mutations of UDP-N-acetyl-alpha-D-galactosamine polypeptide N-acetyl galactosaminyl transferase 3 (GALNT3) result in familial tumoural calcinosis (FTC) and the hyperostosis-hyperphosphataemia syndrome (HHS), which are autosomal recessive disorders characterised by soft-tissue calcification and hyperphosphataemia. To facilitate in vivo studies of these heritable disorders of phosphate homeostasis, we embarked on establishing a mouse model by assessing progeny of mice treated with the chemical mutagen N-ethyl-N-nitrosourea (ENU), and identified a mutant mouse, TCAL, with autosomal recessive inheritance of ectopic calcification, which involved multiple tissues, and hyperphosphataemia; the phenotype was designated TCAL and the locus, Tcal. TCAL males were infertile with loss of Sertoli cells and spermatozoa, and increased testicular apoptosis. Genetic mapping localized Tcal to chromosome 2 (62.64–71.11 Mb), which contained the Galnt3 gene. DNA sequence analysis identified a Galnt3 missense mutation (Trp589Arg) in TCAL mice. Transient transfection of wild-type and mutant Galnt3-enhanced green fluorescent protein (EGFP) constructs in COS-7 cells revealed endoplasmic reticulum retention of the Trp589Arg mutant, and Western blot analysis of kidney homogenates demonstrated defective glycosylation of Galnt3 in Tcal/Tcal mice. Tcal/Tcal mice had normal plasma calcium and parathyroid hormone concentrations; decreased alkaline phosphatase activity and intact Fgf23 concentrations; and elevation of circulating 1,25-dihydroxyvitamin D. Quantitative reverse transcriptase-PCR (qRT-PCR) revealed that Tcal/Tcal mice had increased expression of Galnt3 and Fgf23 in bone, but that renal expression of Klotho, 25-hydroxyvitamin D-1α-hydroxylase (Cyp27b1), and the sodium-phosphate co-transporters type-IIa and -IIc was similar to that in wild-type mice. Thus, TCAL mice have the phenotypic features of FTC and HHS, and provide a model for these disorders of phosphate metabolism.

Phenotypic Identification of Tumoural Calcinosis (TCAL) Mice
Plasma biochemical analysis, at 12 weeks of age, of 14 G3 progeny (10 males and 4 females) derived from matings between parents and their offspring to yield autosomal recessive phenotypes revealed three mice (2 males and 1 female) to have plasma phosphate concentrations of 3.53 mmol/l, 3.10 mmol/l and 2.87 mmol/l, values that were >3 standard deviations (SD) above the mean plasma phosphate for matched wild-type G3 controls from other unrelated cohorts (mean ± SD = 1.90 ± 0.28 mmol/l; n = 80, 28 males and 52 females). Radiography revealed these 3 mice to have widespread soft tissue opacities (Fig. 2A). Thus, these mutant mice, which had ectopic calcification in association with hyperphosphataemia, displayed phenotypic traits reminiscent of TC, and the phenotype was designated TCAL and the locus, Tcal. Dental and retinal abnormalities were not identified in these TCAL mice. Breeding of affected TCAL males (2 mice from the original G3 progeny and 2 newly bred affected G3 mice, aged 10-16 weeks) with 8 different wild-type C3H females failed to yield any pregnancies, thereby suggesting that the TCAL males were infertile. However, TCAL female mice were fertile, and interbreeding of their progeny confirmed that TCAL was inherited as an autosomal recessive trait.
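The screening criterion above (plasma phosphate more than 3 SD above the wild-type mean) is easy to reproduce. The sketch below applies it to the cohort statistics quoted in the text; the function name is our own.

```python
def flag_hyperphosphataemia(values_mmol_l, mean=1.90, sd=0.28, n_sd=3):
    """Flag mice whose plasma phosphate exceeds mean + n_sd * SD of wild-type controls."""
    cutoff = mean + n_sd * sd  # 1.90 + 3 * 0.28 = 2.74 mmol/l
    return [(v, v > cutoff) for v in values_mmol_l]

# The three G3 outliers reported in the text all exceed the 2.74 mmol/l cutoff:
print(flag_hyperphosphataemia([3.53, 3.10, 2.87]))
# [(3.53, True), (3.1, True), (2.87, True)]
```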
TCAL Mice have Ectopic Calcification, Testicular Abnormalities and Increased Apoptosis
Von Kossa staining of tissues from the 3 TCAL affected G3 mice, described above, and 3 unaffected littermates (2 males and 1 female) revealed ectopic calcifications in subcutaneous tissues, cutaneous striated muscle, heart, aorta, kidney, tongue (Fig. 2B) and testicular artery (Fig. 2C) only in those sections from TCAL mice. In addition, haematoxylin and eosin (H&E) staining of TCAL mouse testes revealed disorganisation of the seminiferous tubules with a marked reduction of Sertoli cells and spermatozoa (Fig. 2C), consistent with a significant loss of germ cells and the observed infertility of these male TCAL mice. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining of testicular sections revealed the TCAL male mice to have increased apoptosis in the lumen and periphery of seminiferous tubules, which likely involved spermatozoa, and Sertoli cells or spermatocytes (Fig. 2D), respectively. In addition, TUNEL staining of kidney sections revealed that TCAL male mice had increased apoptosis involving the interstitial cells in the renal medulla (Fig. 2E).

Figure 1. The N-terminal transmembrane domain of GALNT3 is formed by residues 20 to 37; a glycosyltransferase domain is formed by residues 188 to 374; and a carbohydrate-binding domain is formed by residues 506 to 630, and contains two QXW repeats formed by residues 587-589 and 625-627, respectively. Human GALNT3 has four potential N-linked glycosylation sites (shown as branches) at amino acid residues 132, 297, 484 and 619, respectively [61], whereas mouse Galnt3 has two potential N-linked glycosylation sites (not shown) at amino acid residues 297 and 484 (NetNGlyc 1.0). Twenty-five GALNT3 mutations (10 missense, 6 nonsense and 9 frameshift/deletion) have been reported in patients with familial tumoural calcinosis (FTC) and hyperostosis-hyperphosphataemia syndrome (HHS) (asterisked); details of these 25 GALNT3 mutations are provided in Table 1. Four GALNT3 mutations (Glu281Gly, Leu366Arg, Arg438Cys and 464-508 deletion) have been reported in patients with FTC and HHS (bold and asterisked), thereby indicating that these 2 disorders are allelic variants [3,6,18,19,24]. The location of the ENU-induced mouse TCAL Trp589Arg mutation, which involved an evolutionarily conserved Trp (W) residue (Fig. 3D), is indicated.

Genetic mapping localized Tcal to an 8.47 Mb interval on chromosome 2 (Fig. 3A). This interval contained 95 genes, which included Galnt3 [2]. DNA sequence analysis of the Galnt3 gene revealed a T to A transversion at codon 589 that resulted in a missense mutation, Trp589Arg (Fig. 3B). The mutation was confirmed using the amplification refractory mutation system (ARMS) PCR method [33]. Thus, PCR using wild-type (WT)-specific primers yielded a 307 bp product only in DNA from unaffected mice (WT or heterozygous (Tcal/+)), whereas mutant-specific primers yielded a 230 bp product only in DNA from TCAL affected mice (Tcal/Tcal) or unaffected heterozygotes (Tcal/+) (Fig. 3C). The Trp589Arg mutation was found to involve an evolutionarily conserved Trp (W) residue (Fig. 3D) that forms part of the first of two QXW repeats within the carbohydrate-binding domain of GALNT3 (Fig. 1).

In vitro and in vivo Functional Characterization of Mutant Galnt3
To investigate the functional consequences of the Trp589Arg Galnt3 mutation in vitro, WT and mutant enhanced green fluorescent protein (EGFP)-tagged Galnt3 cDNA constructs were transfected in COS-7 cells and their sub-cellular localization assessed by immunofluorescence and confocal microscopy.
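Stepping back to the sequence change itself: the T to A transversion described above converts the tryptophan codon TGG into the arginine codon AGG. The sketch below verifies this with a minimal fragment of the standard genetic code (only the two codons needed here; the variable names are our own).

```python
# Minimal fragment of the standard genetic code, enough for this check.
CODON_TABLE = {"TGG": "Trp", "AGG": "Arg"}

wt_codon = "TGG"                  # wild-type codon 589 (Trp)
mut_codon = "A" + wt_codon[1:]    # T-to-A transversion at the first position
print(wt_codon, "->", mut_codon)  # TGG -> AGG
print(CODON_TABLE[wt_codon], "->", CODON_TABLE[mut_codon])  # Trp -> Arg
```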
WT Galnt3-EGFP, which co-localized with the Golgi marker GM130 (Fig. 4A), was found to be expressed in the Golgi apparatus, whereas the expression pattern of the Arg589 mutant Galnt3 showed predominant co-localization with the endoplasmic reticulum (ER) marker protein disulphide isomerase (PDI) (Fig. 4A), thereby suggesting impaired trafficking and ER retention of the mutant protein. Further investigation of the in vivo functional consequences of this Arg589 Galnt3 mutation revealed an effect on glycosylation (Figs. 1 and 4B). Thus, incubation of kidney homogenates from WT littermates, Tcal/+ and Tcal/Tcal mice in the presence or absence of the deglycosylating enzyme PNGase F, and examination of the products by Western blot analysis using an anti-GALNT3 antibody, revealed that the kidney homogenates from both WT littermates and Tcal/+ G3 mice had three processed Galnt3 products, one of which was undetectable upon PNGase F digestion, thereby indicating that this was a glycosylated product; in contrast, the kidney homogenate from Tcal/Tcal mice lacked the glycosylated form of Galnt3, indicating defective glycosylation of the mutant protein (Fig. 4B).

Effects of Galnt3 Mutation on Gene Expression in Bone and Kidney
Femora and kidneys were obtained from 3 WT littermates (2 males and 1 female) and 3 Tcal/Tcal (2 males and 1 female) adult G5 mice that were >18 weeks of age. RNA was extracted and gene expression investigated by quantitative reverse transcriptase-PCR (qRT-PCR). Tcal/+ mice were not studied, as plasma biochemistry (Fig. 5) and areal BMD [34] analysis had not revealed any significant differences when compared to WT littermates. Data from Tcal/Tcal male and female mice were combined, as analysis of plasma biochemistry (Fig. 5) and areal BMD [34] had revealed similar abnormalities when compared to WT littermates. The expression of Galnt3 and Fgf23 was studied in femora, and this revealed that Tcal/Tcal mice had significantly increased expression of Galnt3 (Fig. 7A) and Fgf23 (Fig. 7B), by 1.8-fold and 19-fold, respectively, when compared to that in WT littermates. The higher Fgf23 expression contrasts with the lower circulating concentrations of Fgf23, suggesting that there is a loss of negative feedback in the Tcal/Tcal mice (Fig. 5E). The effects of the reduced circulating concentrations of Fgf23 on the renal expression of Klotho (Kl) [26] (Fig. 7C), vitamin D 1-alpha hydroxylase (Cyp27b1) (Fig. 7D) and the renal sodium-phosphate co-transporters Npt2a [28] (Fig. 7E) and Npt2c [29] (Fig. 7F) were investigated; however, these were found to be similar in Tcal/Tcal mice and WT littermates. Thus, the observed plasma biochemical abnormalities in phosphate (Figs. 5A and 5B) and 1,25-dihydroxyvitamin D (Fig. 5F) homeostasis could not be attributed to any possible effects of reduced plasma Fgf23 concentrations on renal expression of Npt2a, Npt2c and Kl, or Cyp27b1, respectively.
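The fold changes above come from qRT-PCR normalized to Gapdh. A standard way to compute such values is the 2^(-ddCt) method, sketched below under the usual assumption of approximately 100% amplification efficiency; the Ct numbers are illustrative, not the study's data.

```python
# 2^(-ddCt) fold change, assuming ~100% amplification efficiency.
def fold_change(ct_target_mut, ct_gapdh_mut, ct_target_wt, ct_gapdh_wt):
    d_ct_mut = ct_target_mut - ct_gapdh_mut  # normalize mutant Ct to Gapdh
    d_ct_wt = ct_target_wt - ct_gapdh_wt     # normalize WT Ct to Gapdh
    return 2.0 ** (-(d_ct_mut - d_ct_wt))

# Illustrative Ct values: a ~4.25-cycle shift corresponds to ~19-fold induction,
# the magnitude reported for Fgf23 in Tcal/Tcal bone.
print(fold_change(ct_target_mut=22.5, ct_gapdh_mut=18.0,
                  ct_target_wt=26.75, ct_gapdh_wt=18.0))  # ~19.0
```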
Discussion
Our study describes a mouse model (TCAL) with an ENU-induced Galnt3 mutation that has similarities to familial tumoural calcinosis (FTC) in man (Table 3). Thus, TCAL mice had hyperphosphataemia in association with ectopic calcification. Moreover, TCAL mice had increased circulating concentrations of 1,25-dihydroxyvitamin D, and decreased plasma intact Fgf23 concentrations. TCAL was inherited as an autosomal recessive disorder, consistent with the inheritance of FTC, and was due to a missense Trp589Arg Galnt3 mutation (Fig. 3B and 3C) that was induced by ENU, which is known to induce multiple mutations simultaneously [32]. However, the likelihood that another genetic defect within the 8.47 Mb region that was established to be the location of the Tcal locus (Fig. 3A) could be the underlying cause of TCAL is <0.01, based on the following reasoning. The nominal ENU-induced base pair mutation rate for potentially functional mutations has been estimated to be 1 in 1.82 Mb of coding DNA in the F1 founder animals [35] and, given that ~2.5% of the mouse genome is coding, it has been calculated that the probability of two functional mutations arising within a 5 Mb genomic region is <0.002 [36]; thus, the likelihood of the Galnt3 Trp589Arg and another functional mutation arising within the 8.47 Mb containing the Tcal locus is <0.004. This indicates that the Galnt3 Trp589Arg mutation, which was shown also to result in ER retention of the mutant protein (Fig. 4A), as well as defective glycosylation (Fig. 4B), is highly likely to be the sole genetic defect causing TCAL. Although the Trp589Arg missense Galnt3 mutation associated with TCAL in the mouse has not been identified in patients with FTC, it is important to note that Trp589 is conserved in both species and that the Trp589Arg mutation is representative of the 40% of GALNT3 abnormalities that are also missense mutations in patients with FTC and HHS [3,14-19].

During the course of our study, a Galnt3-deficient mouse was reported [31], and this mouse model and TCAL had some phenotypic features in common (Table 3). Thus, TCAL and Galnt3-deficient mice are characterized by the presence of hyperphosphataemia, decreased plasma alkaline phosphatase activity, reduced circulating intact Fgf23, increased Fgf23 gene expression in bone, increased whole body BMD in male mice, and male infertility due to loss of spermatozoa in seminiferous tubules. However, there are also important differences between TCAL and the Galnt3-deficient mice (Table 3), and these include: an absence of growth retardation in TCAL mice; elevated plasma 1,25-dihydroxyvitamin D concentrations in TCAL mice (Fig. 5F); normal plasma concentrations of calcium and PTH in TCAL mice; increased areal BMD in female TCAL mice; and ectopic calcification (Figs. 2A and 2B) in TCAL mice, which is a hallmark of FTC in man [4], but was notably absent in Galnt3-deficient adult mice, even when aged to 1 year [31]. The basis of these differences between TCAL and Galnt3-deficient mice remains to be elucidated. A possible explanation may involve strain-specific differences, as the TCAL mice were on a mixed C57BL/6J and C3H background, whilst the Galnt3-deficient mice were on a C57BL/6J and 129SvEv background [31]. In addition, ENU-induced mouse models have been reported to differ in phenotypic features when compared to the corresponding null mice generated using targeted gene ablation strategies [32]. For example, mice deficient for the fat mass and obesity associated (FTO) gene (FTO−/−) have been reported to have phenotypic differences when compared to mice that were homozygous for the ENU-induced hypomorphic mutant FTO I367F. Thus, FTO−/− and FTO I367F mice both have a reduction in adiposity and weight, but only FTO−/− mice show perinatal lethality and age-related reduction in size and length [37]. Another possibility that may contribute to these differences in severity of the phenotype may be related to the functions of other GALNTs, e.g. Galnt6, which can partially compensate for the loss of Galnt3 [31].
However, it is also important to note that there is significant variability in the clinical manifestations amongst FTC patients (Table 3). For example, FTC in man has a variable age of onset with variation in the severity of calcified lesions, such that some patients suffer from large extra-skeletal lesions that require surgery [4,38], whilst others have mild disease that may be asymptomatic [14]. In addition, GALNT3 mutations in man may result in the hyperostosis-hyperphosphataemia syndrome (HHS) [15,17,23,24], in which cortical hyperostosis is a notable feature. However, the same GALNT3 mutation may be associated with FTC and HHS in members of the same family or in unrelated families [6,18,19,24]. Indeed, FTC and HHS are considered to be allelic variants, and the situation between TCAL and Galnt3-deficient mice may be analogous. Thus, TCAL mice had ectopic calcifications (Fig. 2) and thickening of cortical bone (Table 2), consistent with FTC and HHS, whilst Galnt3-deficient mice did not have soft tissue calcification, but only had thickening of cortical bone, consistent with isolated HHS.

The three most notable differences between human FTC and mouse TCAL are the findings of decreased plasma alkaline phosphatase activity and male infertility in TCAL mice, which are not found in man, and the occurrence of smaller tumoural calcinosis lesions in TCAL mice. Interestingly, the Galnt3-deficient mouse was also reported to have these differences, and the basis of these inter-species phenotypic differences remains to be elucidated. The observation of decreased plasma alkaline phosphatase activity has, in the Galnt3-deficient mice, been attributed to the reported increased bone mineralization in these mutant mice [31]. Given the reported increased BMD [34] (Table 3) in the Tcal/Tcal mice, it would seem probable that the decreased plasma alkaline phosphatase activity in the Tcal/Tcal mice is also a reflection of increased bone mineralization. Tcal/Tcal and Galnt3-deficient male mice had infertility, and it is important to note that recent studies indicate that this is not due to the hyperphosphataemia, as normalizing the serum phosphate concentrations in Galnt3-deficient mice by use of a low-phosphate diet failed to correct the infertility [39]. Infertility in males with FTC or HHS is not a notable feature. However, one boy with FTC has been reported to have testicular microlithiasis associated with oligoazoospermia, and histology revealed that the calcifications were localized to the lumen of the seminiferous tubules and the interstitium [21]. The FTC in this boy and his family did not co-segregate with an autoimmune disorder, which resulted in arthralgia, vasculitis and chronic immune thrombocytopenic purpura, thereby indicating that the oligoazoospermia was not due to the autoimmunity [21]. GALNT3 is highly expressed in the testis, and its loss may cause deposition of calcium in the testis; indeed, it has been suggested that testicular calcification may be an underestimated feature of FTC [21].

Figure 3. (A) The Tcal locus, which originated in a C57BL/6 ENU-mutagenised male and is hence inherited with the C57BL/6 alleles, was mapped to an 8.47 Mb region flanked by the SNPs rs28002552 and rs4223216 on chromosome 2C1.3-C2. This region contained 95 genes, which included the Galnt3 gene.
(B) DNA sequence analysis of Galnt3 identified a T to A transversion in codon 589, such that the wild-type (WT) sequence TGG, which encodes an evolutionarily conserved tryptophan (Trp) residue, was altered to the mutant (m) sequence AGG, which encodes an arginine (Arg) residue. (C) Amplification refractory mutation system (ARMS) PCR was used to confirm the presence of the mutation by designing primers (n, normal (WT) and m, mutant) that yielded 307 bp WT and 230 bp mutant PCR products, respectively. PCR amplification of Gapdh was used as a control for the presence of DNA. N = numbers of mice with each genotype. (D) Protein sequence alignment (CLUSTALW) of Galnt3 from 5 species revealed that the Trp (W) residue is evolutionarily conserved in the Galnt3 orthologues of mouse, human, monkey, xenopus and zebrafish. doi:10.1371/journal.pone.0043205.g003

Figure 4. (A) COS-7 cells were transiently transfected with EGFP-wild-type (WT) or EGFP-mutant (Arg589) constructs, and counterstained with anti-GM130 antibody, which immunostains the Golgi apparatus (red), or anti-PDI antibody, which immunostains the ER (red). DAPI was used to stain the nucleus (blue). WT Galnt3 co-localizes with GM130, but not PDI (data not shown), thereby revealing that it is targeted to the Golgi apparatus. However, the mutant Galnt3 co-localizes with PDI and is predominantly found in the ER. (B) Western blot analysis of kidney homogenates using anti-GALNT3 antibody revealed that protein lysates from WT littermates and Tcal/+ mice had three immunoreactive products (a, b and c) whereas those from Tcal/Tcal mice had only two products (b and c). PNGase F treatment resulted in loss of the largest Galnt3 product (band a) observed in the lysates from WT littermates and Tcal/+ mice, indicating that these were glycosylated products. doi:10.1371/journal.pone.0043205.g004

The differences in the sizes of the calcinosis lesions between human FTC and the Tcal/Tcal and Galnt3-deficient mouse models may in part be attributed to the observed variability of FTC lesions in man [4,14,38]. However, they may also be related to dietary phosphate intake. For example, Galnt3-deficient mice placed on diets containing either 0.1% (low), 0.3% (low normal), 0.6% (normal) or 1.65% (high) phosphate developed a significant increase in serum calcium concentrations when on the high-phosphate diet [39], although Galnt3-deficient mice, aged to 1 year, have not been observed to develop ectopic calcification when on a 0.93% phosphate diet [31]. It has been postulated that the hypercalcaemia induced by the 1.65% (high) phosphate diet is likely to contribute to the overall increase in calcium-phosphate products and subsequently to ectopic calcifications. Thus, it seems possible that the variability in the size of the tumoural calcinosis lesions in man may be related to dietary phosphate, with high intake being associated with the larger lesions. Another possibility that may contribute to the variability in the size of the tumoural calcinosis lesions may involve a response to injury. For example, it has been suggested that early calcinosis lesions are triggered by injury and bleeding, with subsequent aggregation of foamy histiocytes, which become transformed into cystic cavities lined by osteoclast-like giant cells, and surrounded by monocytes and iron-loaded macrophages [40,41].
Studies investigating the responses to injury and the underlying inflammatory and immune mechanisms in the Tcal/Tcal mice, which have calcinosis lesions, and in the Galnt3-deficient mice, which do not have calcinosis lesions (Table 3), may help to elucidate the basis of these differences.

Figure 7. Analysis of gene expression in bone and kidneys. RNA from femora and kidneys was extracted from WT littermates (black) (2 males and 1 female) and Tcal/Tcal (white) (2 males and 1 female) adult mice, aged 18-20 weeks. Quantitative reverse transcriptase-PCR (qRT-PCR) was used to study the expression of: (A) Galnt3 and (B) Fgf23 in femora; and (C) Kl, (D) Cyp27b1, (E) Slc34a1, and (F) Slc34a3 in kidneys. Samples were analysed in triplicate (n = 3 mice for each group, i.e. a total of 9 samples) and mRNA levels were normalized to Gapdh and expressed as fold change (mean ± SEM) compared to WT. The data from males and females were combined, as differences in plasma biochemical analysis between the genders had not been observed (Fig. 5). The expression of Galnt3 and Fgf23 was significantly increased in the bone of Tcal/Tcal mice when compared to that of WT littermates; however, the expression of the renally expressed genes Kl, Cyp27b1, Slc34a1 and Slc34a3 was not significantly different in the Tcal/Tcal mice compared to WT littermates. P-values are from unpaired Student's t-test (*p<0.05, **p<0.01). doi:10.1371/journal.pone.0043205.g007

GALNT3 belongs to a large family of Golgi-resident glycosyltransferases that initiate mucin-type O-glycosylation, one of the most abundant forms of protein glycosylation found in eukaryotic cells [42,43]. Structurally, GALNT3 consists of an N-terminal transmembrane domain, a central catalytic (glycosyltransferase) domain and a C-terminal ricin (carbohydrate binding) domain (Fig. 1). FTC and HHS mutations are distributed throughout the GALNT3 gene, with no evidence for clustering. The Trp589Arg missense mutation identified in TCAL mice is situated in the carbohydrate-binding domain (Fig. 1), which is characterized by the presence of QXW (glutamine-any amino acid-tryptophan) repeats [44], two of which are present in both the human and mouse GALNT3 orthologues. The Trp589Arg mutation in TCAL mice alters the tryptophan residue in the first repeat. Each QXW repeat forms an omega loop, and it has been suggested that these could be important for post-translational protein folding and stabilization, and for carbohydrate binding [44]. Indeed, our in vitro and in vivo studies of the Galnt3 Trp589Arg mutant, which alters the tryptophan in the first QXW repeat, demonstrated such roles for the QXW repeats by showing impaired trafficking of the mutant protein, with its retention in the endoplasmic reticulum (Fig. 4A), and defective glycosylation of the mutant Galnt3 protein in kidney lysates from Tcal/Tcal mice, respectively (Fig. 4B). Studies of null mouse models of FGF23 [30], vitamin D-1-alpha hydroxylase [45], klotho [46] and NPT2a [28] have established that FGF23 reduces serum phosphate levels by suppressing phosphate reabsorption in proximal kidney tubules [47], thereby playing a key role as a regulator of phosphate metabolism. FGF23 is O-glycosylated by GALNT3 to protect it from proteolytic cleavage [10,48], and the underlying molecular mechanism causing FTC and HHS in patients with GALNT3 mutations involves defective glycosylation of FGF23, resulting in enhanced cleavage and inactivation of FGF23 [47].
Our results, which reveal a reduction in circulating concentrations of intact full-length Fgf23 in Tcal/Tcal mice, indicate that the Trp589Arg Galnt3 mutation is an inactivating mutation whose loss-of-function releases the inhibition on 1,25-dihydroxyvitamin D synthesis [49], as observed by increased plasma concentrations of 1,25-dihydroxyvitamin D. Furthermore, our in vivo results, which show a 1.8-fold increase in bone expression of Galnt3 in Tcal/Tcal mice in response to chronic hyperphosphataemia, are in agreement with in vitro studies which showed that GALNT3 gene expression can be induced by administration of extracellular phosphate to cultured human fibroblasts [50]. Moreover, our analysis, which revealed extensive apoptosis in testis (Fig. 2D) and kidney (Fig. 2E) in association with the prevailing hyperphosphataemia in Tcal/Tcal mice, is in agreement with in vitro studies that have reported that high levels of extracellular phosphate are a potent inducer of oxidative stress and apoptosis in cultured human endothelial cells [51] and osteoblast-like cells from human bone explants [52]. In summary, our study has identified a mouse model for autosomal recessive FTC due to an ENU-induced missense mutation (Trp589Arg) in Galnt3; this will help to elucidate further the molecular mechanisms of FTC and provide a model for investigating novel treatments.

Generation of Mutant Mice
Male C57BL/6J mice were treated with ENU and mated with untreated C3H female mice [32]. The male progeny (G1) were subsequently mated with normal C3H females to generate G2 progeny. The female G2 progeny were backcrossed to the G1 fathers and the resulting G3 progeny [32] were screened at 12 weeks of age for recessive phenotypes. Mice were fed an expanded rat and mouse no. 3 breeding diet (Special Diets Services, Witham, UK) containing 1.15% calcium, 0.82% phosphate and 4088.65 units/kg vitamin D, and given water ad libitum. Wild-type littermates were used as controls, as these would have similar random assortments of segregating C57BL/6J and C3H alleles to those of the mutant mice, thereby minimising any strain-specific influences.

Plasma Biochemistry
Blood samples were collected from the lateral tail vein of mice [53] that had fasted for 4 hours. Plasma samples were analysed for total calcium, inorganic phosphate, alkaline phosphatase activity, urea, creatinine and albumin on a Beckman Coulter AU680 semi-automated clinical chemistry analyzer using the manufacturer's instructions, parameter settings and reagents, as described [53]. Plasma calcium was adjusted for variations in albumin concentrations using the formula: ((albumin − mean albumin) × 0.02) + calcium, as described [54]. For analysis of PTH, FGF-23 and 1,25-dihydroxyvitamin D, blood samples were collected from the retro-orbital sinus after terminal anaesthesia, and plasma was separated by centrifugation at 3000 g for 5 min at 4 °C. PTH was quantified using a two-site ELISA kit (Immunotopics, California, USA), intact FGF-23 was quantified using a two-site ELISA kit (Kainos Laboratories, Tokyo, Japan), and 1,25-dihydroxyvitamin D was measured using an assay system (Immunodiagnostic Systems, Boldon, UK) involving purification by immunoextraction followed by quantification by enzyme immunoassay.
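A direct transcription of the albumin adjustment above, with the formula exactly as given in the text; the reference mean albumin and the example values are illustrative inputs, not numbers from the paper.

```python
def albumin_adjusted_calcium(calcium_mmol_l, albumin_g_l, mean_albumin_g_l):
    """Adjusted calcium = ((albumin - mean albumin) * 0.02) + calcium, per the text."""
    return (albumin_g_l - mean_albumin_g_l) * 0.02 + calcium_mmol_l

# Illustrative values only: a mouse with total calcium 2.40 mmol/l and albumin
# 2 g/l above the cohort mean receives a 0.04 mmol/l adjustment.
print(albumin_adjusted_calcium(2.40, albumin_g_l=30.0, mean_albumin_g_l=28.0))  # 2.44
```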
Images were processed using the DicomWorks software (http://www.dicomworks.com/). For micro-CT scanning, formalin-fixed, undecalcified tibiae were used and analysed by a micro-CT scanner (model 1172a, Skyscan) at 50 kV and 200 µA, utilizing a 0.5 mm aluminium filter and a detection pixel size of 17.4 µm². The proximal tibia was scanned to measure trabecular bone [56], using a detection pixel size of 4.3 µm², and images were scanned every 0.7° through a 180° rotation. Scanned images were reconstructed using Skyscan NRecon software and analyzed using the Skyscan CT analysis software (CT Analyser v1.8.1.4, Skyscan). A volume of 1 mm³ of trabecular bone 0.2 mm from the growth plate was chosen. Trabecular bone volume as a proportion of tissue volume (BV/TV, %), trabecular thickness (Tb.Th, mm × 10⁻²), trabecular number (Tb.N, mm⁻¹) and structure model index (SMI) were assessed in this region using the CT analysis software.

Mapping, DNA Sequence Analysis and Genotyping
Genomic DNA was extracted from tail or auricular biopsies, as described [53]. For genome-wide mapping, genomic DNA was amplified by PCR using a panel of 91 single nucleotide polymorphism (SNP) loci arranged in chromosome sets, and the products were analysed by pyrosequencing [55]. Individual exons of Galnt3 were amplified from genomic DNA by PCR using gene-specific primers and Taq PCR Mastermix (Qiagen, Crawley, UK), and the PCR products were sequenced using BigDye terminator reagents and an ABI 3100 sequencer (Life Technologies, Carlsbad, USA). For genotyping, DNA was amplified by ARMS PCR using Taq PCR Mastermix (Qiagen, Crawley, UK) and specific primers for the wild-type (F: GACCATCGCCCCTGGAGAACAGACAT, R: AGAAGTTTTTCACCTACAGAAGCCAAGCGT) and mutant (F: CTTGTTTTATTTTGCAACTGGGCACAC, R: GAGCCAATCACCTTCCGAATCTCTCT) Galnt3 sequences, and for glyceraldehyde 3-phosphate dehydrogenase (Gapdh) (F: CTCAGCTCCCCTGTTTCTTG, R: GGAAAGCTGAAGGTGACGG); the products were separated by agarose gel electrophoresis before image acquisition using a Gel Doc™ UV transilluminator (Bio-Rad, Hemel Hempstead, UK) [33].

In vitro and in vivo Expression Studies of Wild-type and Mutant Galnt3
A full-length mouse wild-type Galnt3 cDNA was amplified from an IMAGE clone (IMAGE: 5342768) with Pfu Ultra II fusion (Agilent Technologies, Stockport, UK) using the forward primer and the reverse primer (5'-AGTGGATCCGAATCATTTTGGCTAAAAATCCATT-3'), and the PCR product was sub-cloned into pEGFP-N1 (Clontech, Saint-Germain-en-Laye, France) [58]. The Galnt3 mutation was introduced using site-directed mutagenesis with the forward primer 5'-GGAGAACAGATAAGGGAGATTCGGA-3' and its reverse complement, and sequence analysis of the constructs was undertaken using previously reported methods [58]. The wild-type and mutant Galnt3 constructs were transiently transfected into COS-7 cells using FuGENE 6 reagent (Roche, Welwyn Garden City, UK) and 1 µg of each construct, as previously described [59], and expression was visualized by immunofluorescence [60]. Briefly, transfected cells cultured on glass coverslips were fixed, permeabilized, blocked and incubated with either mouse anti-Golgi matrix protein (GM130) (BD Bioscience, Oxford, UK) or mouse anti-protein disulphide isomerase (PDI) (Enzo Life Science, Exeter, UK) diluted 1:500. The secondary antibody was AlexaFluor 594 goat anti-mouse (Invitrogen, Paisley, UK) diluted 1:500.
Coverslips were mounted onto slides in VECTASHIELD® mounting medium with DAPI (Vector Laboratories, Peterborough, UK) and visualized by confocal microscopy using a Leica TCS SP5 confocal system attached to a DMI 6000 microscope. Western blot analysis was performed using equal amounts of protein from kidney homogenates that were pre-incubated for 1 h at 37°C in the presence or absence of peptide:N-glycosidase F (PNGase F), mixed with LDS sample buffer (Invitrogen, Paisley, UK) before separation by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and electroblotting onto nitrocellulose membrane (Schleicher and Schuell, Dassel, Germany) [60]. Membranes were probed with a rabbit polyclonal anti-human GALNT3 antibody (Sigma-Aldrich, Dorset, UK) followed by HRP-conjugated anti-rabbit IgG (Bio-Rad, Hemel Hempstead, UK) and ECL detection (GE Healthcare, Little Chalfont, UK). The membrane was stripped and re-probed with HRP-conjugated mouse anti-GAPDH antibody (Abcam, Cambridge, UK) as a loading control [55].

In vivo Gene Expression Studies
Total RNA was isolated from kidneys using the RNeasy mini kit (Qiagen, Crawley, UK). For extraction from bones, femora were pulverised under liquid nitrogen, homogenised in QIAzol lysis
Modelling the impact of a smallpox attack in India and influence of disease control measures

Objectives: To estimate the impact of a smallpox attack in Mumbai, India, examine the impact of case isolation and ring vaccination for epidemic containment and test the health system capacity under different scenarios with available interventions.

Setting: The research is based on the population of Mumbai, India.

Interventions: We tested 50%, 70% and 90% case isolation and contacts traced and vaccinated (ring vaccination) in the susceptible, exposed, infected, recovered model, and varied the start of intervention between 20, 30 and 40 days after the initial attack.

Primary and secondary outcome measures: We estimated and incorporated in the model the effect of past vaccination protection, age-specific immunosuppression and contact rates, and the Mumbai population age structure, in modelling disease morbidity and transmission.

Results: The estimated duration of an outbreak ranged from 127 days to 8 years under different scenarios, and the number of vaccine doses needed for ring vaccination ranged from 16 813 to 8 722 400 in the best-case and worst-case scenarios, respectively. In the worst-case scenario, the available hospital beds in Mumbai would be exceeded. The impact of a smallpox epidemic may be severe in Mumbai, especially compared with high-income settings, but can be reduced with early diagnosis and rapid response, high rates of case finding and isolation, and ring vaccination.

Conclusions: This study tells us that if smallpox re-emergence occurs, it may have significant health and economic impact, the extent of which will depend on the availability and delivery of interventions such as a vaccine or antiviral agent, and the capacity for case isolation and treatment. Further research on health systems requirements and capacity across the diverse states and territories of India could improve the preparedness and management strategies in the event of re-emergent smallpox or other serious emerging infections.

INTRODUCTION
India is the second-most populous country in the world, with several megacities, such as Mumbai, Delhi and Chennai, where people live in close proximity at high population density. Infectious disease epidemics are common in India; for example, H1N1pdm09 has been causing recurrent, severe epidemics since 2009. 1 2 A study of the phylogeography of influenza H1N1pdm09 in India showed that most transmission around the country originated from Maharashtra. 3 Any respiratory-transmissible infectious disease can spread rapidly, especially in an urban population. A biological attack with a respiratory-transmissible agent such as smallpox could have a serious impact in India, due to high contact rates and population density. 4 The last natural case of smallpox occurred in 1977, and India was at that time the epicentre of smallpox globally. 5 Smallpox was declared eradicated globally in 1980, but recent developments in the synthetic biology of orthopoxviruses have increased the risk of re-emergence of the variola virus (VARV). [6][7][8][9] Stocks of live VARV are currently held in two WHO collaborating centres in the USA and the Russian Federation, but the virus could also be created synthetically. 7 10 Before smallpox eradication, nearly 60% of unvaccinated close contacts or secondary household contacts of smallpox cases were infected, and airborne transmission was also observed. 11 India was one of the most challenging settings for the global eradication campaign, which began in 1967 using mass vaccination as a strategy. 12
Strengths and limitations of this study
► The model takes into account heterogeneity of age, disease transmission and immunological levels.
► Age-specific rates of immunosuppressive conditions were estimated for Mumbai and included in the model.
► This study does not include routes of transmission other than airborne spread.
► Other aspects that could influence transmission or vaccination effectiveness, such as seasonality and vaccine refusal, were not included in the model.

India had its first lymph smallpox vaccine in the 19th century, 10 13-16 and at the beginning of the 20th century (1900-1947) many research institutes in India started manufacturing smallpox vaccine as lymph. 17 In the previous century, the 'Government of India Act of 1919' had introduced a system of dual government for the British India provinces, with the executive branch of each provincial government divided into a popularly responsible section and an authoritarian section. 18 19 This resulted in fragmentation of authority, due to the transfer of various areas of administration from federal ministers to local government, including education, agriculture, public works and public health. 18 19 Local governments were responsible for providing public health programmes, such as smallpox vaccination. 17 However, insufficient financial support from local authorities to finance vaccination led to low uptake of smallpox vaccination. 17 Patchy vaccination efforts continued until the start of World War II (1939) and became worse during the war. 17 World War II led to a further resurgence of smallpox in India in the period 1944-1945. 17 However, an increased focus on smallpox vaccination after the war resulted in a decrease in cases. 17 Due to the inability to achieve high coverage with mass vaccination in India, the WHO made the decision to change from mass vaccination to surveillance and containment (active case searching) with ring vaccination, first in Africa and then in India; this thereafter became the mainstay of the global eradication strategy. 17 20 The ring vaccination strategy limits the spread of disease by vaccinating close or direct contacts of diagnosed cases, who are most likely to be infected. 21 Smallpox is a highly infectious disease, which can be caused by two different variants, variola minor and variola major. The first presents with much milder symptoms and a case fatality rate (CFR) of about 1%, while variola major had a CFR greater than 30%, [22][23][24] with the risk of death higher among infants, 25 26 older people 27 and the immunosuppressed. The impact of smallpox re-emergence is affected by residual vaccine immunity and immunosuppression. 28 Smallpox vaccine immunity wanes over time, possibly as rapidly as within 5 years. 29 30 People with multiple primary vaccinations may have greater protection, up to 10-20 years or longer. 31 32 However, it is unclear how long protection lasts after multiple vaccinations. Nearly 40 years since mass vaccination programmes ceased, residual vaccine immunity is likely to be minimal. 31 32 Routine smallpox vaccination has not occurred in India since eradication was declared in 1980. 29 Population immunity in India is therefore likely to be low. There has been limited research on population-level smallpox immunologic status and residual vaccine immunity in India.
Furthermore, health system capacity in India in the event of resurgent smallpox will be a challenge in remote, rural and urban settings alike, as will coordinating the public health response across a largely privatised health sector. 33

Aims
The aim of this study is to estimate the impact of smallpox re-emergence in Mumbai, India under different scenarios with available interventions.

METHODS
The scenario is a deliberate, large-scale attack, with 1000 cases of smallpox occurring simultaneously in Mumbai. A large-scale attack was used to test the worst-case scenario. We assumed that the virus used in the biological attack is variola major; the circulation of variola minor is therefore not considered in this analysis. We used a susceptible, exposed, infected, recovered model for smallpox transmission 28 34 35 to simulate a smallpox outbreak in Mumbai. The model assumes an overall rate of transmission from person to person based on observed epidemiology, as described below, but does not differentiate modes of transmission (such as airborne, fomite or contact). In the model, the population was categorised into vaccinated and unvaccinated compartments, and these compartments were further split into severely immunosuppressed, mildly immunosuppressed and immunocompetent groups. The model contains ordinary differential equations that shift the population between epidemiological states: susceptible, infected, infectious, recovered and dead. 27 The susceptible and latent compartments in the model are matrices of 6 rows and 18 columns, where the rows represent the different immunity levels or disease severities and the columns are the age groups, while the infectious compartment is a matrix of 4 rows and 18 columns, representing smallpox disease types and age groups, respectively. 27 Mumbai was selected because transmission studies of influenza (which, like smallpox, is transmitted by the respiratory route) show it to be the epicentre of respiratory transmission. 3 We then simulated the outbreak response in order to explore the duration of the epidemic, the vaccine doses needed and the required health system capacity in Mumbai. Pre-exposure vaccine efficacy has been estimated and reported by WHO as between 91% and 97% for first-generation vaccines, used in the pre-eradication era, while the second-generation vaccines, now stockpiled in most countries, have an estimated efficacy between 96% and 99%. 36 In this study, we assumed a vaccine efficacy of 95% and 98% for people never vaccinated and previously vaccinated, respectively. In the case of vaccine given as post-exposure prophylaxis for contacts, we approximately halved the efficacy, to 50% and 53%, respectively. We estimated the total vaccine doses required and the number of hospital beds required for both the best-case and worst-case scenarios. The number of doses was compared with the available WHO stockpile of smallpox vaccine 37 in order to determine whether the stockpile is sufficient to control the epidemic in Mumbai. We also determined the duration of the epidemic under different scenarios. The number of hospital beds was compared with the available beds in Mumbai 33 to identify in which scenarios the beds would be insufficient. The model accounts for different infectivity and susceptibility for immunocompromised people and healthcare workers (HCWs). 28 34 The population data and contact rates for Mumbai used to inform the model were estimated as described below.
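The full model is age-structured (18 age groups) and stratified by vaccination and immunosuppression, with its parameters given in the supplemental material. Purely as an illustrative sketch of the compartmental backbone, a minimal single-population SEIR system can be written as follows; the R0 and period values are placeholders drawn from figures quoted elsewhere in this paper (ordinary-smallpox R0 of 7.96 and a roughly 12-day incubation), not the paper's calibrated parameter set.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, n):
    """Minimal unstructured SEIR; the paper's model adds age,
    vaccination and immunosuppression strata on top of this."""
    s, e, i, r = y
    new_infections = beta * s * i / n
    return (-new_infections,                 # susceptible
            new_infections - sigma * e,      # exposed (latent)
            sigma * e - gamma * i,           # infectious
            gamma * i)                       # recovered/removed

n = 20_000_000                                       # approximate Mumbai population
r0, latent_days, infectious_days = 7.96, 12.0, 8.0   # illustrative values only
beta, sigma, gamma = r0 / infectious_days, 1 / latent_days, 1 / infectious_days
y0 = (n - 1000, 1000, 0, 0)                          # 1000 initially exposed by the attack
t = np.linspace(0, 365, 366)
s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma, n)).T
print(f"unmitigated peak: {i.max():,.0f} infectious on day {i.argmax()}")
```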
The model diagram, differential equations and all parameters used (see online supplemental table 1) are listed in the online supplemental material 1.

Population, healthcare workers and hospital beds
We used an estimated total population for Mumbai, India in 2019, 38 with ages distributed following the age-specific percentages of the Mumbai population. 39 The model uses 18 age groups, each 5 years wide up to 84 years old, with an additional age group of 85+ years. We estimated the number of HCWs in India, consisting of physicians, nurses and midwives, who accounted for 0.29% of the total Indian population in 2015, and applied this estimate to the Mumbai population. 40 To distribute the HCWs across age groups, we used the age distribution of nurses in Mumbai, 41 as they represent the largest part of the HCW population. We estimated the number of hospital beds in Mumbai, and the proportion of beds in private hospitals, using available data sources. 42 43

Contact matrix
We used an estimated age-specific contact matrix for India. 4 The matrix is represented in 5-year age groups from 0-4 up to 70-74 years old, with the remaining contacts in one upper age group (75+ years). 4 Since our model uses a 15 age-group contact matrix and the India matrix is available in 16 age groups, we took the mean of the last two age groups and fitted 15 age-group contact rates in the model (see online supplemental table 2). 4 27

Previously vaccinated population
We assumed that about 70% of the population of India in the 40-69 years age group (born before 1977) were previously vaccinated (see online supplemental table 3), 17 considering the fact that smallpox was epidemic in India in 1974 44 and a higher proportion of Indian populations born before 1977 are vaccinated. 20 We considered that vaccination had stopped in India after 1977, since the last case of smallpox in India was seen in May 1975 20 and India was declared free from smallpox in 1977. 17 Immunity against smallpox wanes by 1.41% per year after vaccination. 27 30 Using this rate of waning, we calculated the age-specific residual protection by multiplying 1.41% by the number of years since vaccination and subtracting the result from 100% effectiveness for vaccinated people aged 40-69 years. 27 We considered that vaccine immunity wanes over time, and people vaccinated prior to 1980 and now aged over 69 years were assumed to have zero residual immunity against smallpox. 27

Immunosuppressed population
A minimum estimate of immunosuppression in India was made using HIV infection, cancer chemotherapy, steroid treatment for asthma and chronic obstructive airways disease, organ transplantation and autoimmune diseases, using a previously published method for estimating immunosuppression. 27 The total number of patients with HIV in India was estimated to be 2.14 million in 2017, which is 0.1597% of the total population. 45 We used an estimated age-specific distribution from 2009. 46 However, this study divides the HIV prevalence into only three age groups, which we subdivided equally across our 18 age groups. We estimated the cancer prevalence to be 0.08391% in India in 2015. 47 We distributed this across age groups in the model using age-specific cancer prevalence data from 2014. 47 Around 7715 solid organ transplants were performed in India in 2015, 48 representing 0.00058938% of the total population. We used the age-specific transplant distribution estimated for the US population, as Indian data were not available. 49
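Before turning to the remaining conditions, note that several of these inputs require spreading a prevalence reported for a few broad age bands across the model's 18 five-year groups; the HIV estimate above, for example, is reported in only three bands and subdivided equally. One natural reading of that step is sketched below; the band boundaries and prevalence values are hypothetical placeholders, not the figures actually used in the paper.

```python
# Spread prevalences reported for broad age bands across the model's
# 18 five-year age groups (0-4, 5-9, ..., 85+): each band's prevalence
# is assigned uniformly to every five-year group it covers.
def redistribute(bands):
    """bands: list of (first_group, last_group, prevalence) tuples,
    with group indices 0..17 inclusive."""
    groups = [0.0] * 18
    for first, last, prevalence in bands:
        for g in range(first, last + 1):
            groups[g] = prevalence
    return groups

# Hypothetical three-band input (indices 0-2 ~ 0-14 y, 3-9 ~ 15-49 y, 10-17 ~ 50+ y)
print(redistribute([(0, 2, 0.0002), (3, 9, 0.0030), (10, 17, 0.0010)]))
```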
Asthma and chronic obstructive pulmonary disease (COPD) were estimated to affect 5.47% and 1.30% of the total Indian population in 2015. 50 51 However, most patients in India would not receive oral or inhaled corticosteroids compared with a high-income country; we therefore assumed that one-third of asthma and COPD patients, that is, 1.82% and 0.43% respectively, would be treated with corticosteroids. We estimated the number of people with asthma and COPD for the given age groups, divided these equally across 5-year age groups, and then estimated the prevalence percentage with respect to the population in each age group. Persons living with autoimmune diseases in India are estimated to be 7.96% of the total population. [52][53][54][55][56][57][58][59] As most people in India with autoimmune disease would not have access to immunosuppressive drugs, we assumed one-third of the total, that is, 2.65% of people, would be treated with immunosuppressive drugs. We distributed this prevalence using the average age-specific distribution estimated from Spanish and US rheumatoid arthritis incidence. 60 61 These studies divide the incidence into seven age groups over the entire population, which we adapted to 18 age groups. Accordingly, the model was fitted with the immunosuppressed proportion estimated for India, adjusted to the Mumbai population.

Smallpox disease types
Once infected, we assumed four different types of smallpox disease: vaccine-modified, ordinary, flat and haemorrhagic smallpox. We assumed that each disease type has a different infectivity (R0), a different CFR and a different age-specific distribution depending on the immunological status of the infected person, as outlined in our previous study and in online supplemental table 6. 34 Infection with haemorrhagic and flat smallpox has the highest infectivity, with R0 = 10; 62 however, we used R0 = 5 to account for the isolation of severely ill patients. For ordinary smallpox, we assumed R0 = 7.96, estimated from a detailed study of an outbreak in an unvaccinated community in Nigeria, 63 and for modified smallpox we assumed R0 = 5.3 (two-thirds of the R0 estimated for the ordinary type). Because of milder symptoms, we accounted for isolation by halving R0 from the third and fourth day for ordinary and modified smallpox, respectively. Data from historical outbreaks 31 64 show that persons infected with haemorrhagic, flat and vaccine-modified smallpox have CFRs of 95%-100%, 90% and 0%, respectively, while for ordinary smallpox the CFR is age-specific. 31 In our study, to take into account better access to healthcare, we assumed the same CFR for ordinary and vaccine-modified smallpox, but slightly lower CFRs for haemorrhagic and flat cases, of 90% and 75%, respectively. All CFRs are shown in the online supplemental material 1. Distribution rates of each disease type for healthy unvaccinated people were derived from available data in the Rao study 31 by linear interpolation of the available age groups. While severely immunosuppressed people were assumed to develop only haemorrhagic smallpox, for mildly immunosuppressed people we doubled the rates of haemorrhagic and flat disease estimated for healthy unvaccinated people. For the previously vaccinated subgroup, we estimated that 25.3% of vaccinated persons develop vaccine-modified smallpox 31 and we applied a waning immunity rate of 1.41% per year following vaccination. 65 Age-specific rates of each disease type by immunity level are shown in the online supplemental material 1.
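Because each case is assigned one of the four disease types, the overall fatality risk in any stratum is a weighted average of the type-specific CFRs. The sketch below illustrates that mixture calculation; the type shares and the 30% ordinary-smallpox CFR are hypothetical round numbers for a single stratum, not the age- and immunity-specific rates the model interpolates from Rao's data.

```python
# Hypothetical disease-type mix for a single age/immunity stratum.
type_share = {"ordinary": 0.85, "modified": 0.05, "flat": 0.06, "haemorrhagic": 0.04}
cfr = {"ordinary": 0.30,        # placeholder; age-specific in the model
       "modified": 0.30,        # set equal to ordinary, as in the study
       "flat": 0.75,
       "haemorrhagic": 0.90}

overall_cfr = sum(type_share[k] * cfr[k] for k in type_share)
print(f"stratum-level CFR ~ {overall_cfr:.1%}")   # ~35% with these shares
```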
Sensitivity analysis
A sensitivity analysis was conducted on the number of people initially infected, the percentages of cases isolated and contacts vaccinated, and the time to starting the response (see online supplemental table 4). We varied the initial attack size between 50 and 100 000 to determine the effect of the epidemic without intervention in Mumbai. The total recovered, infected and death rates were estimated for Mumbai under several scenarios. We tested 50%, 70% and 90% case isolation and contacts traced and vaccinated (ring vaccination). We varied the start of intervention between 20, 30 and 40 days after the initial attack. We defined 'epidemic control' as being able to reduce the daily number of newly infected people per infectious person; in this study, an epidemic is therefore defined as under control when the infectious incidence is decreasing, and we estimated the critical threshold proportions of cases isolated and contacts traced needed to reduce transmission. 9 The threshold value at which epidemic control is lost 9 was also estimated through simulation of the model at values between 50% and 60% of case isolation and contact vaccination.

Patient and public involvement
Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.

RESULTS
The population of Mumbai is over 20 million. Mumbai has a very young population compared with developed countries, with 83% of the total population aged between 0 and 49 years, 38 39 which is the age range with the highest transmission rates. 8 We estimated 58 537 HCWs in Mumbai. We estimated there are 40 000+ hospital beds in Mumbai, with 50% of those beds in private hospitals.

Immunosuppressed population
We estimated a rate of 5.14% immunosuppression in India, with a higher percentage of immunosuppressed people in the 50+ age group (see online supplemental table 5) and the highest immunosuppression observed in persons 85+ years old.

Impact of response time and interventions
For a fixed number of 1000 people initially infected by the attack, with the best-case scenario of 90% of contacts vaccinated and isolation of 90% of infectious cases (figure 1), and with interventions starting on day 40 after the attack, the infectious incidence peaks at 1456 people and it takes over 176 days to contain the epidemic. With intervention starting on day 40, a total of 22 040 people will be infected, with a corresponding increase in required vaccine doses (table 1). A total of 2319 deaths by day 50, 7646 deaths by day 100 and 9472 deaths by day 150 will be observed when the intervention starts on day 40. Figure 1 shows that as rates of case isolation and contact vaccination fall, the epidemic becomes more severe, with a large difference between the 70% and 50% rates. If ring vaccination and isolation decrease to 50% of contacts and cases, respectively (figure 1), the epidemic is more severe, resulting in very high infection and death rates, a very long epidemic and a total of 761 900 people infected. The large difference between the 50% and 70% scenarios for case isolation and vaccination, compared with that between 70% and 90%, suggests that epidemic control is lost somewhere between 50% and 70%.

Figure 1: Epidemic response for a 1000-person initial attack at a fixed time of intervention T = 40, with 90%, 70% and 50% of cases isolated and contacts traced and vaccinated.
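A crude way to see why control is lost somewhere in that band is to screen an approximate effective reproduction number under combined isolation and ring vaccination. The sketch below is a deliberately simple branching-factor approximation, not the paper's dynamic model: it assumes isolation and tracing act independently and multiplicatively, uses the ordinary-smallpox R0 of 7.96 and the 50% post-exposure vaccine efficacy quoted above, and ignores the baseline isolation of severe and symptomatic cases that the full model credits, so its threshold lands higher than the 50%-70% band reported here.

```python
R0 = 7.96          # ordinary smallpox, unvaccinated community
VE_POST = 0.50     # post-exposure vaccine efficacy assumed for contacts

def crude_r_eff(coverage):
    """Effective R when a fraction `coverage` of cases is isolated and
    the same fraction of contacts is traced and vaccinated."""
    return R0 * (1 - coverage) * (1 - coverage * VE_POST)

for p in (0.5, 0.6, 0.7, 0.8, 0.9):
    r = crude_r_eff(p)
    print(f"{p:.0%} isolation/tracing: R_eff = {r:.2f} "
          f"({'declining' if r < 1 else 'growing'})")
```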
The epidemic impact was also examined at varying proportions of case isolation and ring vaccination. With a decrease in the percentage of case isolation and vaccination, the incidence of infections and deaths increases and the epidemic takes longer to control. Figure 2 illustrates the time to control the epidemic in Mumbai for varying rates of case isolation and ring vaccination, with an initial attack size of 1000 infected people. The epidemic will end in less than half a year at 90% case isolation and ring vaccination rates, and within 1 year at 70% of each rate. However, if the rates are 50% each, the epidemic will continue for more than 6 years, as shown in figure 2.

Vaccine doses
A total of 16 813, 37 092 and 82 486 vaccine doses will be needed with 90% of each of case isolation and ring vaccination, at times of intervention on day 20, 30 and 40, respectively. However, at 50% of each rate, a maximum of 2 228 600, 2 336 600 and 2 548 800 vaccine doses are needed at times of intervention on day 20, 30 and 40, respectively (table 1). Vaccine requirements for epidemic control more than double with every 10 days of delay in intervention at higher rates of case isolation and vaccination (70% or 90%).

Hospital beds
Figure 3 shows the maximum number of beds that will be needed in Mumbai by case isolation and contact vaccination rates, and by time of starting interventions, for an attack size of 1000. The required hospital beds more than double with every 10 days of delay. In all the above cases, at the initial attack size of 1000, the maximum beds required do not exceed the total available hospital beds of Mumbai (40 000 beds). However, in the worst case, almost all beds will be used just for smallpox cases. Figure 4 shows that if the attack size is 10 000, available beds will be exceeded in almost all scenarios. The threshold initial attack size was found to be 5000 (at time of intervention T = 40), above which the number of hospital beds needed exceeds the available beds in Mumbai even at 90% case isolation and 90% vaccination.

Mixed scenario
When the base-case input parameters were used, with 1000 people initially infected, interventions starting after 30 days and the assumption that 70% of cases presenting with symptoms are isolated, reducing the percentage of contacts traced to 30% or 20% still allows the epidemic to be controlled, with 216 790 and 981 310 vaccination doses used, resulting in 55 402 and 380 584 deaths, respectively. However, if only 10% of contacts are traced per infected person, the epidemic is not controlled, ending with 1 082 121 deaths and 1 337 100 vaccination doses used. The results are shown in figure 5.

DISCUSSION
In the event of re-emergent smallpox in Mumbai, there are several approaches to mitigating the impact, which will be proportional to the size of the attack. Transmission of infection would be intense because Mumbai has high population density and a young population age structure, coupled with higher contact rates among younger people. 4 66 Smallpox has a mean incubation period of 12 days, so starting vaccination and case isolation at day 20 means, in reality, starting the response 8 days after the first case becomes symptomatic, which, even in the best-resourced country, would be a challenge. It is also likely that the diagnosis may be delayed, given the unfamiliarity of current clinicians with smallpox and the many examples of missed or delayed diagnoses of serious infections, such as Ebola, Middle East Respiratory Syndrome (MERS) coronavirus and smallpox. 67
Every 10 days of delay results in a worsening epidemic. However, diagnosing smallpox may not be that difficult, given its typical clinical presentation, including the centrifugal distribution of the rash, high fever and ocular complications. A rapid response will depend on early diagnosis, the availability of vaccine stockpiles, and physical and surge capacity, including the human resources for isolating infectious cases, tracking contacts and managing the epidemic. The influential predictors of epidemic size are the initial attack size, the time to the start of the intervention, residual vaccine immunity and the percentages of cases isolated and contacts traced and vaccinated. While the initial attack size is not within our control, factors that can be controlled include rapid response, high case isolation and high rates of contact tracing and vaccination. Achieving high case isolation and vaccination rates in Mumbai is critical, as failure to do so will increase the epidemic size and test health system capacity. In scenarios with a delayed response and low case isolation and vaccination rates, the duration of the epidemic may be more than 6 years. Due to high population density and contact rates, a smallpox outbreak could infect hundreds of thousands of people in Mumbai in a very short time and, unless it is quickly controlled, could easily spread to the rest of India and globally. This work shows the importance of a rapid response, which includes vaccination, contact tracing and case isolation. The already overstretched health infrastructure, with respect to the available healthcare workforce, hospital beds and health system capacity in a metropolitan area with a population over 20 million, will be tested during any serious epidemic. There are over 40 000 hospital beds in Mumbai, with about 50% of these beds in private hospitals. 33 42 The total available beds in Mumbai would not be exceeded in the best-case scenario, when the initial attack size is 1000 and case isolation and vaccination rates are high. However, all scenarios will require surge capacity and will affect the ability to provide care for other, non-smallpox illness. With low case isolation or vaccination, or a large attack size, the maximum beds required will exceed the entire capacity very early in the epidemic. Given the large private hospital sector in India, coordination of pandemic planning with private hospitals may be important. India has a more privatised health system than many other countries, with at least 70% of care provided in the private sector. 68 This is a challenge not just for epidemic control, but also for establishing representative disease surveillance across both public and private sectors. The re-emergence of infectious diseases is a real possibility, not just through synthetic biology and genetic engineering, but also through laboratory accidents. In September 2019, a gas explosion occurred at one of the two sites known to house variola, the Russian State Research Centre of Virology and Biotechnology (Vector) in the city of Koltsovo. 69 Despite Russian government denials, there was a real risk of aerosolized virus being propagated through the windows of the Vector building shattered by the shock wave of the explosion, 69 and hence a need for preparedness. Koltsovo, where the explosion occurred, is in the southern part of Russia, bordering China, Mongolia and Kazakhstan, and less than 2500 km from Jammu and Kashmir in India. Health system capacity for detecting unusual epidemics early and responding as rapidly as possible is critical.
A rapid and well-coordinated response will require both physical space for case isolation and quarantine of contacts, and health workers and personnel for contact tracing and for accomplishing vaccination drives. 9 While India has the potential for a large surge in personnel, this will require protection of health workers and incentivisation of community volunteers to conduct contact tracing and case finding. Enough vaccine should be reserved for the health workforce, as well as for community volunteers. During smallpox eradication, India was the most challenging setting, with the failure of mass vaccination attempts. 70 When the strategy was switched to contact tracing and ring vaccination, community volunteers were paid financial incentives. 17 This approach may be required in the event of smallpox re-emergence in India. The first Biosafety Level (BSL) 4 laboratory in India, established in Pune, will enhance capacity for diagnostics and surveillance. A review of smallpox vaccine stockpiles and manufacturing capacity is also important. This study also has lessons for COVID-19 vaccination in India, as the incubation period is similar and COVID-19 is also caused by a respiratory-transmissible virus. Limitations of this study include the unavailability of some data, such as age-specific rates of organ transplants and autoimmune diseases for the Indian population. Age-specific distributions from other countries were adapted for India (such as incidence data for rheumatoid arthritis) to distribute organ transplants and autoimmune diseases for the estimation of immunosuppression. 27 However, we still used a minimum estimate of the immunosuppressed population and did not include diseases such as diabetes or malaria, or the presence of malnutrition, all of which are highly prevalent in India and would worsen the impact of an epidemic. The contact matrix was derived from a study which estimated the age-specific contact rates for India. 4 We estimated that 70% of people over the age of 40 in India were vaccinated against smallpox before 1977. 17 20 44 However, there is uncertainty around the degree of waning of vaccine immunity. 17 Finally, we looked at a large, densely populated city, Mumbai, and studied the epidemic consequences there. This may not be generalisable to other parts of India, as almost 70% of the population of India lives in rural areas and small towns, where transmission would be less intense because of lower population density. However, healthcare facilities, diagnostics and health workers are in short supply in rural areas. 71 72 Further limitations are the lack of consideration of seasonality in virus transmission and of vaccine refusal. We did not set a particular time of the year for the scenario tested; however, smallpox transmission varies with season and is most likely enhanced by dry weather. 73 This could influence the outcomes of a smallpox outbreak in India, where there is only a dry and a wet season. Regarding vaccine refusal, we did not account for this parameter in our study, although many recent outbreaks of vaccine-preventable diseases have been linked to under-vaccinated communities. 74 However, in India, accessibility due to long distances from healthcare facilities is the main factor linked to under-vaccination, and the level of vaccine acceptance is found to be still high, 75 with only 16% of vaccine-hesitant people refusing vaccination. 76
Finally, for this study, we tested the sensitivity of the results to variations in parameters involved in the public health response, such as contacts traced and cases isolated, time to start of the intervention and number of doses delivered each day. This helps inform policy-making for the most effective response within limited resources. However, we acknowledge that we could not vary every parameter involved, and this represents a further limitation.

CONCLUSION
In summary, we have shown a range of possible scenarios of re-emergent smallpox in Mumbai. Speed of response, stockpiling, vaccination, human resources for health and physical space for smallpox treatment and isolation are all influential factors. This study tells us that if smallpox re-emergence occurs, it may have significant health and economic impact, the extent of which will depend on the availability and delivery of interventions such as a vaccine or antiviral agent where these are needed the most, and the capacity for case isolation and treatment. Further research on health systems requirements and capacity across the diverse states and territories of India, across public and private health systems, and on inter-sectoral engagement, especially the involvement of the community, could improve the preparedness and management strategies in the event of re-emergent smallpox or other serious emerging infections.

Contributors: CRM designed the study and developed the research questions, supervised the research, drafted and revised the manuscript, and gave final approval of the manuscript. VC participated in the literature review and study development, developed research questions, performed modelling analysis and drafted and revised the manuscript. BM conducted a literature review, collected the data, performed modelling analysis and drafted and revised the manuscript. AAC helped with the development of the study and participated in the literature review and in manuscript drafting and revision. JN participated in the literature review and study development, and drafted and revised the manuscript. AD collected the data, participated in the literature review and drafted the manuscript.

Competing interests: None declared.

Patient consent for publication: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: All data relevant to the study are included in the article or uploaded as supplemental information. All data used in this study are publicly available online and listed in the references.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content.
EVS 24, Stavanger, Norway, May 13-16, 2009
Advanced Lead-Acid Batteries - the Way forward for Low-Cost Micro and Mild Hybrid Vehicles

The Advanced Lead Acid Battery Consortium has been researching VRLA batteries since 1992, initially for electric vehicle (EV) applications, where it achieved significant life extension in deep-cycle duty. More recently it has focussed its work on hybrid electric vehicle (HEV) applications, where the battery has to operate in High-Rate Partial State-of-Charge (HRPSoC) conditions. Whereas in EV operation failure occurs in the positive plate, in HEV duty failure is due to negative plate sulfation, resulting in rapid loss of capacity. Ways of overcoming this have been investigated successfully and the ALABC is undertaking vehicle demonstration programmes to publicise this work.

Introduction
In a hybrid electric vehicle, the battery has to be maintained in a partial state-of-charge (PSoC) so that it can both accept regenerative charging and also deliver power to assist in propulsion of the vehicle, without the battery becoming overcharged or over-discharged. In this type of application, the failure mode of the lead-acid battery occurs in the negative plate and is due to a progressive build-up of lead sulfate in the negative active material. This gradually leads to a loss of capacity and hence of the ability to provide power when needed. Therefore ALABC research activity has concentrated on ways of avoiding this degradation of the negative plate. These included:
- Periodic conditioning of the battery, i.e. routinely bringing it up to full state-of-charge.
- Proper battery management to keep high-voltage strings in balance.
- Improved grid design to enhance charge acceptance and discharge ability.
- Modifications to the negative plate chemistry and, more specifically, the addition of carbon to the negative active material.

All this work has been very successful and has resulted in very significant lifetime and performance improvements to the lead-acid battery in this application. Only recently, however, has the key role of carbon in the negative plate begun to be appreciated, even if it is not yet fully understood. Successful laboratory demonstration of significantly improved performance has resulted in the Advanced Lead Acid Battery Consortium committing not inconsiderable funds towards demonstrating these batteries in vehicles. The objectives in undertaking this demonstration programme can be described as follows:
- To evaluate the performance of the new generation of advanced lead-acid batteries in extended and representative duty.
- To build an in-depth understanding of the hybrid application.
- To develop the system solution that will be required by the OEMs.
- To raise awareness of the significantly more cost-efficient energy storage option provided by advanced lead-acid batteries.

The success of these tests in Honda Insights is described in this paper, together with details of the conversion of a current Honda Civic to operate with an Effpower bipolar lead-acid battery.
The RHOLAB Project
This project has been previously reported [1], [2] and in essence involved the development of a novel, spirally-wound 2V cell, based on a Hawker Cyclon 8Ah cell, to serve as the building block for a 144V HEV battery. These cells were mounted into specially designed 36V modules with full monitoring of voltage, current and temperature, and controlled in such a way that each cell could be individually conditioned as necessary. At the time of conception of this project, the ALABC work on negative plate chemistry had not reached the state where the concept of conditioning could be abandoned. Figure 1 shows the four modules fitted in the vehicle.

After extensive shake-down work to resolve software issues in the battery management system and to integrate it with the Honda electronics, the vehicle was ready for its extended road test at Millbrook Proving Ground in the UK. This started in August 2006 and, after a few problems with its associated electronics and hardware, reached the intended target of 80,000 km on August 15th 2007. As this was by this time seen as very much a 'generation 1' battery requiring periodic conditioning, the test was terminated. During the test period there were some minor battery pack issues, such as a relay shorting and a cooling fan failure but, by and large, the battery ran well despite not having been produced with the now-preferred negative plate chemistry. As the trial went on, much was learnt about the battery management system and its integration into the vehicle electronics, and the whole system was engineered to become much more reliable. This concept of utilising a VRLA battery with a battery management system, with routine conditioning of the battery, is being utilised by BMW in their EfficientDynamics Stop/Start system [3].

The Effpower Project
The ALABC has used the expertise obtained in the RHOLAB project to convert two other Insights with state-of-the-art lead-acid batteries. One has been converted by Effpower in Sweden, with the assistance of Provector, and utilises a completely new bipolar lead-acid battery. In this case the battery was again fitted in four modules but, because of the efficiencies of the battery design, was able to be fitted into the exact space utilised by the NiMH battery (Figure 2).

While this conversion has not been subjected to a formal test programme as was done with the RHOLAB battery, it is in regular road use in the Gothenburg area of Sweden and some time ago had covered over 30,000 km without any problems.

The UltraBattery Project
The UltraBattery is a completely novel design of battery developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia with the support of the ALABC. The novelty lies in the design of the negative plate, which is part asymmetric supercapacitor and part conventional plate (Figure 3).

This gives the battery the ability to both deliver and receive charge at very high rates, as required in an HEV battery. After some outstanding laboratory testing, which showed the battery outperforming NiMH cells in HEV cycling tests (Figure 4), a set of batteries was obtained from the licensee Furukawa for road trials in an Insight acquired for the purpose.

Again the battery is relatively compact and was fitted as 12 x 12V modules into the exact space utilised by the 144V NiMH battery (Figure 5). It started testing at Millbrook in April 2007 and is pictured in Figure 6 running with the RHOLAB vehicle.
With the combination of a 'generation 2' battery and updated electronics, this vehicle performed outstandingly in its test - frequently running a three-shift day with very stable battery voltages and temperature. In fact the vehicle reached its initial target of 80,000 km only a day later than the RHOLAB car, and without any equalisation or conditioning of the battery being carried out.

As a result of this performance, it was decided to extend the test to 160,000 km - way beyond the normal warranty distance for the NiMH battery. This milestone for advanced lead-acid batteries was reached on January 15th 2008. Thus the vehicle had covered the 160,000 km in barely 9 months. Also, at the end of the test, the battery had still not been equalised or conditioned.

The Test Cycle
As well as being equipped with the battery management system, the vehicle has a very comprehensive data logging system, capturing information on items such as module voltage, current, state of charge and battery temperatures as often as four times per second. Data is also recorded from the vehicle's on-board diagnostic system (OBD) as well as from the Global Positioning System (GPS). Thus, as well as recording any abnormalities in the data, it is possible to locate the vehicle's position and also to see how it was being driven - by looking at throttle position, engine rpm etc. By logging all this information, a massive amount of data has been obtained on how the battery behaves under the hybrid duty cycle. The actual test cycle used is a proven OEM motorway-simulation driving cycle on the Millbrook high-speed bowl. This is capable of moving the battery state-of-charge around as well as putting on the miles quickly to keep testing costs realistic. Figure 7 shows a recorded GPS speed trace of the test.

The data monitoring has also been such that it has been easily possible to identify differences in driver skill in adhering to the test cycle - or, on occasions, where driver errors have resulted in issues with the vehicle.

Typical Data
Figure 8 shows module maximum and minimum voltages for each of the 12 modules plotted on top of each other during a run. It can be seen that the module voltages overlay each other well, indicating uniform operation.

Figure 9 shows the maximum and minimum currents plotted in the same way, while Figure 10 shows the state of charge of each of the 12 modules during a run.

It is quite surprising how uniformly these track, as it is more common for batteries in a string to diverge in SoC quite rapidly, as was discussed in the paper given at EET2007 [4] when the ISOTEST programme was presented. With this trial of the UltraBattery, the modules were within 1.5% of each other at the end of the test - a truly remarkable result, bearing in mind that no equalisation of the batteries took place at all.

The paper given at EET2007 [4] looked at the voltage vs current plots for NiMH batteries as against the lead-acid batteries used in the RHOLAB modules. It was stated that a key objective in replacing the NiMH battery is to maintain a flat curve in these plots. This is particularly important if the control software in the vehicle's Motor Control Module cannot be modified, as in these projects, but it is also a measure of system efficiency. The curve for the standard Honda Insight NiMH battery recorded on an emissions test is shown in Figure 11.
It is interesting to look at comparable plots for the RHOLAB battery. When operated at a state-of-charge of around 70%, the lead-acid battery exhibits higher voltages and a higher apparent impedance on charging, as seen in Figure 12.

It is undesirable for the voltage to exceed 2.5V per cell for more than a few seconds at a time, as this can act to dry out the cell. However, when operated at a rather lower state-of-charge, the RHOLAB cells have a characteristic which is very like that of the NiMH battery, with no undesirable voltage peaks and a flat characteristic, as seen in Figure 13.

As can be seen in Figure 14, the characteristic curve for the UltraBattery is rather different, in that there are voltage peaks during the recharge events. However, it is felt that one effect of the capacitive negative plate is to make the battery less susceptible to problems associated with these high voltages, such as dry-out. The curve is very flat over the rest of the current range, including on many occasions at high charge currents, and is much closer to NiMH behaviour than the earlier batteries.

The vehicle ran well during the testing and no problems with the car during running were battery-related. The overall fuel consumption during the test was 4.73 l/100km - a fraction under 60 mpg. It is not possible to relate this performance to the vehicle with the NiMH battery because, when the original RHOLAB car was investigated at Millbrook, it was run to a different test cycle, reflecting the need to gather a wide range of data in as short a period as possible.

Current Work
While the demonstrations of these three battery types have clearly shown that advanced lead-acid batteries are capable of the performance and life required in this HEV application, this model of Insight is perhaps no longer seen as a current vehicle platform. In the period during which these tests were being carried out, NiMH technology has moved on and the batteries have been further optimized for power; they are capable of higher rates of charge and discharge than experienced in the Insight.

The EALABC acquired a new Honda Civic Hybrid during 2007; this had a NiMH battery of 158V 5.5Ah, as compared with the 144V 6.5Ah in the Insights. The battery also fits compactly in the space between the rear seat and the luggage compartment. The initial investigation showed the battery recording extreme peak discharges of up to 130A (24C) and charge currents of 80A (15C) (Figure 15), as compared with the 15C and 8C recorded in the Insight.

The vehicle was converted in Sweden to use an Effpower bipolar battery (Figure 16). In order to try and marry up with the Honda electronics, the battery was fitted in two blocks - one of 72V and the other of 86V - matching the 158V of the original NiMH battery. This new battery has itself been improved from the one used in the Insight discussed earlier, in that its negative plate formulation has been modified with additional carbon as a result of ALABC research. As well as inhibiting the formation of lead sulfate in HEV operation, this also significantly enhances the ability of the battery to deliver and accept charge at high rates.
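The C-rate figures quoted above are simply peak current divided by rated ampere-hour capacity; a quick sketch of that conversion, using the pack ratings given in the text, is shown below.

```python
def c_rate(current_a: float, capacity_ah: float) -> float:
    """Charge/discharge rate as a multiple of rated capacity (the 'C' rate)."""
    return current_a / capacity_ah

# Civic NiMH pack (5.5 Ah): peaks of 130 A discharge and 80 A regen charge
print(f"Civic: {c_rate(130, 5.5):.0f}C discharge, {c_rate(80, 5.5):.0f}C charge")

# Insight pack (6.5 Ah): the quoted 15C/8C peaks correspond to roughly
print(f"Insight: ~{15 * 6.5:.0f} A discharge, ~{8 * 6.5:.0f} A charge")
```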
After initial shake-down work, the car started testing at Millbrook at the beginning of March 2008. It operated on a mix of aggressive high-speed and city-course running, as indicated by the speed plot in Figure 17. The intention was to track-test the vehicle for 40,000 km and in the meantime to carry out extended testing on a smaller module in the laboratory, using a cycle derived from real data gained on the track. The battery management system on the Honda is designed to protect the battery by limiting its over-voltage during regenerative charging. As NiMH is able to tolerate higher levels of overcharge than a typical VRLA battery - and it was not possible to adjust the Honda electronics to reflect this - it was necessary to fit a voltage clamp to try and limit the voltage of the lead-acid battery to around 2.36V per cell.

The vehicle completed the planned 40,000 km but there were signs of battery deterioration. There had been two failures of the voltage clamp, which resulted in over-voltage of the battery. It was also apparent that the temperature of the battery had been considerably in excess of that planned, and the voltage clamp had not been programmed to allow for this. As a result there was increased chemical activity in the battery, leading to gassing at elevated temperatures when the voltage limit should have been lowered. It was also apparent that the lower amount of cooling of the centre cells of the modules aggravated these temperature problems. Tear-down results confirmed dry-out and sulfation in the battery.

In the meantime, testing of the laboratory module continued, and this completed cycling equivalent to 160,000 km without any problems. As a result of this successful test it has been decided to repeat the vehicle trials. For this next stage of the work, the voltage of the battery will be increased slightly for greater efficiency and less reliance on the voltage clamp, the battery will be split into smaller units for improved cooling, and there will be improved temperature monitoring linked to battery voltage control.

Many of these problems result from having to demonstrate these batteries in vehicles which have electronics designed for different chemistries but, nevertheless, it has been shown that advanced lead-acid can perform very well in this application. It should also be noted that the ALABC has approved plans to try the UltraBattery in a Honda Civic in another project based in the USA.

Summary
Advanced lead-acid batteries have been shown to be a low-cost option for hybrid electric vehicles. While it is always difficult to obtain definitive cost comparisons because of commercial sensitivities, some indications are occasionally made available in conference papers. For example, J. German, in a paper given at AABC 08, indicated a NiMH battery price of $2,000 per kWh [5]. At the same conference, H. Takeshita indicated a 'target' price for Li-Ion of $900 per kWh, not yet realised [6].

Predictive prices for an advanced lead-acid battery of this size are in the region of $250-350 per kWh. Li-Ion battery prices may ultimately be lower than NiMH, but the battery has to be managed at the cell level for safety reasons, which adds very complex electronics and cost. The recent tests with the advanced lead-acid batteries show this level of control not to be necessary.

Recycling issues are also relevant, in that over 90% of lead-acid batteries are profitably recycled worldwide into raw materials capable of being re-used in battery manufacture. This cannot be said to be true of competing chemistries.
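To put the per-kWh figures above into pack terms, the Insight-sized pack quoted earlier (144V, 6.5Ah) stores roughly 0.94 kWh; the sketch below simply multiplies that energy by each quoted price. This is back-of-envelope arithmetic on the figures cited in this paper, not a formal cost analysis.

```python
pack_kwh = 144 * 6.5 / 1000    # 144 V x 6.5 Ah ~ 0.94 kWh (Insight-sized pack)
price_per_kwh = {
    "NiMH (German, AABC 08)": 2000,
    "Li-Ion target (Takeshita)": 900,
    "advanced lead-acid, low": 250,
    "advanced lead-acid, high": 350,
}
for label, usd in price_per_kwh.items():
    print(f"{label}: ~${usd * pack_kwh:,.0f} per pack")
```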
Another factor frequently held against lead-acid is weight. While lead-acid does have a lower specific energy than both NiMH and Li-Ion batteries, its power density is very good. This programme has already demonstrated that the advanced lead-acid designs can be fitted into the same space as the original NiMH batteries. The same is also true of the Civic programme. What should also be remembered is that, because NiMH batteries are notoriously bad in cold conditions, it is necessary to provide a 12V lead-acid battery, in addition to the DC-to-DC converter, in order to start the car in cold weather. With a lead-acid HEV battery in place, this secondary battery probably becomes unnecessary or, at worst, a small back-up battery may be necessary for some loads. With the removal of this battery and the now-unnecessary starter motor, the weight impact of the lead-acid HEV battery becomes almost neutral.

Thus the ALABC remains confident of the future of lead-acid batteries in this environment and is continuing its research and demonstration programmes to ensure that there is a range of different battery options available for the different hybrid types. In this way the cost of hybrid vehicles should reduce, making them more attractive to the consumer, and hence help in the fight to reduce emissions of greenhouse gases.

The Effpower Demonstrations
The assistance of Effpower in providing batteries for the two demonstrations mentioned in this paper is acknowledged. The ALABC and EALABC have provided assistance through Provector Ltd in fitting and integrating these batteries into the vehicles. The ALABC is also funding the testing of the Honda Civic at Millbrook Proving Ground.

The UltraBattery Demonstration
This has been coordinated and funded by the EALABC and ALABC. The advice and assistance of CSIRO and Furukawa, the developers of the battery, is acknowledged. The battery was fitted and integrated into the vehicle by Provector Ltd. Thanks are also due to the staff at the Millbrook Proving Ground in the UK for their advice and assistance during the testing of the car.

Figure 7: A GPS speed plot.
Figure 9: Maximum and minimum currents plotted during a test run. Assist currents are positive.
Figure 11: Voltage vs current plot for a NiMH battery.
Figure 16: The Effpower bipolar battery (right) in the Civic.
2019-04-30T13:07:35.388Z
2009-03-27T00:00:00.000
{ "year": 2009, "sha1": "72d6fed3a68082044b573477f8773c0ac6ae9408", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2032-6653/3/1/61/pdf?version=1526290117", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "72d6fed3a68082044b573477f8773c0ac6ae9408", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
119173862
pes2o/s2orc
v3-fos-license
Generalized $\beta$-Gaussian Ensemble: Equilibrium Measure Method

We investigate the $\beta$-generalized random Hermitian matrix ensemble, sometimes called the chiral ensemble. We give the global asymptotics of the density of eigenvalues, or statistical density. We develop a general method known as the equilibrium measure method. Taking the large-$n$ limit, we will see that the asymptotic density of eigenvalues generalizes the Wigner semicircle law.

Introduction

The generalized $\beta$-Gaussian ensemble generalizes the classical random matrix ensembles, the Gaussian orthogonal, unitary and symplectic ensembles (denoted GOE, GUE and GSE for short, corresponding to the Dyson index $\beta = 1, 2$ and $4$), from these quantized indices to continuous exponents $\beta > 0$. These ensembles possess a joint probability density function (p.d.f.) of real eigenvalues $\lambda_1, \dots, \lambda_n$ of the form
$$P_n(\lambda_1, \dots, \lambda_n) = \frac{1}{Z_n} \prod_{1 \le i < j \le n} |\lambda_i - \lambda_j|^{\beta} \exp\Big(-\frac{\beta}{2} \sum_{i=1}^{n} \lambda_i^2\Big),$$
where $Z_n$ can be evaluated using the Selberg integral. Recently, Dumitriu and Edelman have constructed a tridiagonal matrix model for these ensembles; see [3].

Based on the p.d.f. $P_n$ of the eigenvalues, the (level) density, or one-dimensional marginal eigenvalue density, scaled by the factor $1/\sqrt{2n}$, converges weakly to the famous Wigner semicircle law: for every bounded continuous function $f$ on $\mathbb{R}$,
$$\lim_{n \to \infty} \int_{\mathbb{R}} f\Big(\frac{\lambda}{\sqrt{2n}}\Big)\, h_n(\lambda)\, d\lambda = \frac{2}{\pi} \int_{-1}^{1} f(x)\, \sqrt{1 - x^2}\, dx,$$
where
$$h_n(\lambda_1) = \int_{\mathbb{R}^{n-1}} P_n(\lambda_1, \dots, \lambda_n)\, d\lambda_2 \cdots d\lambda_n.$$
Much further work in this direction of random matrices and eigenvalue asymptotics has been developed in recent years; see [6] for a good reference.

In this work we study a generalization of the Gaussian random matrix ensemble, sometimes called the chiral ensemble when $\beta = 1, 2$ or $4$. We consider the general case $\beta > 0$, in which the joint probability density on $\mathbb{R}^n$ is given by
$$P_{n,\mu}(\lambda_1, \dots, \lambda_n) = \frac{1}{Z_n} \prod_{1 \le i < j \le n} |\lambda_i - \lambda_j|^{\beta} \prod_{i=1}^{n} |\lambda_i|^{2\mu} \exp\Big(-\sum_{i=1}^{n} \lambda_i^2\Big),$$
where $Z_n$ is a normalizing constant and $\mu$ is a positive parameter. Using a general logarithmic potential method, we will prove that the statistical density of the eigenvalues converges in the tight topology, as $n \to +\infty$, to a probability density which generalizes the Wigner semicircle law. Such a result was proved in [1] for $\beta = 2$ by the orthogonal polynomial method.

The paper is organized as follows. In Sections 2 and 3 we give some results from classical potential theory, which will be used, together with some facts about boundary value distributions, to characterize the Cauchy transforms of certain equilibrium measures. In Section 4 we describe the model to be studied, as a physical model, and give the joint probability density. Moreover, we define the statistical density $\nu_n$ of the eigenvalues and explain why the eigenvalues must be rescaled by the factor $\sqrt{n}$. We also state the first main result, Theorem 4.1, which asserts the convergence of the statistical density $\nu_n$ to a probability measure $\nu_{\beta,c}$. We prove that the measure $\nu_{\beta,c}$ is an equilibrium measure, and we compute the exact value of its energy for general $\beta$, after first computing the energy for $\beta = 2$. In Section 5 we give the proof of the first part of Theorem 4.1.

Logarithmic potential

The logarithmic potential of a positive measure $\nu$ on $\mathbb{R}$ is the function $U^{\nu}$ defined by
$$U^{\nu}(x) = \int_{\mathbb{R}} \log \frac{1}{|x - t|}\, d\nu(t).$$
It is well defined, with values in $]-\infty, +\infty]$, if $\nu$ has compact support, or more generally if
$$\int_{\mathbb{R}} \log(1 + |t|)\, d\nu(t) < \infty.$$
Observe that $U^{\nu}(x) + \nu(\mathbb{R}) \log |x| \to 0$ as $|x| \to \infty$.

The Cauchy transform $G_{\nu}$ of a bounded measure $\nu$ on $\mathbb{R}$ is the function defined on $\mathbb{C} \setminus \mathrm{supp}(\nu)$ by
$$G_{\nu}(z) = \int_{\mathbb{R}} \frac{d\nu(t)}{z - t}.$$
The Cauchy transform is holomorphic. Assume that $\mathrm{supp}(\nu) \subset\, ]-\infty, a]$ and that the logarithmic moment above is finite. Then the function
$$F(z) = \int_{\mathbb{R}} \log(z - t)\, d\nu(t)$$
is defined and holomorphic in $\mathbb{C}\, \setminus\, ]-\infty, a]$.
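The tridiagonal model of Dumitriu and Edelman cited above makes these ensembles cheap to sample for any $\beta > 0$, so the semicircle limit can be checked numerically. The sketch below uses the standard normalization of that model, with eigenvalue density proportional to $\prod|\lambda_i-\lambda_j|^\beta \exp(-\sum \lambda_i^2/2)$, which differs from the $1/\sqrt{2n}$ scaling in the text by a constant rescaling of the eigenvalues; the matrix size and bin count are arbitrary demo choices.

```python
# Sketch: sample the beta-Hermite ensemble via the Dumitriu-Edelman
# tridiagonal model (diagonal ~ N(0,1), k-th off-diagonal ~ chi_{beta*k}/sqrt(2))
# and compare the rescaled empirical density with the semicircle law.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def beta_hermite_eigs(n: int, beta: float, rng: np.random.Generator) -> np.ndarray:
    diag = rng.standard_normal(n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1))) / np.sqrt(2.0)
    return eigh_tridiagonal(diag, off, eigvals_only=True)

rng = np.random.default_rng(0)
n, beta = 2000, 1.0
lam = beta_hermite_eigs(n, beta, rng) / np.sqrt(beta * n)  # support -> [-sqrt(2), sqrt(2)]

hist, edges = np.histogram(lam, bins=60, density=True)
x = 0.5 * (edges[:-1] + edges[1:])
r = np.sqrt(2.0)
semicircle = 2.0 / (np.pi * r**2) * np.sqrt(np.clip(r**2 - x**2, 0.0, None))
print(np.max(np.abs(hist - semicircle)))  # small for large n
```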
Furthermore $F'(z) = G_{\nu}(z)$, and $U^{\nu}(x) = -\operatorname{Re} F(x)$ for real $x > a$. In the distribution sense,
$$\frac{d}{dx}\, U^{\nu}(x) = -\,\mathrm{p.v.}\!\int_{\mathbb{R}} \frac{d\nu(t)}{x - t}.$$

We will use some properties of the boundary value distributions of holomorphic functions. Let $f$ be holomorphic in $\mathbb{C} \setminus \mathbb{R}$. It is said to be of moderate growth near $\mathbb{R}$ if, for every compact set $K \subset \mathbb{R}$, there are $\varepsilon > 0$, $N > 0$, and $C > 0$ such that
$$|f(x + iy)| \le C\, |y|^{-N} \qquad (x \in K,\ 0 < |y| < \varepsilon).$$
Then, for all $\varphi \in \mathcal{D}(\mathbb{R})$,
$$\langle T, \varphi \rangle = \lim_{y \to 0^{+}} \int_{\mathbb{R}} \big( f(x + iy) - f(x - iy) \big)\, \varphi(x)\, dx$$
defines a distribution $T$ on $\mathbb{R}$. It is denoted $T = [f]$ and called the difference of boundary values of $f$. One shows that the function $f$ extends as a holomorphic function in $\mathbb{C} \setminus \mathrm{supp}([f])$. In particular, if $[f] = 0$, then $f$ extends as a holomorphic function in $\mathbb{C}$.

For $\alpha \in \mathbb{C}$, the distribution $Y_{\alpha}$ is defined, for $\operatorname{Re}\alpha > 0$, by
$$Y_{\alpha}(x) = \frac{x_{+}^{\alpha - 1}}{\Gamma(\alpha)}.$$
The distribution $Y_{\alpha}$, as a function of $\alpha$, admits an analytic continuation to all $\alpha \in \mathbb{C}$. In particular $Y_0 = \delta_0$, the Dirac measure at $0$. The function $z^{\alpha}$ is of moderate growth near $\mathbb{R}$, and
$$[z^{\alpha}] = -\frac{2 \pi i}{\Gamma(-\alpha)}\, Y_{\alpha + 1}(-x).$$
In particular, when $\alpha = -1$,
$$[z^{-1}] = -2 \pi i\, \delta_0.$$

Proposition 2.1 Let $\nu$ be a bounded positive measure on $\mathbb{R}$.
(i) The Cauchy transform $G_{\nu}$ of $\nu$ is holomorphic in $\mathbb{C} \setminus \mathrm{supp}(\nu)$, of moderate growth near $\mathbb{R}$, and $[G_{\nu}] = -2 \pi i\, \nu$.
(ii) Assume that the support of $\nu$ is compact. Let $F$ be holomorphic in $\mathbb{C} \setminus \mathbb{R}$, of moderate growth near $\mathbb{R}$, such that $[F] = -2 \pi i\, \nu$. Then $F$ is holomorphic in $\mathbb{C} \setminus \mathrm{supp}(\nu)$. If further $F(z) \to 0$ as $z \to \infty$, then $F = G_{\nu}$.

Equilibrium measure: some basic results

Let us first recall some basic facts about the tight topology. All of the results on equilibrium measures presented here can be found in the good reference [11] and the references therein. Let $M_1(\Sigma)$ be the set of probability measures on the closed set $\Sigma \subset \mathbb{R}$. We consider the tight topology: for this topology, a sequence $(\nu_n)$ converges to a measure $\nu$ if, for every continuous bounded function $f$ on $\Sigma$,
$$\lim_{n \to \infty} \int_{\Sigma} f\, d\nu_n = \int_{\Sigma} f\, d\nu.$$
If $\nu$ is a probability measure supported by $\Sigma$, the energy $E(\nu)$ of $\nu$, relative to an external potential $Q$, is defined by
$$E(\nu) = \iint_{\Sigma \times \Sigma} \Big( \log \frac{1}{|s - t|} + \frac{1}{2} Q(s) + \frac{1}{2} Q(t) \Big)\, d\nu(s)\, d\nu(t),$$
which means that
$$E(\nu) = \iint_{\Sigma \times \Sigma} \log \frac{1}{|s - t|}\, d\nu(s)\, d\nu(t) + \int_{\Sigma} Q(s)\, d\nu(s).$$
A straightforward computation shows that $E(\nu)$ is bounded below. Hence we define
$$E^{*} = \inf \{ E(\nu) : \nu \in M_1(\Sigma) \}.$$
Under suitable assumptions on $Q$, one then has the following: there is a unique measure $\nu^{*} \in M_1(\Sigma)$ such that $E(\nu^{*}) = E^{*}$; its potential $U^{\nu^{*}}$ is a continuous function, with $E^{*} \le E(\nu) < \infty$ for every finite-energy $\nu$; and the support of $\nu^{*}$ is compact. This measure $\nu^{*}$ is called the equilibrium measure.

Proposition 3.2 Let $\nu \in M_1(\Sigma)$ with compact support. Assume that the potential $U^{\nu}$ of $\nu$ is continuous and that there is a constant $C$ such that
$$U^{\nu}(x) + \tfrac{1}{2} Q(x) = C \ \text{ on } \mathrm{supp}(\nu), \qquad U^{\nu}(x) + \tfrac{1}{2} Q(x) \ge C \ \text{ on } \Sigma.$$
Then $\nu$ is the equilibrium measure. The constant $C$ is called the (modified) Robin constant. Observe that
$$E^{*} = C + \frac{1}{2} \int_{\Sigma} Q\, d\nu^{*}.$$
It is easy to see how the energy behaves under a linear transformation.

Proposition 3.3 Let $h$ be the linear transformation $h(s) = \alpha s$, $\alpha > 0$, and let $h(\nu)$ denote the image measure of $\nu$ under $h$. Then
$$E_{Q}(h(\nu)) = E_{Q \circ h}(\nu) - \log \alpha.$$
For the proofs of the previous theorem and propositions, see for instance Theorem II.2.3 and Proposition II.3.1 of [4].

Statistics of the generalized Gaussian unitary ensemble

Let $H_n = \mathrm{Herm}(n, \mathbb{F})$ be the vector space of square Hermitian matrices with coefficients in the field $\mathbb{F} = \mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$. For $\mu > -\frac{1}{2}$, we denote by $P_{n,\mu}$ the probability measure on $H_n$ defined by
$$P_{n,\mu}(f) = \frac{1}{C_n} \int_{H_n} f(x)\, |\det x|^{2\mu}\, e^{-\mathrm{tr}(x^2)}\, m_n(dx)$$
for a bounded measurable function $f$, where $m_n$ is the Euclidean measure associated with the usual inner product $\langle x, y \rangle = \mathrm{tr}(xy)$ on $H_n$, and $C_n$ is a normalizing constant, which for $d = 2$ can be evaluated by a Selberg-type integral. We endow the space $H_n$ with the probability measure $P_{n,\mu}$. The probability $P_{n,\mu}$ is invariant under the action of the unitary group $U(n)$ by conjugation, $x \mapsto u x u^{*}$ ($u \in U(n)$).

Spectral density of eigenvalues

Let $f$ be a $U(n)$-invariant function on $H_n$. Then, by the spectral theorem, there exists a symmetric function $\mathcal{F}$ on $\mathbb{R}^n$ such that $f(x) = \mathcal{F}(\lambda_1, \dots, \lambda_n)$. If $f$ is integrable with respect to $P_{n,\mu}$, then by using the Weyl integration formula we obtain the joint eigenvalue density stated in the Introduction. More generally, we will consider $n$ particles free to move on $\mathbb{R}$, in equilibrium at absolute temperature $T$.
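The equilibrium-measure problem just described can also be approximated numerically by discretizing the energy on a grid and minimizing over probability vectors. The sketch below uses $Q(x) = x^2$, whose equilibrium measure is known in closed form (density $(1/\pi)\sqrt{2-x^2}$ on $[-\sqrt{2}, \sqrt{2}]$, a classical result), so the output can be checked; the grid size, the diagonal regularization of the log kernel, and the choice of solver are ad hoc choices for the demo, not part of the paper's method.

```python
# Discretized equilibrium measure: minimize
#   E(w) = sum_ij w_i w_j log(1/|x_i - x_j|) + sum_i w_i Q(x_i)
# over probability vectors w on a grid, for Q(x) = x^2.
import numpy as np
from scipy.optimize import minimize

m = 80
x = np.linspace(-2.0, 2.0, m)
h = x[1] - x[0]
D = np.abs(x[:, None] - x[None, :])
K = -np.log(np.maximum(D, h / 2.0))  # log kernel, diagonal regularized at grid scale
Q = x**2

def energy(w):
    return w @ K @ w + w @ Q

res = minimize(energy, np.full(m, 1.0 / m), method="SLSQP",
               bounds=[(0.0, None)] * m,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
density = res.x / h
exact = np.sqrt(np.clip(2.0 - x**2, 0.0, None)) / np.pi
print(np.max(np.abs(density - exact)))  # modest, shrinks as the grid is refined
```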
A fundamental postulate gives the p.d.f. for the event that the particles are at positions λ 1 , ..., λ n as: Here V n (λ 1 , ..., λ n ) denotes the total potential energy of the system, β := 1 and Z n is a normalizing constant. The term V n (λ 1 , ..., λ n ) is referred to as the Boltzmann factor and Z n := Z n n! is called the (canonical) partition function. Our first result is to study as n go to infinity the asymptotic of the Normalized Counting Measure (Density of States) ν n defined on R as follows: if f is a measurable function, where E n,µ n is the expectation with respect the probability measure on R n By invariance of the measure P n,µ n by the symmetric group, we have that the measure ν n is continue with respect to the Lebesgue measure Let compute the two first moments of the measure ν n : the second moment is: Since for all α > 0, and This suggests that ν n does not converge, and that a scaling of order n β 4 + µ n is necessary. We come to The mean result: the measure ν n converge weakly to some probability measure ν β,c which is an equilibrium measure. Moreover the energy of the equilibrium measure ν β,c is The convergence is in the sense that for every continuous bounded func- Equilibrium measure of generalized Gaussian unitary ensemble For c ≥ 0, β > 0, one considers on Σ = R, the potential The energy of a probability measure µ ∈ M 1 (R) is defined by and let U β,c be the potential of the measure ν β,c , We will give the value of the energy E * β,c in section 3.3. To prove the proposition we need same preliminary results and then applying proposition 2.2. For more convenient notation we shall denote c ′ = 2c β . Putting The function f is holomorphic on the domain C \ (S ∪ {0}), of moderate growth near S ∪ {0}. Proposition 4.3 The difference between the two limits values of , is given by . . There for f extended as holomorphic function on For z near 0, by Taylor expansion where g is an holomorphic function. It follows that Which complete the proof of the proposition. Let denote by G β,c ′ the Cauchy transform of the measure ν β,c ′ : for all z ∈ C \ S, It follows that, if ϕ is a holomorphic function in a neighborhood U of S, and γ is a path in U around S in the positive sense, then in particular, for if z is in the exterior of γ, then We will use the theorem of residues to derive the expression of G c ′ . with simple pole at ω = 0, ω = z and a pole at infinity. Let denote by U β,c ′ the logarithmic potential of the measure ν β,c ′ : for all x ∈ R, The function U β,c ′ is even and We will study the variation of the function The function ϕ is even and (It is not defined on the point x = 0). The last function vanished on S, therefore the function is constant on each connect components of S. Since the function ϕ is even therefore the constant is the same on each components. Let denoted it by C. By making use the proposition 2.2 the equilibrium measure ν * coincide with ν β,c ′ . Energy of equilibrium measure Consider the integral, For c ≥ 0 consider also the integral Recall that the energy for a probability ν is defined by We saw that lim See for instance (Faraut [4]). We will prove this result in proposition 4.6. for more general potential Remark that lim x→±∞ K n (x) = +∞ and lim x→0 K n (x) = +∞, the same hold in the diagonal of R n . Since the function K n is continuous except on the diagonal and 0 where it has as limit +∞. 
Hence it is bounded below and the minimum is realized at some point λ (n) = (λ (n) 1 , · · · , λ (n) n ), which means that inf R n K n (x) = K n (λ (n) ). Let denote by From proposition 4.2, if we replace c by α n the equilibrium measure of the potential 2 β Q α n is ν β,α n , where the density of the equilibrium measure ν β,α n is given by if t S n , S n = [−b n , a n ]∪[a n , b n ] and a n = β 2 1 + α n β − 1 + 2α n β , b n = β 2 1 + α n β + 1 + 2α n β . (1) The probability measure ν β,α n converge weakly to the probability ν β,c . ( where E * β,c is the energy of the equilibrium measure ν β,c . (2) The measure ρ n converge weakly the the equilibrium measure ν β,c . Proposition 4.7 Let (µ n ) n be a positive real sequence. Assume there is some constant c such that lim n→∞ µ n n = c. Then the energy E * β,c is given by For c = 0, one recover's the energy of the β-Gaussian unitary ensemble Proof of lemma 3.5. Step(1) : The probability measures ν β,α n and ν β,c have density respectively f β,α n and f c . It is easy to see that the density f β,α n converges Pointwise to the density f c . Then by applying Fatou lemma we deduce the convergence in the weak topology. Step (2) : We know by definition of the energy that and which can be writing as where E * β,α n = inf and (4.8) So it is enough to prove that the integrals go to 0 when n go to infinity. Recall that the probability measures ν β,α n and ν β,c are supported respectively by S n and S. Since the sequence b n converge to b hence there is some positive constant C such that for all n ∈ N, sup S n ∪S | log |x|| = max(logb n , log b) ≤ C. Take the limit in equation (3.7) and use the facts that ν β,α n and ν β,c are probability measures and the sequence α n converge to c we deduce that Proof of proposition 4.6. Since the positive sequence α n converge to c, then there is two positive constants a 1 , a 2 such that a 1 ≤ α n ≤ a 2 and Hence by Prokhorov criterium this proves that the sequence (ρ n ) n is relatively compact for the weak topology. Therefore there is a converging subsequence: ρ n k to ρ which means, for all bound continuous fonctions on R We will denote by ρ n the subsequence. By symmetry of the kernel k α n the last inequality is valid in R 4 . we obtain for (s, t) ∈ Σ, Hence if we take the infimum we obtain Moreover for the energy one gets, for all n ≥ n 0 aE ℓ β,c+ε (ρ n ) + bE ℓ β,c−ε (ρ n ) ≤ E ℓ β,α n (ρ n ). As n goes to infinity we obtain lim inf n aE ℓ β,c+ε (ρ n ) + bE ℓ β,c−ε (ρ n ) ≤ lim infτ n , hence by the weak convergence of the subsequence ρ n it follow applying the monotone convergence theorem, when ℓ goes to 0, it follows that Since ρ is a probability measure and using the values of a, b we obtain aE β,c+ε (ρ) + bE β,c−ε (ρ) = E β,c (ρ). hence E β,c (ρ) ≤ lim infτ n . We saw from proposition 4.2. that the minimum is realized at the probability measure ν β,c and the minimum is E * β,c . Hence It follows that in the last inequalities we used equation (4.13). Therefore This implies that ρ = ν β,c . We have proved that ν β,c is the only possible limit for a subsequence of the sequence (ρ n ). It follows that the sequence (ρ n ) itself converges: for all bounded continuous function and lim n→∞ τ n = E * β,c . Since the sequence (α n ) converge to c then lim Furthermore if µ is a probability measure then [a, b], the function f β,c (t) > 0 except on subset of S with measure zero. 
Applying Jensen inequality to the exponential function then From lemme 4.5 we have furthermore the last integral exist by the continuity of the function x log x near 0 and the continuous function f c is with compactly support S. So the integral is bounded by some constant say M. Then It follows that lim sup Since α n converge. Hence where γ µ n (k) is defined in equation (3.1). First step. Let n = 2m be an even integer. Then in the last equality we use the fact that Γ(x + 1) = xΓ(x). Take the logarithm of A 2m It is easy to see that for m large enough and by the fact that µ n = cn + o(n), we deduce, that log(k +µ 2m + 1 2 ) = log(k +µ 2m )+log(1+ 1 2(k + µ 2m ) ) = log(k +µ 2m )+ 1 2(k + µ 2m ) +o( 1 m ), By summing both side of (4.14), one gets and Applying Riemann sums for both sums S 1 m and S 2 m , we obtain ). (4.18) Now we will compute the limits of the others terms By simple computation it yields ). Hence Furthermore it is easy to see that the integral B n is a particular case of A n when we take µ n = nc. Then we have Second case β > 0. Define for ν ∈ M 1 (R) the energy where h(t) = 2 β t. Then by proposition 3.3, we obtains We saw from lemma 3.5 that From the first case β = 2 and simple computation we deduce the desired result. Proof of theorem 4.1 Recall the statistical distribution ν n is defined by: for all bounded continuous function f on R, where E n,µ n is the expectation with respect the probability on R n P n,µ n (dλ) = 1 Let Define on R n the function : where Q α n = x 2 + 2α n log 1 |x| and α n = µ n n . The probability P n,µ n concentrates in a neighborhood of the points where the function K n (x) attains its infimum: Proposition 5.1 Let ε > 0 and A n,ε = x ∈ R n | K n (x) ≤ (E * β,c + ε)n 2 . Then A n,ε is compact and lim n→∞ P n,µ n (A n,ε ) = 1. This proposition can be found in [4], lemma IV.5.2. We give the proof. Proof. Recall that h α n (x) = Q α n (x) − log(1 + x 2 ). Since h α n is lower semicontinuous and then A n,ε is closed and bounded hence it is compact. Let ε > 0, from the definition of A n,ε we have on R n \ A n,ε K n (x) > (E * β,c + ε)n 2 , then P n,µ n (R n \ A n,ε ) ≤ Using all those arguments we obtain for n large enough P n,µ n (R n \ A n,ε ) ≤ Γ(c + 1 2 ) + ε n e − ε 2 n 2 . Which complete the proof. This implies that the sequence σ n,ε is relatively compact for the weak topology. There is a sequence n j going to ∞ such that the subsequence σ n j ,ε converges in the weak topology: lim n→∞ σ n j ,ε = σ ε . We may also assume in the weak topology that lim j→∞ ν n j = lim sup n ν n .
2014-08-30T15:37:39.000Z
2014-08-30T00:00:00.000
{ "year": 2014, "sha1": "1e4088312181c67a4f209f8b10de5a0c6d48e11d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1e4088312181c67a4f209f8b10de5a0c6d48e11d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
4593650
pes2o/s2orc
v3-fos-license
Drain Current Stress-Induced Instability in Amorphous InGaZnO Thin-Film Transistors with Different Active Layer Thicknesses In this study, the initial electrical properties, positive gate bias stress (PBS), and drain current stress (DCS)-induced instabilities of amorphous indium gallium zinc oxide (a-IGZO) thin-film transistors (TFTs) with various active layer thicknesses (TIGZO) are investigated. As the TIGZO increased, the turn-on voltage (Von) decreased, while the subthreshold swing slightly increased. Furthermore, the mobility of over 13 cm2·V−1·s−1 and the negligible hysteresis of ~0.5 V are obtained in all of the a-IGZO TFTs, regardless of the TIGZO. The PBS results exhibit that the Von shift is aggravated as the TIGZO decreases. In addition, the DCS-induced instability in the a-IGZO TFTs with various TIGZO values is revealed using current–voltage and capacitance–voltage (C–V) measurements. An anomalous hump phenomenon is only observed in the off state of the gate-to-source (Cgs) curve for all of the a-IGZO TFTs. This is due to the impact ionization that occurs near the drain side of the channel and the generated holes that flow towards the source side along the back-channel interface under the lateral electric field, which cause a lowered potential barrier near the source side. As the TIGZO value increased, the hump in the off state of the Cgs curve was gradually weakened. Introduction Recently, amorphous indium gallium zinc oxide (a-IGZO), as a representative of an amorphous metal oxide-based semiconductor, has been widely investigated for use in the active layer of thin-film transistors (TFTs) due to its high electron mobility, good transparency in visible light, chemical and thermal stability, low temperature processing, and smooth surface [1][2][3][4]. The a-IGZO TFT with excellent electrical properties, such as high mobility (µ) of over 10 cm 2 ·V −1 ·s −1 and low values of subthreshold swing, has become one of the research hotspots for the advanced display application in next-generation active-matrix liquid crystal displays (AM-LCDs) and active-matrix organic light-emitting diodes (AM-OLEDs) [5][6][7][8]. Hitherto, AM-OLEDs driven by the a-IGZO TFTs involve two or three transistors and one capacitor current-biased voltage-programmed pixel circuit. Therefore, the stability of the a-IGZO TFTs under long-term current-bias is a critical issue for these circuits in AM-OLEDs. However, the a-IGZO TFTs inevitably suffer gate and drain bias stresses during practical operation conditions, leading to device instability and hindering their development for commercial products [9,10]. Fujii et al. [11] have investigated the increase in internal temperature of the IGZO TFTs when the device was operated in the saturation region. Choi et al. [12] have reported that the electron-hole pair generation by impact ionization near the drain side contributed to the negative shift of the threshold voltage of IGZO TFTs with wide channel width under a high gate and drain bias stress. Valdinoci et al. [13] have reported that the electron-hole pair generation by impact ionization near the drain region caused the floating body effect in high µ poly-Si TFTs. Consequently, the electrical stability under drain current stress was considered to be an important issue, especially for high-µ oxide TFTs. Moreover, the active layer thickness is an important parameter to adjust device electrical properties, such as on/off ratio, threshold voltage, and field effect mobility [14][15][16]. 
As reported in previous publications, the device performance is significantly influenced by the semiconductor/gate insulator (GI) interfacial trap density [17,18] and the active layer trap density [19]; the total trap density increases with increasing active layer thickness [20], which can effectively modify the threshold voltage and the field effect mobility. Therefore, the impact of the active layer thickness (T IGZO) on the instability induced by positive gate bias stress (PBS) and drain current stress (DCS) in a-IGZO TFTs should be well investigated. In this study, the initial electrical properties and the PBS- and DCS-induced instabilities of a-IGZO TFTs with various T IGZO are investigated. Moreover, the DCS-induced instability in the a-IGZO TFTs with various T IGZO is revealed by the combination of current-voltage (I-V) and capacitance-voltage (C-V) measurements.

Experimental

A schematic cross-sectional view and the fabrication process of the bottom-gate IGZO TFT are shown in Figure 1. The fabrication procedure for the a-IGZO TFT is as follows. A chromium (Cr) gate electrode is first formed on a glass substrate. A SiOx gate insulator (GI) with a thickness of 150 nm is then deposited at 350 °C by plasma-enhanced chemical vapor deposition (PECVD). The a-IGZO layers with thicknesses of 25 nm, 45 nm, 75 nm, and 100 nm are deposited at 160 °C from a sintered IGZO ceramic target by direct current (DC) magnetron sputtering with a mixed gas of Ar/O2 = 29.4/0.6 sccm at a deposition pressure of 1 Pa. After patterning the IGZO film as the active channel, a SiOx film (200 nm) is deposited as an etch stopper by PECVD. Source and drain electrodes are formed using indium tin oxide (ITO) via contact holes. A 200-nm-thick SiOx passivation layer is also deposited by PECVD. Finally, the IGZO TFTs are annealed in N2 ambient at 350 °C for 1 h before electrical measurements. The channel width (W) and length (L) of the IGZO TFTs are 50 µm and 20 µm, respectively. All of the I-V characteristics are measured using an Agilent 4156C precision semiconductor parameter analyzer. For the C-V measurements, the channel capacitance (C gc), the gate-to-source capacitance (C gs), and the gate-to-drain capacitance (C gd) are measured at 1 kHz and an alternating current (AC) level of 100 mV. All of the measurements are carried out at room temperature in ambient air.

Results and Discussion

To investigate the impact of thickness on the chemical properties and bonding states of the IGZO films, X-ray photoelectron spectroscopy (XPS, ESCALAB250Xi, Thermo Fisher Scientific, Waltham, MA, USA) measurements are performed. Figure 2 shows the O 1s XPS spectra of the IGZO films with various thicknesses. The O 1s spectra can be resolved into three nearly Gaussian peaks approximately centered at 530.7 eV, 531.4 eV, and 532.6 eV. The peaks at binding energies of 530.7 eV (labeled O M), 531.4 eV (labeled O V), and 532.6 eV (labeled O H) are attributed to O2− ions combined with the metal atoms, to oxygen deficiency, and to hydroxyl groups in a stoichiometric IGZO structure, respectively [21]. The positions, areas, and area ratios of the three O 1s peaks for the IGZO films with various thicknesses are summarized in Table 1. Figure 3 illustrates the C-V plot as a function of the IGZO thickness in the ITO/IGZO/SiO2/Cr stack structure. It is noted that an increase in the IGZO thickness induces a negative shift of the flat band voltage (V FB).
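The three-peak decomposition of the O 1s spectra described above can be reproduced with a standard least-squares fit. In the sketch below, `be` and `counts` stand for measured binding energies and intensities (no real data are included); pinning the peak centers at the reported 530.7, 531.4, and 532.6 eV values and the starting widths are simplifying assumptions, not the authors' fitting protocol.

```python
# Sketch: fit the O 1s spectrum as a sum of three Gaussians and report the
# area fractions (O_M : O_V : O_H). Peak centers are fixed at the reported
# binding energies, an assumption made for this demo.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def o1s_model(x, a1, s1, a2, s2, a3, s3):
    return (gauss(x, a1, 530.7, s1)
            + gauss(x, a2, 531.4, s2)
            + gauss(x, a3, 532.6, s3))

def o1s_area_fractions(be, counts):
    p0 = [counts.max(), 0.6, 0.5 * counts.max(), 0.6, 0.3 * counts.max(), 0.6]
    popt, _ = curve_fit(o1s_model, be, counts, p0=p0, bounds=(0, np.inf))
    areas = np.array([popt[0] * popt[1], popt[2] * popt[3], popt[4] * popt[5]])
    return areas / areas.sum()  # Gaussian area ~ amplitude * width, common factor cancels
```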
The variation of the V FB in the negative direction implies that the threshold voltage (V th) of TFTs based on the ITO/IGZO/SiO2/Cr stack structure can be adjusted by using IGZO layers of various thicknesses. In addition, the maximum negative shift of V FB is observed for the IGZO film with a thickness of 100 nm, which contributes to the largest negative shift of the V th. The extracted electrical parameters are summarized in Table 2. The saturation mobility µ sat is calculated by fitting a straight line to the plot of the square root of I DS versus V GS, based on the following equation [22]:
$$I_{DS} = \frac{W}{2L}\, \mu_{sat}\, C_{SiOx}\, \left( V_{GS} - V_{th} \right)^2,$$
where W and L are the channel width and length, respectively, and C SiOx is the capacitance per unit area of the GI. When the T IGZO is increased from 25 nm to 100 nm, the µ sat is slightly degraded, from 14.17 cm2·V−1·s−1 to 13.04 cm2·V−1·s−1. The µ sat is affected by the quality of the active layer and of the a-IGZO/GI interface. To confirm the influence of the T IGZO on the quality of the a-IGZO/GI interface, the hysteresis behaviors of the IGZO TFTs with various T IGZO are extracted, as listed in Table 1. An identically negligible clockwise hysteresis is obtained regardless of the T IGZO, indicating that the good quality of the IGZO/GI interface is well maintained during the fabrication processes for all of the IGZO TFTs. Moreover, the V on and SS values change significantly, from 2.32 V and 323 mV/dec. in the 25-nm-thick IGZO TFT to −0.33 V and 475 mV/dec. in the 100-nm-thick IGZO TFT, respectively. The degraded SS value and the shift of V on in the negative V GS direction can be commonly interpreted as consequences of the total defect states and free carrier numbers increasing as the T IGZO increases, which is consistent with previous publications [23,24] and in agreement with the C-V measurements in Figure 3. Generally, the SS value is an indicator of the maximum area density of states (N t), including the interfacial traps (D it) and the semiconductor bulk traps (N bulk). The N t value can be extracted from the following equation [25]:
$$N_t = \left[ \frac{SS \cdot \log(e)}{kT/q} - 1 \right] \frac{C_{SiOx}}{q},$$
where q is the electron charge and k is the Boltzmann constant. The N t values were 6.53 × 10^11, 7.62 × 10^11, 8.83 × 10^11, and 1.03 × 10^12 cm−2·eV−1 for the IGZO TFTs with T IGZO values of 25 nm, 45 nm, 75 nm, and 100 nm, respectively. Obviously, N t increases with increasing T IGZO, which is consistent with the XPS results. The results show that the change in N t mainly originates from N bulk, owing to the similar a-IGZO/GI interfacial quality. To confirm the uniformity and reproducibility of the a-IGZO TFTs with various T IGZO, the I DS -V GS curves of 13 individual devices, measured at a V DS of 20.1 V, are shown in Figure 5. The corresponding electrical properties, such as µ sat, V on, SS, and hysteresis, are listed in Table 3. Notably, the statistical distribution of all of the parameters has the same tendency as described in Table 1 and shows small standard deviations, indicating very good reproducibility of the fabricated a-IGZO TFTs. To investigate the impact of the T IGZO on the stability of the a-IGZO TFTs, positive bias stress (PBS) is carried out. Figure 6a-d shows the variation in the transfer characteristics of the a-IGZO TFTs with various T IGZO under PBS with a V GS value of 20 V. The variation in V on (∆V on) as a function of PBS duration for the a-IGZO TFTs with various T IGZO values is shown in Figure 6e.
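Both extractions above follow directly from the standard square-law and subthreshold-swing relations given in the reconstructed equations. The sketch below implements them; the example C SiOx value assumes a 150 nm SiOx insulator with a relative permittivity of about 3.9, which is an assumption rather than a number reported in the paper.

```python
# Sketch: extract mu_sat from the slope of sqrt(I_DS) vs V_GS, and the
# maximum trap density N_t from the subthreshold swing SS.
import numpy as np

Q_E = 1.602e-19          # electron charge, C
KT = 0.0259 * Q_E        # thermal energy at room temperature, J

def mu_sat(vgs, ids, w, l, c_ox):
    """Saturation mobility; w, l in m, c_ox in F/m^2, arrays vgs (V), ids (A)."""
    slope, _ = np.polyfit(vgs, np.sqrt(ids), 1)
    return 2.0 * l * slope**2 / (w * c_ox)

def n_t(ss_mv_per_dec, c_ox):
    """Maximum area trap density (cm^-2 eV^-1 if c_ox is given in F/cm^2)."""
    ss = ss_mv_per_dec * 1e-3  # V/dec
    return (ss * Q_E / (KT * np.log(10.0)) - 1.0) * c_ox / Q_E

# 150 nm SiOx gate insulator: C_ox ~ 2.3e-8 F/cm^2 (eps_r ~ 3.9 assumed)
print(n_t(323.0, 2.3e-8))  # ~6.3e11 cm^-2 eV^-1, close to the reported 6.53e11
```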
It is found that the transfer characteristics for all of the TFTs under PBS shift parallel in the positive V GS direction without SS degradation, indicating that the electrons are trapped at the interface of the a-IGZO or in the GI without introducing any defects. When the T IGZO is decreased from 100 nm to 25 nm, the ∆V on is remarkably increased from 0.52 V to 1.85 V after the stress duration of 10 4 s. The obtained results can be explained by the vertical electrical field distribution. Generally, the electric potential exponentially declines inside the active layer, and has a maximum transfer length called the Debye length. For the a-IGZO TFT, the calculated Debye length was~40 nm [19]. When the T IGZO is less than Debye transfer length (T IGZO = 25 nm), the surface potential will exponentially decline into the whole active layer. Therefore, with the decrease in the T IGZO value, the electrical field will be enhanced. Under PBS, the electrons in the thinner T IGZO will be accelerated by the enhanced surface field, which are accumulated by electrical field energy and are trapped at the interface of the a-IGZO/GI or in the GI under the positive bias, leading to the large positive V GS shift. When the T IGZO increased to more than the Debye length of 40 nm, the electric field at the front-interface becomes lower, contributing to the few electrons that are trapped at the front-interface, which exhibits the small ∆V on with the increase in the T IGZO value. To clarify the mechanism of the DCS-induced instability in the a-IGZO TFTs with various T IGZO , the C-V analyses of C gc , C gs , and C gd before and after DCS duration of 10 4 s are carried out, as shown in Figure 8. Compared with the C gc curves of the a-IGZO TFTs with various T IGZO values in the initial stage and after DCS duration, all of the C gc curves exhibited a positive V GS shift with distortion in the off state of the C-V curves. The shift of the C-V curves is weakened as the T IGZO value increases, which has a similar tendency to the I-V curves, as shown in Figure 7. However, the shift amplitude of the C-V curves is smaller than that of the I-V curves, indicating that the less free carriers are trapped, or the trapped carriers are partly de-trapped during the C-V measurement after the DCS. Furthermore, the hump phenomenon in the off state of the C-V curves becomes weakened as the T IGZO values increase, which is hardly observed in the I-V curves. To further investigate the origin of the hump phenomenon in the off state of the C-V curves, the C gs and C gd values before and after the DCS are measured. Note that the both C gs and C gd curves exhibit a parallel shift in the positive V GS direction. However, the hump phenomenon is only observed in the C gs curve rather than the C gd curve during the DCS. In terms of the a-IGZO TFT with a T IGZO of 25 nm under the DCS (V DS = V GS ), the electrons are transported from the source to drain side along the front-channel interface, which contributes to a depletion region near the drain side. Combined with the case of 25-nm thick IGZO TFT under the PBS, in the initial stage of DCS (<100 s), the electrons are accelerated to the front-channel under the high vertical electric field. Then, they are trapped at the interface of the a-IGZO/GI or injected into the GI, resulting in a significantly positive V GS shift of the transfer curve. 
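The roughly 40 nm Debye length quoted above can be sanity-checked from $L_D = \sqrt{\varepsilon_r \varepsilon_0 k T / (q^2 n)}$. The relative permittivity (~10) and free-carrier density (~10^16 cm^-3) used below are assumed order-of-magnitude values for a-IGZO, not figures from this paper.

```python
# Quick order-of-magnitude check of the ~40 nm Debye length.
import numpy as np

EPS0, Q, KT = 8.854e-12, 1.602e-19, 0.0259 * 1.602e-19  # SI units

def debye_length_nm(eps_r: float, n_cm3: float) -> float:
    n = n_cm3 * 1e6  # cm^-3 -> m^-3
    return np.sqrt(eps_r * EPS0 * KT / (Q**2 * n)) * 1e9

print(debye_length_nm(10.0, 1.0e16))  # ~38 nm, consistent with the quoted ~40 nm
```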
Simultaneously, the electrons are accelerated from the source to the drain side under the lateral electric field, resulting in the impact ionization occurring at the drain side of the channel [12]. Subsequently, the electron-hole pairs are generated by impact ionization near the drain side. The generated electrons and holes are collected at the front-channel and the etch-stopper/IGZO (back-channel) interfaces, respectively. The generated holes flow towards the source side along the back-channel interface and cause a lowered potential barrier near the source side, leading to the additional charge response in the C-V measurement, which contributes to the hump in the off state of the C gs curve. The schematic diagram of DCS-induced degradation in the IGZO TFT with the T IGZO of 25 nm is illustrated in Figure 9a. In the subsequent stage (>100 s), with the extension of DCS duration, the more generated holes are accumulated near the source side, which contributes to the increase in the body potential. Therefore, the ∆V on of the transfer curve is weakened with the DCS duration. When the T IGZO value is increased to 45 nm, a similar phenomenon is observed in the a-IGZO TFT under the DCS of 10 4 s. Due to the reduction of the vertical electric field, the amount of the trapped electrons are decreased at the interface of the channel/GI or into the GI, leading to the smaller ∆V on of the transfer curves compared with the 25-nm thick IGZO TFT, which is in agreement with the I-V and C-V results. Meanwhile, the impact ionization occurs near the drain side under the lateral electric field. The electrons are accelerated from the source to the drain side, which induces the generation of the electron-hole pair near the drain side. The generated holes drift towards the source side along the back-channel interface. Due to the amount of free electrons that increase with the increase in the T IGZO , the recombination probability of the holes and electrons are enlarged during the hole drifting. The number of the collected holes at the source side is reduced, contributing to the small hump in the off state of the C gs curve. When the T IGZO is further increased to 75 nm or 100 nm, the positive V GS shift of the transfer curves is significantly decreased due to the weaker vertical electric field with the increase in the T IGZO value, contributing to the slightly positive V GS shift of the I-V and C-V curves. The schematic diagram of the mechanism of DCS-induced instability in the IGZO TFT with the thicker T IGZO is illustrated in Figure 9b. The generated holes induced by the impact ionization in the drain region are drifted from the drain to the source side along the back-channel under the vertical and lateral electric fields. The holes would suffer easily from the recombination with the more free electrons in the thicker IGZO layer. Therefore, the slight hump in the off state of the C gs curve is attributed to the few holes that are accumulated at the back-channel near the source side. Besides the T IGZO value, the architecture of devices also plays a critical role in the DCS-induced instability of the TFTs. On the basis of our previous publication [26], the role of impact ionization is strongly dependent on channel scale, and exhibits two types of dependences on channel length and width. When the DCS is applied to the TFTs with a fixed channel length and different channel widths, the stronger impact ionization can be observed for the wider channel width TFT, leading to the high heating temperature. 
On the other hand, when the DCS is carried out on the devices with a fixed channel width and different channel lengths, the stronger impact ionization can be obtained for the shorter channel length TFT. Therefore, besides the proper T IGZO , the a-IGZO TFTs with the relatively long length and short width may effectively minimize the impact ionization effect, improving the DCS-induced stability of the a-IGZO TFTs. Conclusions In this study, the initial electrical properties, PBS, and DCS-induced instabilities of a-IGZO TFTs with various T IGZO are investigated. As the T IGZO values increased, the V on decreased, while the SS slightly increased because the total defect states and free carrier numbers were increased as the increase in the T IGZO . It is found that the ∆V on under PBS is aggravated as the decrease in the T IGZO , which is due to the enhancement of the vertical electrical field in the channel. In addition, the DCS-induced instability in the a-IGZO TFTs with various T IGZO values is revealed by the combination of I-V and C-V measurements. The C-V results indicate that an anomalous hump phenomenon is only observed in the off state of the C gs curve for all of the a-IGZO TFTs. This is because the impact ionization occurs near the drain side of the channel and the generated holes flow towards the source side along the back-channel interface under the lateral electric field, which causes a lowered potential barrier near the source side. Since the amount of free electrons increase with the increase in the T IGZO values, the recombination probability of the generated holes and electrons are enlarged during the hole drifting, leading to the weakened hump phenomenon as the the T IGZO values increased. This study points out that material and fabrication engineering in the drain region should be well considered, even for the high-performance oxide TFTs.
2018-04-12T17:43:24.490Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "40ee02750c8e43410a6cbee859715ff281fcc61f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/11/4/559/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "40ee02750c8e43410a6cbee859715ff281fcc61f", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
266117583
pes2o/s2orc
v3-fos-license
Novel Mini-Invasive Surgical Technique for Treating Fifth Metacarpal Neck Fractures: A Case Report

Patient: Male, 25-year-old
Final Diagnosis: Fracture of the fifth metacarpal bone
Symptoms: Swelling • pain • functional limitation and malrotation of the 5th finger
Clinical Procedure: Surgical intervention • splinting • hand rehabilitation
Specialty: Orthopedics and Traumatology • Plastic Surgery
Objective: Unusual setting of medical care

Background: Fracture of the fifth metacarpal of the hand is due to trauma to the clenched fist. A non-displaced fracture can be treated by splinting and immobilization, but a fracture dislocation requires individualized management to ensure the return of function. The Jahss maneuver for reduction of volarly displaced metacarpal neck fractures involves flexion of the metacarpophalangeal and proximal interphalangeal joints at 90°, with the proximal phalanx used to reduce the metacarpal head. This report is of a 25-year-old male Italian pianist with a displaced fifth metacarpal neck fracture successfully treated by reduction using the Jahss maneuver and K-wire fixation of the subchondral bone of the metacarpal head.

Case Report: A pianist presented with a trauma to his right hand due to punching a wall. Radiographic images demonstrated an angulated, displaced right fifth metacarpal neck fracture. A specific approach was chosen, considering the complexity of the musical movements and the patient's performance needs. After reduction of the fracture by the Jahss maneuver, 2 retrograde cross-pinning K-wires were inserted at the subchondral bone of the metacarpal head. Healing under splinting was uneventful, and the K-wires were removed after 45 days. At 4 months after surgery, the patient had complete recovery of both range of motion and strength.

Conclusions: Our technique avoided piercing the metacarpophalangeal joint capsule, preventing extensor tendon damage, dislocation, instability, and pain and retraction of the extensor cuff. This novel mini-invasive technique successfully achieved early metacarpophalangeal joint motion, joint stability, and complete recovery of movements in all planes.
Background

Metacarpal fractures alone account for about 40% of all hand fractures, generally in young, healthy men, and lead to absences from work [1]. Fifth metacarpal neck fractures are common, usually resulting from a direct, axial trauma (eg, from punching with a closed fist); this fracture is sometimes called a boxer's fracture [2]. Transverse metacarpal neck fractures often result in an apex dorsal angulation. A biomechanically significant decrease in flexor tendon efficiency occurs in fifth metacarpal neck fractures with angulation over 30°, due to slack in the flexor digiti minimi and the third volar interosseous [3]. Metacarpal neck fractures with serious rotation or shortening cannot be effectively controlled through entirely non-operative means; the fracture is eligible for closed methods only if there is no shortening, angulation, or rotational malalignment [1][2][3]. First, fracture reduction should be achieved by the Jahss maneuver, as recently proposed [4]; correcting rotational alignment is the most important factor in reduction. The common techniques for fixation of unstable metacarpal neck fractures that cannot be treated by casting alone are closed reduction and internal fixation, and open reduction and internal fixation using plates and screws [3]. Closed reduction and internal fixation is the treatment of choice for isolated metacarpal neck fractures not meeting the criteria for non-operative treatment; intramedullary pinning, percutaneous pinning, bouquet pinning, and minimally invasive pinning can be valid alternatives [5][6][7][8][9]. All of these techniques can, however, lead to complications and drawbacks [10][11][12][13][14][15]. This report is of a 25-year-old male pianist with a displaced fifth metacarpal neck fracture successfully treated by reduction using the Jahss maneuver and K-wire fixation of the subchondral bone of the metacarpal head.
Case Report

A 25-year-old, right-handed, male professional pianist presented to the hand clinic with a trauma (a punch against a wall) to his right hand, sustained 2 days earlier. Physical examination demonstrated swelling, pain, functional limitation, and malrotation of the fifth finger. Radiographic images confirmed an angulated, displaced right fifth metacarpal neck fracture that did not involve the articular surface (Figure 1). The patient's pathological anamnesis, surgical history, and family medical history were not relevant to the case. The patient's functional goal was a complete return to the pre-injury range of motion and bone stability, especially the achievement of abduction and extension. The surgical procedure was performed under locoregional anesthesia with a pneumatic tourniquet, using an intraoperative image intensifier. We reduced the fracture by the Jahss maneuver, flexing the metacarpophalangeal and proximal interphalangeal joints to 90° and pressing upward on the flexed finger to correct the angulation. After the image intensifier confirmed that fracture reduction had been achieved, a 1.6-mm K-wire, mounted in a wire-driver drill, was inserted into the metacarpal head in a retrograde direction from the ulnar side of the metacarpal head, while the reduction was maintained manually. Similarly, another 1.4-mm K-wire was inserted in a cross-retrograde direction from the radial side of the metacarpal head and drawn back gently from the base of the metacarpal bone until its distal tip was situated at the subchondral level of the metacarpal head. The image intensifier confirmed good fracture alignment and stability on passive mobilization (Figure 2). Final intensifier pictures were obtained (Figure 3). A customized ulnar gutter splint, including the forearm and hand, was applied with the wrist extended 20°, the metacarpophalangeal joint flexed 60°, and the interphalangeal joints in complete extension. At 1 week after surgery, the splint was changed to a thermoplastic one, and physical therapy at the piano was started. At 1 month, after radiographic confirmation of bone healing, the first K-wire was removed; the second was removed 45 days after surgery. At 3 months after surgery, full range of motion was achieved and, at 4 months, complete recovery of strength (Figure 4). The patient agreed to participate in this case report and gave informed consent.

Discussion

Several surgical techniques [5,6] have been described for these kinds of fractures, and there is currently no consensus regarding the optimal fixation technique. We believe minimally invasive fixation could be promising and, in this report, describe a new minimally invasive surgical technique for a fifth metacarpal neck fracture in a high-demand young pianist. Bouquet pinning is performed by placing 3 K-wires with a dorsal bend [7]. This minimally invasive technique provides good postoperative range of motion, since avoiding prolonged immobilization prevents tendon adhesion and joint contracture. However, bouquet pinning also represents a complex procedure with a long learning curve, and it is performed through a minimally open access, producing a dorsoulnar scar [8].
Intramedullary fixation is performed with antegrade or retrograde placement; compared with pinning, it improves range of motion and has a lower incidence of shortening, but at a greater economic cost and with a higher risk of limited rotational stability and nonunion/malunion [8,10,11]. Moreover, intramedullary fixation can jeopardize the terminal divisions of the dorsal ulnar nerve branch, causing neuritis [12].

Percutaneous pinning includes the anterograde and retrograde intramedullary techniques, percutaneous transverse pinning, and retrograde cross-pinning fixation [9,10]. A cadaveric anatomical study demonstrated that this closed percutaneous approach can damage the surrounding tendons and neurovascular structures. In particular, retrograde pinning has been shown to produce injury to both the extensor digitorum communis and the extensor digiti minimi tendons, the anterograde technique to the extensor carpi ulnaris, transverse pinning to the dorsal branch of the ulnar nerve, and retrograde cross-pinning to the digital branches of the dorsal cutaneous ulnar nerve [12].

Plate and screw fixation can be accomplished with retraction of the extensor tendons and subperiosteal exposure of the metacarpal neck; the volar cartilage can also make fixation of the distal fracture fragment difficult [13]. This technique offers stable fixation and can be used when comminution precludes closed reduction and percutaneous pinning. Complications include a higher rate of stiffness, metacarpal head avascular necrosis, extensor tendon injury, and adhesion [13][14][15].

In a recent report, the importance of minimally invasive pinning techniques was pointed out and a single-K-wire retrograde pinning technique was described [9]. However, the authors describe surgical access at the base of the fifth metacarpal together with protection of the branches of the ulnar nerve. We believe our approach may be better, as no surgical incision is needed, so it produces no scar on the dorsum of the hand. Moreover, our technique does not require bending of K-wires, making their insertion technically easier. Also, the use of 2 wires that are removed at different times makes early mobilization easier, with a faster return to work (in our case, playing music).

Our technique avoids piercing the metacarpophalangeal joint capsule and thereby avoids traumatizing the extensor tendon cuff, which can cause extensor tendon dislocation and instability, as well as pain and retraction of the extensor cuff. For patients who need high-demand functionality of the metacarpophalangeal joint, such as pianists, these surgical complications can lead to loss of extension and abduction, movements whose loss, even when minimal, is a great harm to a pianist. This report has limitations. As it is a single case, further investigation and clinical studies should be conducted to evaluate whether this new technique could become a standard approach.

Conclusions

We describe a novel minimally invasive surgical technique for a displaced fifth metacarpal neck fracture in a young pianist with high-demand functionality. Unlike other techniques, our operative approach had the advantage of preventing extensor tendon dislocation and instability and nerve injury, and it allowed early joint motion, joint stability, and complete recovery of movements in all planes.

Further studies should be conducted to evaluate whether this new technique could become a standard approach.
Declaration of Figures' Authenticity

All figures submitted have been created by the authors, who confirm that the images are original, with no duplication, and have not been previously published in whole or in part.

Figure 2. Intraoperative view showing reduction and fixation of the fracture.
Figure 3. Postoperative image intensifier picture confirming good fracture alignment and stability.
Figure 4. Follow-up image at 4 months showing full range of motion.
2023-12-09T16:19:33.454Z
2023-11-30T00:00:00.000
{ "year": 2024, "sha1": "6b8aaba77d2f370ef45f0b9a1d06335c815ba8a2", "oa_license": "CCBYNCND", "oa_url": "https://amjcaserep.com/download/inPress/idArt/941518", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f5395cbff3db0958420a8f6a7c520423ad0d77f7", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
137847254
pes2o/s2orc
v3-fos-license
Micromechanical analysis on anisotropy of structured magneto-rheological elastomer

This paper investigates the equivalent elastic modulus of a structured magneto-rheological elastomer (MRE) in the absence of a magnetic field. We assume that both the matrix and the ferromagnetic particles are linear elastic materials, and that the ferromagnetic particles are embedded in the matrix in a layer-like structure. The structured composite can be divided into a matrix layer and a reinforced layer, in which the reinforced layer is composed of the matrix and the ferromagnetic particles homogeneously distributed within it. The equivalent elastic modulus of the reinforced layer is analysed by the Mori-Tanaka method. A Finite Element Method (FEM) analysis is also carried out to illustrate the relationship between the elastic modulus and the volume fraction of ferromagnetic particles. The results show that the anisotropy of the elastic modulus becomes noticeable as the volume fraction of particles increases.

Introduction

A magneto-rheological elastomer is comprised of a polymer matrix and micron-scale ferromagnetic particles inside it. The ferromagnetic particles are magnetized in the presence of a magnetic field, and the MRE exhibits the macroscopic behavior that its elastic modulus alters with the magnetic field. Compared with magneto-rheological fluids, the merits of MREs are their controllable stiffness, their chemical and physical stability, and their freedom from sealing problems, which make them attractive in semi-active vibration control [1,2].

The elastic modulus of an MRE can be decomposed into the magneto-induced modulus and the zero-field modulus in the absence of a magnetic field. The relations among the magneto-induced modulus, the ferromagnetic particle fraction, and the magnetic field strength have been investigated extensively [3]. In comparison with an MRE whose particles are uniformly distributed, an MRE with aligned particles has a higher magneto-induced modulus [4]. Hence, it is very common to prepare MREs in which the magnetic particles are aligned orderly by applying a magnetic field during the curing process, resulting in anisotropic MREs. However, research on the zero-field modulus is still inadequate: in previous work only an empirical equation was used to evaluate the zero-field modulus of MREs [5,6], and the particle distribution was barely taken into consideration, which leads to untrustworthy results. Many models have been proposed in recent years to help understand the equivalent material properties.

Given that the particles in the reinforced layer are seamlessly embedded in the matrix and are uniformly dispersed, the reinforced layer can be regarded as a two-phase composite. The volume average over a unit cell is denoted by $\langle \cdot \rangle = \frac{1}{V} \int_{V} (\cdot)\, dV$. Inside the reinforced layer there are two phases, the matrix phase and the inclusion phase, and the average stress tensor $\langle \sigma \rangle$ and the average strain tensor $\langle \varepsilon \rangle$ of each unit cell satisfy the following equations:
$$\langle \sigma \rangle = C\, \langle \varepsilon \rangle, \qquad (1)$$
$$\langle \sigma \rangle_r = C_r\, \langle \varepsilon \rangle_r, \qquad (2)$$
where $C$ is the equivalent elastic tensor of the unit cell and $r$ stands for the $r$-th phase of the material. According to the definition of the volume average and the structure of the reinforced layer, the average stress tensor of a unit cell can also be written as equation (3), the volume-weighted sum of the average stress of each phase:
$$\langle \sigma \rangle = \sum_r c_r\, \langle \sigma \rangle_r, \qquad (3)$$
where $V$ is the total volume, $V_r$ is the volume of phase $r$, and $c_r = V_r / V$ is the volume fraction that phase $r$ accounts for. The local relation for each phase is
$$\langle \varepsilon \rangle_r = B_r\, \varepsilon^{0}, \qquad (4)$$
where $\varepsilon^{0}$ is the uniform strain on the unit cell boundary and $B_r$ is the local (strain concentration) tensor.
Equations (1)-(4) reduce to the equivalent elastic modulus:
$$C = \sum_r c_r\, C_r\, B_r. \qquad (5)$$
According to the Mori-Tanaka method, the strain around a given inclusion is taken to be the average strain of the matrix, and the average strain of the ferromagnetic particles is expressed as
$$\langle \varepsilon \rangle_1 = \left[ I + P\, \Delta C \right]^{-1} \langle \varepsilon \rangle_0, \qquad (6)$$
where $I$ is the fourth-rank identity tensor and $\Delta C = C_1 - C_0$ is the difference between the elastic tensor of the inclusions, $C_1$, and that of the matrix, $C_0$. The tensor $P$ depends on the inclusion shape. When the inclusions are spherical, equation (7) takes the form
$$P = \frac{\alpha}{3 K_0}\, J + \frac{\beta}{2 G_0}\, K, \qquad \alpha = \frac{3 K_0}{3 K_0 + 4 G_0}, \qquad \beta = \frac{6 (K_0 + 2 G_0)}{5 (3 K_0 + 4 G_0)}, \qquad (7)$$
where $G_0$, $K_0$ are the shear modulus and bulk modulus of the matrix phase, respectively, $J_{ijkl} = \frac{1}{3} \delta_{ij} \delta_{kl}$, $K = I - J$, and $\delta$ is the Kronecker tensor. By considering equations (6) and (7), the equivalent modulus of the reinforced layer satisfies
$$C = \left[ \sum_r c_r\, C_r \left( I + P_r\, \Delta C_r \right)^{-1} \right] \left[ \sum_r c_r \left( I + P_r\, \Delta C_r \right)^{-1} \right]^{-1}, \qquad (8)$$
where $P_r$ is the $P$ tensor of phase $r$. Substituting equation (7) into equation (8), one concludes that the reinforced layer is isotropic, with equivalent bulk modulus and equivalent shear modulus
$$K_{eq} = K_0 + \frac{c_1 (K_1 - K_0)}{1 + \alpha (1 - c_1)(K_1 - K_0)/K_0}, \qquad (9)$$
$$G_{eq} = G_0 + \frac{c_1 (G_1 - G_0)}{1 + \beta (1 - c_1)(G_1 - G_0)/G_0}. \qquad (10)$$

Equivalent modulus of MRE

The reinforced layer and the matrix layer are made of isotropic materials. Assuming that the MRE is transversely isotropic, the five independent elastic constants are described by equations (12)-(16) below.

Stretching the MRE uniaxially along the x or z axis with displacement $u$, the resultant force generated by the two layers is
$$F = \sigma_r A_r + \sigma_m A_m, \qquad (11)$$
where $A_r$ is the cross-sectional area of the reinforced layer, $A_m$ is the cross-sectional area of the matrix layer, $\sigma_r$ is the normal stress in the reinforced layer, and $\sigma_m$ is the normal stress in the matrix layer. Considering that the two layers have the same normal strain $\varepsilon = u/L$, equation (11) gives the vertical Young's moduli of the MRE, $E_1$ (along x) and $E_3$ (along z):
$$E_1 = E_3 = c\, E_r + (1 - c)\, E_m, \qquad (12)$$
where $c = A_r / (A_r + A_m)$ represents the volume fraction of the reinforced layer. Similarly, stretching the MRE along the y axis, and assuming that both layers carry the same normal stress, the horizontal Young's modulus is obtained:
$$\frac{1}{E_2} = \frac{c}{E_r} + \frac{1 - c}{E_m}. \qquad (13)$$
The Poisson coefficient governs the transverse deformation when the material undergoes a uniaxial stretch. Assuming that the material is stretched along a direction perpendicular to the y axis, the total transverse deformation of the elementary cell is the sum of the deformations of the two elementary layers. Hence the Poisson's ratio is
$$\nu = c\, \nu_r + (1 - c)\, \nu_m. \qquad (14)$$
For the shear moduli: when the shear acts in the x-y plane, the shear stress in the reinforced layer and in the matrix layer is the same, so the x-y shear modulus is
$$\frac{1}{G_{12}} = \frac{c}{G_r} + \frac{1 - c}{G_m}. \qquad (15)$$
When the shear acts in the x-z plane, the reinforced layer and the matrix layer have the same shear strain, so the x-z shear modulus is
$$G_{13} = c\, G_r + (1 - c)\, G_m. \qquad (16)$$

FEM simulation

To further evaluate the theoretical model, the FEM is employed to obtain the elastic modulus of the structured MRE. The model of a unit cell is shown in Figure 3. When the composite is stretched along the i direction, a fixed constraint is applied on $S_{efgh}$, $S_{dfgc}$ and $S_{adef}$, and the boundary condition on $S_{abcd}$ is a prescribed displacement $u$ in the i direction; then the equivalent Young's modulus follows from
$$E_i = \frac{F_i / A_i}{u / L_i}, \qquad (17)$$
where $F_i$ is the resultant reaction force and $A_i$, $L_i$ are the cross-sectional area and the length of the cell in the i direction. When the composite is subjected to a shear force in the i-j plane, the equivalent shear modulus of the MRE satisfies equation (19),
$$G_{ij} = \frac{\tau_{ij}}{\gamma_{ij}}. \qquad (19)$$
In summary, by varying the volume fraction and the layer distribution in the finite element model of the composite element, the stress and strain of the composite element model under different loading conditions are acquired, and the equivalent elastic moduli are then derived from equations (17)-(19). Figure 4(a) shows how $E/E_m$ varies with the particle volume fraction. Both $E_1$ and $E_2$ increase with the particle volume fraction.
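The two-step homogenization described in this section (Mori-Tanaka for the particle-filled layer, then Voigt/Reuss-type mixing of the two layers) is easy to script. The sketch below uses the standard isotropic Mori-Tanaka expressions for spherical inclusions, equivalent to equations (9)-(10); the matrix and particle moduli and the layer fraction are placeholder values, not the paper's inputs.

```python
# Sketch: Mori-Tanaka moduli of the reinforced layer, then Voigt/Reuss
# mixing of the reinforced and matrix layers, as in equations (9)-(16).
def kg_from_e_nu(e, nu):
    """Bulk and shear modulus from Young's modulus and Poisson's ratio."""
    return e / (3 * (1 - 2 * nu)), e / (2 * (1 + nu))

def mori_tanaka_kg(k0, g0, k1, g1, f):
    """Effective K, G for spherical inclusions at volume fraction f."""
    alpha = 3 * k0 / (3 * k0 + 4 * g0)
    beta = 6 * (k0 + 2 * g0) / (5 * (3 * k0 + 4 * g0))
    k = k0 + f * (k1 - k0) / (1 + alpha * (1 - f) * (k1 - k0) / k0)
    g = g0 + f * (g1 - g0) / (1 + beta * (1 - f) * (g1 - g0) / g0)
    return k, g

def e_from_kg(k, g):
    return 9 * k * g / (3 * k + g)

k0, g0 = kg_from_e_nu(1.0e6, 0.49)    # rubber-like matrix (Pa), assumed
k1, g1 = kg_from_e_nu(200.0e9, 0.30)  # iron particles (Pa), assumed
kr, gr = mori_tanaka_kg(k0, g0, k1, g1, f=0.3)
e_r, e_m = e_from_kg(kr, gr), 1.0e6

c = 0.5                                   # layer volume fraction, assumed
e_parallel = c * e_r + (1 - c) * e_m      # loading along the layers (Voigt)
e_perp = 1.0 / (c / e_r + (1 - c) / e_m)  # loading across the layers (Reuss)
print(e_parallel, e_perp)                 # anisotropy grows with f
```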
Because the present theoretical model does not consider the interaction among neighboring particles, the theoretical solution for the equivalent Young's modulus is lower than the FEM simulation results. However, both methods show the equivalent Young's modulus varying with the particle volume fraction in the same trend. Figure 4(b) describes the relationship between the shear modulus and the particle volume fraction. The shear moduli $G_{12}$ and $G_{13}$ show the same increasing trend as the elastic modulus. In the shearing condition, the spacing between particles is fixed, which reduces the interaction among particles; hence, the theoretical solutions for the shear modulus agree well with the FEM solutions. Figure 4(c) illustrates the difference between the Young's modulus $E_1$ in the x direction and the Young's modulus $E_2$ in the y direction. The result indicates that the elastic modulus along the reinforced direction increases significantly as the particle proportion increases.

Conclusions

- At the same particle volume fraction, the structured MRE has an enhanced Young's modulus along the x axis and a decreased Young's modulus along the y axis.
- As the particle percentage increases, the anisotropy of the MRE becomes more noticeable.
- The solutions for the equivalent shear modulus match well with the finite element solutions. Both solutions for the equivalent Young's modulus show the same trend, but errors remain, and the theoretical solutions are slightly lower.
2019-04-29T13:13:07.971Z
2015-07-16T00:00:00.000
{ "year": 2015, "sha1": "2a33c075c61b17e8a54e5bdb6c136b1a7a2a8f92", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/87/1/012068", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "decdbd2e0afed36b553fae3043e4e77b9bc3f32a", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
230596580
pes2o/s2orc
v3-fos-license
Fuzzy Association of an Intrinsically Disordered Protein with Acidic Membranes

Many physiological and pathophysiological processes, including Mycobacterium tuberculosis (Mtb) cell division, may involve fuzzy membrane association by proteins via intrinsically disordered regions. The fuzziness is extreme when the conformation and pose of the bound protein and the composition of the proximal lipids are all highly dynamic. Here, we tackled the challenge of characterizing the extreme fuzzy membrane association of the disordered, cytoplasmic N-terminal region (NT) of ChiZ, an Mtb divisome protein, by combining solution and solid-state NMR spectroscopy and molecular dynamics simulations. While membrane-associated NT does not gain any secondary structure, its interactions with lipids are not random, but formed largely by Arg residues predominantly in the second, conserved half of the NT sequence. As NT frolics on the membrane, lipids quickly redistribute, with acidic lipids, relative to zwitterionic lipids, preferentially taking up Arg-proximal positions. The asymmetric engagement of NT arises partly from competition between acidic lipids and acidic residues, all in the first half of NT, for Arg interactions. This asymmetry is accentuated by membrane insertion of the downstream transmembrane helix. This type of semispecific molecular recognition may be a general mechanism by which disordered proteins target membranes.

■ INTRODUCTION

Upon binding to their partners, intrinsically disordered proteins span a continuum in the extent of order, from fully folded to partially ordered to fully disordered. The complexes in which disordered proteins remain disordered are termed "fuzzy". The fuzziness reaches an extreme when the partners are another disordered protein or nucleic acid and both subunits remain fully disordered. 1−5 A third class of partners for disordered proteins comprises membranes. 6−9 In a well-characterized case, membrane association of α-synuclein is accompanied by the formation of amphipathic α-helices. 9 A large fraction of transmembrane and peripheral membrane proteins contain disordered regions, 10 but there is little knowledge on any extreme fuzzy complexes with membranes. Here, we tackle the challenge of characterizing the extreme fuzzy membrane association of the disordered cytoplasmic N-terminal region of the transmembrane protein ChiZ, a member of the Mycobacterium tuberculosis (Mtb) divisome complex, by combining solution and solid-state NMR spectroscopy with molecular dynamics (MD) simulations.

Many disordered proteins are enriched in charged residues, 11 and interactions between oppositely charged residues are crucial features of extreme fuzzy complexes between disordered proteins. 1,2 Likewise, the interactions between basic residues of proteins and acidic phosphate groups of nucleic acids are crucial for their high-affinity, fuzzy association. 3−5 The inner leaflet of the plasma membrane is highly acidic due to the asymmetric distribution of charged lipids, including phosphatidylserine, phosphatidylinositol, and the latter's phosphorylated variants, 12 and thus forms a target for polybasic proteins, including signaling molecules. 7 The Mtb inner membrane contains an abundance of acidic lipids, with phosphatidylglycerol, cardiolipin, phosphatidylinositol, and phosphatidylinositol mannosides present at roughly a 7:3 ratio to the neutral phosphatidylethanolamine (based on the composition in Mycobacterium smegmatis, a nonpathogenic model 13 ).
This acidic surface provides ample opportunities for association by ChiZ and other Mtb divisome proteins with disordered cytoplasmic regions that are enriched in basic residues (Figure 1a and Figure S1).

Very few fuzzy complexes between disordered proteins and membranes have been characterized at the residue level. The most intensely studied protein in this regard is α-synuclein, which forms amphipathic α-helices in the first 100 residues upon membrane association. 9 α-Synuclein preferentially binds to vesicles containing acidic lipids, 14 but membrane curvature also plays an important role. A disease-associated charge reversal, E46K, strengthened membrane binding but weakened selectivity for membrane curvature. 15 Conversely, increasing negative charges in the C-terminal tail weakened membrane association but enhanced curvature selectivity. 16 The entire 100 residues apparently do not bind to the same vesicle all the time; while the first 30 or so residues stably bind to a vesicle, the remaining segments can dissociate and even bind to a different vesicle, leading to vesicle clustering. 17 Even when the 100 residues were membrane-bound, MD simulations showed significant conformational heterogeneity for α-synuclein, although the helices remained intact. 18 By contrast, no information is available for how a basic region of the Wiskott−Aldrich syndrome protein interacts with acidic lipids of the plasma membrane, even though the fuzzy interaction activates this protein for stimulating Arp2/3-mediated initiation of actin polymerization. 6 Likewise, the disordered intracellular region of the prolactin receptor, known to interact with inner leaflet-specific lipids via conserved basic clusters and hydrophobic motifs, 8 was modeled without considering membrane association due to lack of information. 19

Here, we report residue-level characterization of the fuzzy association of the ChiZ 64-residue N-terminal region (NT) with acidic membranes. In full-length ChiZ (ChiZ-FL), NT is followed by a 21-residue transmembrane helix; on the periplasmic side, a C-terminal LysM domain (residues 113−165) is connected to the transmembrane helix by a 26-residue linker (Figure 1). In a previous study, 20 we showed that, in solution, the NT-only construct ChiZ1-64 is fully disordered without detectable α-helix or β-sheet formation, but with polyproline II (PPII) formation and intramolecular interactions including salt bridges concentrated in the first half of the sequence. Here, we investigated NT-membrane association by solution NMR in the context of ChiZ1-64 and by solid-state NMR on both ChiZ1-64 and ChiZ-FL. In addition, extensive MD simulations of these two constructs and ChiZ1-86 associating with membranes (Figure 1b) provided atomistic details of the association.

To probe membrane association, 1H−15N HSQC spectra of ChiZ1-64 were acquired in the absence and presence of liposomes of different lipid compositions (Figure 2c). These spectra show that ChiZ1-64 does not associate significantly with the neutral POPC and DOPC:DOPE membranes, or even with a membrane containing 20% acidic lipids. In contrast, the HSQC spectrum in the presence of 7:3 POPG:POPE liposomes, which mimic the charge composition of Mtb membranes, 13 shows that the crosspeaks of most of the residues are broadened beyond detection (Figure 2d). Of the remaining crosspeaks, based on the overlap with the counterparts in unbound ChiZ1-64, assignments could be made for Gly58, Ser61, Arg62, and Val64, all located at the C-terminus. Due to slight shifts, the few other crosspeaks could not be unambiguously assigned, but appear to be N-terminal residues, including Met1, Thr2, His8, Thr9, and Asn14 as well as possibly Gln31.
So, the HSQC spectra demonstrate that ChiZ1-64 associates with membranes containing 70% acidic lipids, but residues at the two termini remain free.

Membrane Association Is Fuzzy But There Is a Hint of a Subpopulation with a Stable Binding Motif

The solution NMR HSQC experiment is useful for indicating membrane association, but the loss of crosspeaks precludes further characterization of the association. We thus turned to magic-angle spinning (MAS) 13C solid-state NMR experiments: insensitive nuclei enhanced by polarization transfer (INEPT) and cross-polarization (CP). The former is sensitive to dynamic sites, whereas the latter is sensitive to static sites. The INEPT spectrum of ChiZ1-64 bound to POPG:POPE liposomes shows an abundance of crosspeaks (Figure 3a, black contours), indicating that most of the residues remain highly dynamic, and hence, the membrane association is extremely fuzzy. In fact, the dynamics apparently rival those in unbound ChiZ1-64 and result in the same, undispersed chemical shifts for a given pair of carbon−carbon sites (e.g., Arg Cβ-Cδ) at different positions along the amino-acid sequence. This spectral overlap is a strong indication that ChiZ1-64 does not fold upon membrane association and allowed the assignment of INEPT crosspeaks to types of carbon−carbon sites but not to specific residue positions. Both in unbound ChiZ1-64 20 and in the membrane-bound state, the Arg Cα−Cβ region shows two crosspeaks, at (56.3, 30.8) and (54.0, 30.4) ppm (Figure S2). We were able to recognize that, in unbound ChiZ1-64, these two crosspeaks were assigned to Arg residues with one distinction: whether the succeeding residue along the sequence is a Pro. We refer to these two groups of residues as RP Arg and non-RP Arg, respectively. The (56.3, 30.8) crosspeak belongs to nine non-RP Arg residues, whereas the (54.0, 30.4) crosspeak belongs to the four RP Arg residues: Arg5, Arg34, Arg39, and Arg62.

Corroborating the INEPT result that most residues in POPG:POPE-bound ChiZ1-64 are dynamic, the CP spectrum shows only a few crosspeaks (Figure 3a, red contours). They largely overlap with crosspeaks in the INEPT spectrum and accordingly can be assigned to Arg Cα−Cβ and Cγ−Cδ, Pro Cγ−Cδ, Ala Cα−Cβ, and Leu Cα−Cβ. Interestingly, Arg Cα−Cβ appears as a single crosspeak in the CP spectrum (Figure 3b, red contours); its overlap with the non-RP crosspeak in the INEPT spectrum suggests that this most prominent CP crosspeak comes from one or more non-RP Arg residues. Rather than appearing at isolated positions along the sequence, it is far more likely that the apparently static residues detected by CP form a contiguous stretch for overall stability. There is only a single such stretch, A43PLR46, and the Arg involved is indeed non-RP. The CP experiment thus hints at a stable motif, A43PLR46, that may form in a subpopulation of POPG:POPE-bound ChiZ1-64. Our MD simulations sampled a structure for this putative stable binding motif (Figure S3).

We further performed INEPT on ChiZ-FL reconstituted into POPG:POPE liposomes to determine whether NT remained dynamic when the protein was tethered to the membrane via the transmembrane helix. The INEPT spectra of ChiZ1-64 (black contours) and ChiZ-FL (red contours) essentially overlap (Figure 3c), showing that, in the context of the full-length protein, NT also does not fold upon membrane association. However, one clear distinction emerges for the Arg Cα−Cβ crosspeaks (Figure 3d).
Whereas ChiZ1-64 Arg Cα−Cβ has both an RP crosspeak at (54.0, 30.4) and a non-RP crosspeak at (56.3, 30.8) (red contours), only the latter crosspeak is observed in the ChiZ-FL INEPT spectrum. The disappearance of the RP crosspeak means that the corresponding Arg residues (more precisely, their Cα and Cβ atoms) become more static upon membrane insertion of the transmembrane helix. Of the four RP Arg residues, rigidification is expected for Arg62, which is right next to the transmembrane helix in ChiZ-FL. The most N-terminal Arg residue, Arg5, could become static because the N-terminal His-tag present in ChiZ-FL (but absent in ChiZ1-64) might attach to the membrane. 21 That still leaves two RP residues, Arg34 and Arg39, in the midsection unaccounted for. As the data from the next experiment indicate, even in the ChiZ1-64 construct, these two residues, along with other midsection Arg residues, interact with lipids, and the resulting loss in dynamics potentially prevented their detection by INEPT; but the loss in dynamics was incomplete, so Arg34 and Arg39 were not detectable by CP either. Upon membrane insertion of the transmembrane helix, Arg34 and Arg39 in ChiZ-FL may interact more strongly with lipids and further lose some dynamics (see below), thereby evading detection by INEPT.

Arg Residues Engage in Direct Interactions with Lipid Headgroups

We used paramagnetic relaxation enhancement to identify NT residues that interact with membranes. By doping liposomes with lipids chelating the paramagnetic ion Gd3+ (Figure 4a), neighboring ChiZ nuclei would relax much faster due to increased dipolar interactions with the spin label, resulting in line broadening and loss of signal intensity. One-dimensional 13C direct-excitation spectra show that resonances of Arg side-chain carbons experience significant intensity loss in the presence of Gd3+-chelated lipids, while other resonances are largely unaffected (Figure 4b, c). This observation applies to both ChiZ1-64 bound to POPG:POPE liposomes and ChiZ-FL reconstituted into these liposomes and reveals that Arg residues are the major players in mediating NT association with membranes.

To characterize NT-lipid interactions in more detail, we investigated paramagnetic relaxation enhancement in reconstituted ChiZ-FL by 1H−13C correlation experiments with INEPT magnetization transfer. We took advantage of the spectral overlap between the solid-state INEPT and solution HSQC spectra (Figure S4) and assigned the INEPT crosspeaks to types of carbon sites (e.g., Val Cγ; Figure 4d). In a few cases, assignment could be made to specific residues, either because there was only a single residue of a given type (Asn14, Glu28, or Gln31) in NT, or because it was the only NT residue of a given type (Ala43) that preceded a Pro. A comparison of the 1H−13C correlation spectra between ChiZ-FL samples without (black contours) and with (red contours) the Gd3+ spin label provides a global picture of the NT residues that are in contact with lipid headgroups. An immediate observation is that NT experiences a general loss in 1H−13C signals in the presence of Gd3+. As the relaxation enhancement effect of the spin label may reach protons as far as 20 to 25 Å away, we interpret the general loss in signal as an indication that the spin label senses the entire NT sequence.
In other words, when ChiZ-FL is reconstituted into POPG:POPE liposomes, no portion of NT appears to be persistently dissociated from the membrane. In the aliphatic region of the 1H−13C correlation spectra, upon adding the spin label, Arg Cδ sites experience the strongest loss of intensity. In addition, the His Cβ crosspeak disappears altogether. The considerable intensity loss for Arg side chains indicates direct interaction with lipids; the signal disappearance of His side chains likely can be attributed to membrane attachment of the N-terminal His-tag. Similar effects of the spin label on Arg and His residues are also observed in the Cα region of the spectra.

Together, the data from the different NMR experiments indicate that Arg residues away from the NT termini are the major mediators of the association with acidic membranes. The association is extremely fuzzy as NT remains highly dynamic and does not fold, apart from some hint for a subpopulation with A43PLR46 as a stable binding motif.

NT Is Anchored to Membranes by Arg Residues in the Midsection

As is clear from the foregoing presentation, our MD simulations were crucial in the interpretation of the NMR data. More importantly, the simulations reveal atomistic details about the extreme fuzzy membrane association of NT, which we now describe. As a first step, we calculated the probabilities that individual NT residues in the three ChiZ constructs are in contact with POPG:POPE membranes (i.e., <3.5 Å between heavy atoms; Figure 5a, b). We denote the membrane-contact probability of residue i by Ci. In ChiZ1-64, the residues that contact membranes with relatively high probabilities (i.e., Ci > 0.25, indicated by a horizontal dashed line in Figure 5b) are all Arg residues, in accord with the paramagnetic relaxation enhancement data in Figure 4b. There are nine such Arg residues, namely Arg23, Arg26, Arg33, Arg34, Arg37, Arg39, Arg46, Arg49, and Arg56. In complete agreement with the 1H−15N HSQC spectra of ChiZ1-64 reported in Figure 2d, the extreme N- and C-terminal residues do not frequently form contacts with POPG:POPE membranes. Indeed, except for Arg56, the frequent-contact Arg residues are limited to the midsection of NT, with Arg37 having the highest contact probability at 60%. Furthermore, the distribution of the frequent-contact Arg residues along the sequence gives the first indication that the two halves of NT (denoted N-half and C-half) are not equal in membrane association, with C-half playing a more prominent role. We will further explore this asymmetry below. A representative snapshot illustrating the membrane anchoring of NT by midsection Arg residues is shown in Figure 5c.

Comparing the membrane-contact probabilities of ChiZ1-64 with those of the longer constructs (Figure 5b), the most obvious effect of membrane tethering of the NT C-terminus is the near 100% contact probabilities of the three most C-terminal residues, R62PV64. The effect of the membrane tethering is apparent up to residue Thr50, and small increases in membrane-contact probabilities are seen all the way to the start of C-half. These changes accentuate the asymmetry between the two halves of NT in membrane association.

Figure 5. Membrane-contact probabilities of NT residues. (a) Contact status of individual residues in snapshots along a 1.9-μs molecular dynamics trajectory of ChiZ1-64. Green bars and blanks indicate that a residue either is or is not in contact with the membrane. (b) Membrane-contact probabilities of NT residues in the three constructs.
The shaded bands represent standard deviations among the snapshots analyzed. The extreme N-terminal residues that show high membrane-contact probabilities in ChiZ1-86 and ChiZ-FL are from two MD trajectories where Met1 was started as nearly embedded in the headgroup region, mimicking in a small way potential membrane attachment of the N-terminal His-tag; Met1 eventually dissociated from the membrane. For these two constructs, residues 49−56 penetrated into the membrane in two trajectories. These events led to relatively large standard deviations in membrane-contact probability. (c) A snapshot of ChiZ1-64 at 1.56 μs from the same trajectory as in (a), illustrating the membrane anchoring of NT by Arg residues in the midsection.

Additional evidence below will show that the effect of the membrane tethering even propagates into N-half. The resulting further loss in dynamics for Arg34 and Arg39 in ChiZ-FL explains why they, along with Arg5 and Arg62, are not detectable by INEPT (Figure 3b). Lastly, we note that while the membrane-contact probabilities of NT residues are very similar between ChiZ1-86 and ChiZ-FL, there are subtle differences. A majority (11 out of 16) of the frequent-contact residues have slightly higher contact probabilities in ChiZ-FL than in ChiZ1-86 (Figure S5a). This difference will also be further addressed below.

Competition between Acidic Residues and POPG Contributes to Asymmetry between the Two Halves of NT in Membrane Association

The scant involvement in membrane association by Arg residues in ChiZ1-64 N-half stands in contrast to their deep involvement in intramolecular salt bridges when ChiZ1-64 is unbound 20 (Figure S6). The latter result has been explained by the fact that the salt-bridge partners, i.e., acidic residues (Asp11, Asp20, and Glu28), are all in N-half. Apparently, acidic residues and acidic lipids compete for interactions with Arg residues; when Arg residues (in particular, in N-half) engage in intramolecular interactions with acidic residues, they lose the ability to engage in intermolecular interactions with POPG lipids. Indeed, with the partners being either POPG lipids or acidic residues, the profiles of hydrogen bonding probabilities of Arg residues are mirror images of each other, with POPG lipids favored by C-half residues and acidic residues favored by N-half residues (Figure 6a, b).

Expectedly, the probabilities that Arg residues hydrogen bond with POPG lipids (Figure 6a) track closely the corresponding membrane-contact probabilities (Figure 5b). Indeed, these two sets of data are highly correlated, with a slope of approximately 0.62 (Figure S7). In other words, each time an Arg residue comes into contact with membranes, there is a roughly 2/3 chance that it forms hydrogen bonds with POPG lipids, indicating that Arg-POPG hydrogen bonds are the main driving force for membrane association. Non-Arg residues in ChiZ1-64 have minimal probability for hydrogen bonding with POPG (Figure S8a). Seven of the nine Arg residues that most frequently hydrogen bond with POPG lipids are in C-half. In contrast, Arg residues that frequently hydrogen bond with acidic residues are all in N-half (Figure 6b). The most prevalent of these Arg residues are Arg5 and Arg25. The prevalence of Arg5 can be attributed to its proximity to Asp11 along the sequence, while that of Arg25 to its proximity to both Asp20 and Glu28.
The frequent hydrogen bonding with Asp20 and Glu28 explains why Arg25 has lower probabilities than both of its neighbors, Arg23 and Arg26, for hydrogen bonding with POPG lipids and for membrane contact. Compared to unbound ChiZ1-64 (Figure S6), Arg5 and Arg16 near the N-terminus have increased probabilities of hydrogen bonding with acidic residues upon membrane association, but Arg23, Arg26, and Arg33 have reduced probabilities of hydrogen bonding with acidic residues, showing that, for these latter Arg residues, acidic residues lose their competition against POPG lipids.

Besides the acidic POPG, Arg residues can also hydrogen bond with the zwitterionic POPE, though with much lower probabilities (Figure 6c). Even after compensating for the fact that POPE is at a lower mole fraction in the membranes, Arg residues are still 1.5 to 2.0 times less likely to hydrogen bond with POPE than with POPG (Figure 6d). On average, 2.5 NT Arg residues in ChiZ1-64 hydrogen bond with POPG lipids at each moment. This number increases to 3.2 in ChiZ1-86 and 3.0 in ChiZ-FL, mostly from C-half Arg residues starting at position 37 (Figure 6a). In comparison, the average numbers of NT Arg residues that hydrogen bond with POPE lipids at each moment, after scaling up by a factor of 7/3, are only 1.3, 1.6, and 2.0, respectively, in ChiZ1-64, ChiZ1-86, and ChiZ-FL. Therefore, POPG lipids preferentially distribute around the membrane-associated NT (see Figure 5c). Such preferential distribution of acidic lipids around basic groups of membrane-associated proteins has been observed in previous MD simulation studies. 22,23 On average, each Arg residue engages with 1.1 to 1.2 POPG lipids in its hydrogen bonding. The average numbers of NT Arg residues that hydrogen bond with acidic residues range from 0.52 to 0.43 in the three ChiZ constructs, slightly less than the counterpart, 0.62, in unbound ChiZ1-64.

The two halves of unbound ChiZ1-64 are asymmetric not only in salt-bridge formation but also in PPII propensity (there are very low propensities for helices and β-strands; Figure S9). 20 Three PPII stretches form with high probabilities (>50%), all in N-half: V4RP6, P10DP12, and A27EP29. In C-half, residues that sample the PPII region with the highest probabilities are P44L45 at 35% and S38R39 at 32%. In agreement with the NMR data, ChiZ1-64 does not gain any secondary structure upon membrane association (Figure S9). In fact, while N-half largely preserves its PPII probabilities upon membrane association, P44L45 in C-half suffers a modest reduction in its PPII probability, down to 31%. In ChiZ1-86 and ChiZ-FL, this probability further deteriorates to 26 and 25%, respectively. Similar losses in PPII probability are also seen for S38R39. So, NT sacrifices PPII formation in C-half to gain stability in membrane association.

The asymmetry in NT's membrane association is dramatically illustrated by one of the ChiZ1-64 simulation runs (Movie S1). In this run, ChiZ1-64 initially binds to one leaflet via N-half. After only 20 ns, it dissociates but then quickly reassociates at 120 ns with another leaflet, this time via C-half. The association is stable for the rest of the 1.9-μs simulation. Apart from this brief episode in ChiZ1-64, NT in each of the three constructs is associated with membranes essentially all the time.
When membrane contact is broken into N- and C-halves, we further find that C-half is membrane-bound constantly, whereas N-half is membrane-bound approximately 71% of the time in each of the three constructs. That both halves of NT spend at least 70% of the time on POPG:POPE membranes explains why the Gd3+ spin label senses the entire NT sequence (Figure 4d).

Both Transmembrane Helix and LysM Domain Contribute, Directly or Allosterically, to NT-Membrane Association

Several characteristics of NT-membrane association have emerged from the foregoing analyses of MD simulations. The association is largely maintained by Arg-POPG hydrogen bonding. For ChiZ1-64, these Arg residues are mostly located in the midsection of the sequence, but there is also an asymmetry that favors C-half. This intrinsic asymmetry is partly due to competition between acidic residues, all in N-half, and POPG lipids for interactions with Arg residues, and partly due to high PPII propensities in N-half. This asymmetry is accentuated by the membrane tethering of the NT C-terminus via the transmembrane helix. As illustrated by Movie S2 for ChiZ1-64 and Movie S3 for ChiZ-FL, NT-membrane association is highly dynamic. At each given moment, several Arg residues hydrogen bond with the membranes, but the identities of the Arg residues rapidly change (Figure 5a). As NT changes its conformation and hydrogen bond donors, the lipid acceptors, primarily POPG, also adapt to surround the Arg donors. At a given moment, the numbers of NT residues in contact with membranes are 7.0 ± 1.4, 10.9 ± 1.9, and 11.1 ± 1.4, respectively, in ChiZ1-64, ChiZ1-86, and ChiZ-FL; of these, 74, 82, and 83% are in C-half.

To gain a deeper sense of which residues contact membranes at the same time, we calculated the probability, Cij, that two residues, i and j, contact membranes simultaneously. Figure 7a−c displays the Cij networks of the three ChiZ constructs as graphs, where circular nodes (with radii proportional to Ci) represent residues with Ci > 0.25, and edge widths represent Cij (with a Cij threshold of 0.20). It is clear that, relative to the contact network of ChiZ1-64, the counterparts of ChiZ1-86 and ChiZ-FL are much more connected, with strong connections extending into N-half. The strengthened network connectivity of the longer constructs arises largely from the higher membrane-contact probabilities of the C-half residues (Figure 5b), which in turn can be attributed to the membrane insertion of the transmembrane helix. This is the basis of the assertion made above that the effect of membrane tethering propagates all the way into N-half. The direct effect of the membrane-contact probabilities can be removed by normalizing the co-occurrence probability: Ĉij ≡ Cij/(CiCj), where CiCj is the expected probability that residues i and j would contact membranes at the same time by chance. A Ĉij that is greater than 1 indicates correlation between the two residues, and hence we refer to the Ĉij − 1 network as the contact correlation network. The contact correlation networks no longer show a clear-cut difference in connectivity, among the nine residues common to all three constructs, between ChiZ1-86 or ChiZ-FL and ChiZ1-64 (Figure S10a−c). On the other hand, closer inspection reveals that the network connectivity of ChiZ-FL is stronger than that of ChiZ1-86, in line with the slightly but consistently higher membrane-contact probabilities of ChiZ-FL shown in Figure S5a.
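As a minimal sketch of how such a contact network can be assembled (assuming precomputed arrays Ci and Cij as defined above; this illustrates the construction just described, not the authors' in-house script):

```python
# Sketch: assemble the membrane-contact network of Figure 7 from a
# per-residue contact probability array Ci and a pairwise co-occurrence
# probability matrix Cij; thresholds mirror the description above.
import numpy as np
import networkx as nx

def contact_network(Ci, Cij, ci_min=0.25, cij_min=0.20):
    G = nx.Graph()
    for i, ci in enumerate(Ci):
        if ci > ci_min:
            G.add_node(i, Ci=ci)                    # node radius ~ C_i
    for i in G.nodes:
        for j in G.nodes:
            if j > i and Cij[i, j] > cij_min:
                G.add_edge(i, j, weight=Cij[i, j])  # edge width ~ C_ij
    return G

def contact_correlation(Ci, Cij):
    """Normalized co-occurrence Chat_ij = Cij/(Ci*Cj); values above 1
    indicate correlated contacts (undefined where Ci = 0)."""
    return Cij / np.outer(Ci, Ci) - 1.0

# Weighted node degree, summing C_ij over all partner residues j:
# d = dict(contact_network(Ci, Cij).degree(weight="weight"))
```

The weighted node degree computed this way corresponds to the quantity d_i compared between constructs next.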
This difference is made clearer by comparing the degree di of each node, defined as the sum of Cij over all partner residues j, between ChiZ-FL and ChiZ1-86 (Figure S5b). Of the 16 frequent-contact residues, 13 have higher di in ChiZ-FL than in ChiZ1-86. As shown by the contact correlation networks (Figure S10b,c), membrane-contact residues in ChiZ-FL also have a higher level of correlation than in ChiZ1-86.

The stronger network connectivity of ChiZ-FL reveals that the periplasmic linker and LysM domain also contribute to the stability of NT-membrane association. Periplasmic residues only occasionally contact membranes (Figure S8b) and thus do not influence NT's membrane association through their own membrane association on the opposite leaflet. Instead, we found that the positioning and tilting of the transmembrane helix are affected by the presence of the periplasmic linker and LysM domain (Figure S11a,b). In ChiZ-FL, the helix shifts toward the periplasmic side by approximately 1 Å, and the helix tilt samples a narrow range of angles. These make the transmembrane helix more deeply (from NT's perspective) and more stably inserted in the membrane. By these changes in the transmembrane helix, the periplasmic linker and LysM domain allosterically strengthen NT-membrane association. Lastly, we display a snapshot from the MD simulations of ChiZ-FL in Figure 7d to illustrate the extreme fuzzy membrane association of NT in the full-length protein.

■ DISCUSSION

By combining solution and solid-state NMR spectroscopy with molecular dynamics simulations, we have characterized the extreme fuzzy membrane association of the disordered N-terminal region of ChiZ. The association is largely driven by hydrogen bonding between Arg residues and acidic POPG lipids. Not only the conformation of NT but also the residues that contact the membrane at a given moment are highly dynamic. As NT frolics on the membrane, lipids quickly redistribute, with the acidic POPG lipids preferentially taking up Arg-proximal positions. We refer to membrane association represented by the disordered NT as "semispecific", to be contrasted with specific binding between a protein and a macromolecular partner, with a defined interface, and nonspecific binding of proteins at high concentrations, where there is no clear demarcation between a bound state and an unbound state. Membrane association of the disordered NT is also distinct from that of folded domains such as C2 domains in synaptotagmin-1, which have one or more defined membrane-binding sites. 23 For these reasons, ChiZ NT-membrane association represents a new paradigm of biomolecular binding. Other disordered proteins that engage in semispecific membrane association include α-synuclein 9 and the Wiskott−Aldrich syndrome protein. 6

The term "semispecific" is also fitting in the sense that NT-membrane association has mixed random and nonrandom characteristics, similar to fuzzy association between two disordered proteins. 1,2 While the random aspect is obvious from the highly dynamic nature of bound NT (see, e.g., Movies S2 and S3), the nonrandom aspect is also worth emphasizing. First, as already noted, it is largely Arg residues that drive the association. Second, for ChiZ1-64, the association-driving Arg residues are located in the midsection of the sequence. Third, the NT sequence codes for asymmetry between the two halves in membrane association.
N-half contains all the acidic residues (which compete with POPG lipids for Arg interactions) and has high PPII propensities. N-half is therefore more recalcitrant while C-half is more adaptive to membrane association. Fourth, the intrinsic asymmetry between the two halves of NT is accentuated when its C-terminus is tethered to membranes via the subsequent transmembrane helix.

Interestingly, NTs of ChiZ homologues in Mycobacterium species have a very conserved C-half, with 6−8 Arg residues (plus a rare Lys residue) and no acidic residues (other than a rare Asp), and a very variable N-half containing all the acidic residues (Figure S12). The characteristics of NT-membrane association determined here for Mtb ChiZ thus largely apply to other Mycobacterium species, and the conservation of the features important for membrane association argues for a functional role of membrane association. Based on the foregoing information on ChiZ NTs, we may speculate that 6−8 Arg residues, minimally interrupted by acidic residues and distributed in a sequence of 30 or so amino acids, may be required for stable fuzzy association with highly acidic membranes. Of course, not all acidic lipids are alike. Although we used POPG as a representative of acidic lipids, the actual composition of the M. smegmatis inner membrane is approximately 35% cardiolipin, 35% phosphatidylinositol, and 30% phosphatidylethanolamine. 13 Our preliminary results from MD simulations of ChiZ1-64 binding to a membrane with this composition closely track those reported for POPG:POPE membranes (Figure S13). However, lipids with higher negative charges, in particular phosphatidylinositol 4,5-bisphosphate, may have increased propensities for interacting with polybasic proteins, and hence the number of Arg residues required for extreme fuzzy association might be reduced. Additional disordered membrane proteins need to be studied before we can establish the sequence requirements.

For specific binding between two structured domains, the dogma is that sequence codes for structure, which in turn codes for specificity; but for fuzzy binding of intrinsically disordered regions, including semispecific membrane association of ChiZ NT and others, how sequence codes for binding specificity is still an open question. Contrary to α-synuclein and other disordered proteins that associate with membranes through amphipathic helices, ChiZ NT does not gain any secondary structure upon membrane association (apart from some hint for an A43PLR46 binding motif in a subpopulation). In the former cases, a mechanism to code for binding specificity is through amino-acid patterning that favors amphipathic-helix formation, i.e., by positive design, as exemplified by the KTKEGV motifs in α-synuclein. 9 As illustrated by the exclusion of acidic residues in C-half, the specificity of ChiZ NT-membrane association appears to be achieved partly by negative design. As found in our previous study, 20 the NT sequence codes for correlated segments, mostly in N-half, that are stabilized by salt bridges, cation−π interactions, and high PPII propensities. Just as we speculated previously, these correlated segments lead to the recalcitrance of N-half toward membrane association. Conversely, lack of strongly correlated segments in C-half allows it to be more adaptive to membrane association. Due to reduced dimensionality, membrane association increases the chances that proteins interact with each other.
A main function of ChiZ is to halt cell division via overexpression under DNA damage conditions. 24 Overexpression may present ChiZ at a level where NTs of different copies come into contact at the membrane. The work presented here, characterizing the conformations and dynamics of membrane-bound NT in a single copy of ChiZ, lays a solid foundation for understanding interactions between multiple NTs as well as interactions of ChiZ NT with membrane-bound disordered regions of partner proteins, including FtsI and FtsQ 25 (Figure S1).

■ MATERIALS AND METHODS

Protein Expression and Purification

Expression and purification of ChiZ1-64 was performed as previously described. 20 13C−15N labeled ChiZ-FL containing a noncleavable N-terminal 6× His-tag was expressed in Escherichia coli BL21 Codon Plus RP competent cells. Cells were grown at 37 °C in LB media until the OD at 600 nm reached 0.7. Cells were pelleted and transferred to M9 media containing 1 g of 15N-ammonium chloride and 2 g of uniformly 13C-labeled glucose (Cambridge Isotope Laboratories). After transfer, cells were incubated at 37 °C for 30 min before adding IPTG to a final concentration of 0.4 mM to induce protein expression for 5 h. Cells were then pelleted and resuspended in a lysis buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl) for cell lysis using a French press. n-Dodecylphosphocholine (DPC; Anatrace) was added to the lysate to a final concentration of 2% (wt/vol) and then incubated overnight at 4 °C with agitation. Cell lysate was centrifuged at 250 000g for 30 min. Protein purification was performed using Ni-NTA resin (Qiagen) equilibrated with the lysis buffer containing 20 mM imidazole. The column was washed using the lysis buffer containing 0.5% (wt/vol) DPC and 60 mM imidazole. Protein was eluted with the same buffer but containing 400 mM imidazole.

Reconstitution of ChiZ-FL into Liposomes

ChiZ-FL samples in MAS solid-state NMR experiments were reconstituted into POPG:POPE (7:3) liposomes at a protein to lipid molar ratio of 1:80. Methyl-β-cyclodextrin (MβCD; Sigma-Aldrich) was used to remove the DPC detergent from the protein−detergent−lipid mixture. Specifically, POPG and POPE lipids in chloroform were mixed, and the solvent was removed using a nitrogen stream and extensive vacuum. Lipid films were resuspended in 20 mM Tris-HCl (pH 8.0) and sonicated. DPC was added until the solution became clear. Then, ChiZ-FL was added, and the mixture was incubated for 1 h at room temperature. To remove DPC, a solution of MβCD in 20 mM Tris-HCl (pH 8.0) was added to the protein−detergent−lipid mixture at a DPC to MβCD molar ratio of 1:1.5. Proteoliposomes were collected by centrifugation at 250 000g for 3 h at 8 °C. The pellet was resuspended in 20 mM Tris-HCl (pH 8.0), and an MβCD solution containing 10% of the previous level was added to remove residual detergent. Proteoliposomes were finally collected by centrifugation at 100 000 rpm in a TLA-100 rotor at 8 °C for 16 h and washed with 20 mM Tris-HCl (pH 8.0) at least twice. ChiZ-FL proteoliposomes were packed into a 3.2 mm MAS rotor for solid-state NMR experiments. Samples for paramagnetic relaxation enhancement experiments were doped with 1% PE-DTPA-Gd.

NMR Spectroscopy

Solution NMR experiments of ChiZ1-64 mixed with liposomes were performed in 20 mM sodium phosphate (pH 7.0) containing 25 mM NaCl, 50 μM sodium trimethylsilylpropanesulfonate (DSS; NMR standard), and 10% D2O.
1H−15N and 1H−13C heteronuclear single quantum coherence (HSQC) spectra were collected at 25 °C on an 800 MHz NMR spectrometer equipped with a cryoprobe. Chemical shift assignments of ChiZ1-64 have been reported previously (BMRB accession # 50115). 20 MAS solid-state NMR experiments of reconstituted ChiZ-FL and liposome-bound ChiZ1-64 were performed at 25 °C on a 600 MHz NMR spectrometer equipped with a Low-E MAS probe at a spinning rate of 12.2 kHz. The glycine carbonyl carbon, with a chemical shift frequency of 178.4 ppm, was used as the 13C chemical shift reference. One-dimensional 13C direct-excitation spectra were collected using a 13C 90° pulse of 62.5 kHz and proton decoupling at 75 kHz using the SPINAL64 decoupling sequence. 13C−13C (and 1H−13C) correlation spectra using cross-polarization (CP) and INEPT-based pulse sequences were collected using the same proton and carbon frequencies as for the one-dimensional experiments. For CP-based experiments, the PARIS pulse sequence was used. 26

Molecular Dynamics Simulations

Three ChiZ constructs (Figure 1) were modeled and simulated: (i) ChiZ1-64 bound to a 7:3 POPG:POPE bilayer; (ii) ChiZ1-86 with the 22-residue transmembrane helix inserted in a 7:3 POPG:POPE bilayer and NT bound to the inner leaflet; and (iii) ChiZ-FL, which extended the ChiZ1-86 system by the periplasmic linker and LysM domain. The simulations of the three systems consisted of 20, 20, and 16 replicate trajectories, respectively; the production lengths of these trajectories were 1.9, 1.8, and 1.29 μs, respectively. The production simulations were preceded by preparatory simulations. The force field combination was AMBER14SB 27 for proteins, TIP4P-D 28 for solvent (water plus ions), and Lipid17 29 for membranes.

The membrane-bound ChiZ1-64 simulations were prepared starting from nine ChiZ1-64 models selected from the simulations of the unbound system. 20 A membrane plus solvent system (220 lipids per leaflet with POPG and POPE at a 7:3 ratio) was built using the CHARMM-GUI server. 30 The output was converted to AMBER-formatted coordinate and topology files using the charmmlipid2amber.py script and tleap in AmberTools17. 31 Upon aligning N-half of ChiZ1-64 to the inner leaflet of the bilayer, ChiZ1-64 was inserted into the system using PARMED with clashing solvent removed. Using tleap, neutralizing ions plus 25 mM NaCl were added, and the combined system was built into AMBER topology. The final system size was 122 × 122 × 140 Å with 261 493 atoms. Preparatory simulations starting from the nine ChiZ1-64 models were run in NAMD 2.12 32 with AMBER topology. Energy minimization (10 000 cycles of conjugate gradient) was followed by the six-step CHARMM-GUI equilibration protocol 30 with gradually decreasing restraints on the protein and lipids. Bond lengths involving hydrogens were constrained by the SHAKE algorithm. 33 The time step was 1 fs in the first four steps of the six-step protocol but 2 fs in the last two. The durations of the six steps were 25, 25, 25, 200, 200, and 2000 ps. van der Waals interactions were force-switched starting at 10 Å and cut off at 12 Å. The same cutoff was used for calculating short-range electrostatic interactions; long-range electrostatic interactions were treated by the particle mesh Ewald method. 34 The first three steps were under constant temperature (300 K) and volume, whereas the last three were under constant temperature and pressure (1.0 atm).
Temperature was regulated by the Langevin thermostat with a friction coefficient of 1.0 ps−1; pressure was regulated by the Langevin piston 35 with an oscillation period of 50.0 fs and a decay of 25.0 fs. Here, and below, whenever pressure was regulated, semi-isotropic scaling in the x−y plane was applied to maintain a constant ratio of the two dimensions, with no added surface tension. Following the six-step equilibration, the nine simulations continued under constant temperature and pressure for 40 ns. A total of 20 snapshots, i.e., the nine at the start and the nine at the end of the 40 ns simulations, plus two in between, were restarted to run AMBER production simulations for 1.9 μs on GPUs (see below for further details).

ChiZ1-86 models were built using MODELER 36 with residues 65−86 modeled as a helix. Ten models were selected for insertion into a POPG:POPE bilayer (at a 7:3 ratio with a total of 220 lipids per leaflet) using CHARMM-GUI, with 25 mM NaCl and neutralizing ions added. The final system size was 135 × 135 × 251 Å with 446 510 atoms. Preparatory simulations of the 10 ChiZ1-86 models were the same as for ChiZ1-64 with the following exceptions: (i) pressure was regulated by the Monte Carlo barostat; (ii) the durations of the last three steps of the equilibration were 100 ps each; (iii) the subsequent NAMD run was replaced by an AMBER GPU simulation of 1 ns. The 10 final snapshots were each restarted with two random seeds to run AMBER production simulations for 1.8 μs on GPUs.

ChiZ-FL models were built using MODELER by combining eight homology models of the LysM domain (residues 113−165) from SWISS-MODEL 37 with eight of the ChiZ1-86 starting models. The rest of the ChiZ-FL preparation was the same as for ChiZ1-86. The system contained 300 lipids per leaflet (with POPG and POPE at a 7:3 ratio) with a size of 147 × 149 × 235 Å and 522 825 atoms. Each of the eight final snapshots in the preparatory simulations was restarted with two random seeds to run AMBER production simulations for 1.29 μs on GPUs.

Production simulations were on GPUs using pmemd.cuda 38 in AMBER18. Temperature was held at 300 K using the Langevin thermostat with a friction coefficient of 1.0 ps−1. Pressure was held at 1.0 atm using the Berendsen barostat. 39 For van der Waals interactions, the force-switch distance was 9 Å and the cutoff was 11 Å. The latter was also used for dividing direct calculation of electrostatic interactions from the particle mesh Ewald treatment. Bond lengths involving hydrogens were constrained by the SHAKE algorithm. The time step was 2 fs. Snapshots were saved every 10 ps in the ChiZ1-64 simulations and every 20 ps in the ChiZ1-86 and ChiZ-FL simulations. The first 2000 saved snapshots for each system were discarded.

MD Trajectory Analyses

Heavy-atom contacts, hydrogen bonds, distances along the z axis, and secondary structures were calculated with cpptraj. 40 Further analyses and plotting were performed using in-house python scripts. A protein heavy atom and a lipid heavy atom were considered in contact if they were within 3.5 Å. Hydrogen bonds were defined as formed when the donor−acceptor distance was less than 3.5 Å and the donor−hydrogen−acceptor angle was greater than 135°. The membrane-contact probability Ci and the probability Cij that two residues contact membranes at the same time were calculated after pooling all the saved snapshots of each system.
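A minimal sketch of these criteria and the pooled probabilities (illustrative only; it assumes distances and angles have already been extracted from the trajectories, whereas the actual analysis used cpptraj and in-house scripts):

```python
# Sketch: contact and hydrogen-bond criteria stated above, plus the
# pooled per-residue contact probability C_i and pairwise co-occurrence
# C_ij. Assumptions: distances in angstroms, angles in degrees;
# `res_lipid_dists` holds heavy-atom distances between one residue and
# all lipid heavy atoms in one snapshot.
import numpy as np

CONTACT_CUTOFF = 3.5   # angstrom, protein-lipid heavy-atom contact
HBOND_DIST = 3.5       # angstrom, donor-acceptor distance
HBOND_ANGLE = 135.0    # degrees, donor-hydrogen-acceptor angle

def residue_contacts_membrane(res_lipid_dists):
    """True if any residue heavy atom is within 3.5 A of a lipid heavy atom."""
    return bool((np.asarray(res_lipid_dists) < CONTACT_CUTOFF).any())

def is_hydrogen_bond(d_da, angle_dha):
    return (d_da < HBOND_DIST) and (angle_dha > HBOND_ANGLE)

def pooled_contact_probabilities(contacts):
    """contacts: boolean (n_snapshots, n_residues), pooled over all
    trajectories of one system. Returns C_i and the matrix C_ij."""
    X = contacts.astype(float)
    return X.mean(axis=0), (X.T @ X) / X.shape[0]
```

The resulting Ci and Cij arrays are what feed the network construction sketched earlier.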
From Ci, Cij, and Cij/(CiCj) − 1, the python module networkx was used to build the membrane contact networks and the contact correlation networks. The SHIFTX2 41 software was used to calculate the chemical shifts of all atoms on snapshots taken at 200 ps intervals. The seaborn plotting module in python3 was used to create the violin plots. Images of structures were rendered using ChimeraX, 42 and movies were composed using Blender. 43

■ ASSOCIATED CONTENT

Supporting Information
2020-12-10T09:02:02.487Z
2020-12-09T00:00:00.000
{ "year": 2020, "sha1": "e6d981a8afd5eda8fe23512f53e32b3176e9752f", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/jacsau.0c00039", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1120557f7e17512c6d739f24e4db66b3d36a1de1", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
22912704
pes2o/s2orc
v3-fos-license
Repurposing of the anti-malaria drug chloroquine for Zika Virus treatment and prophylaxis

One of the major challenges of the current Zika virus (ZIKV) epidemic is to prevent congenital foetal abnormalities, including microcephaly, following ZIKV infection of pregnant women. Given the urgent need for ZIKV prophylaxis and treatment, repurposing of approved drugs appears to be a viable and immediate solution. We demonstrate that the common anti-malaria drug chloroquine (CQ) extends the lifespan of ZIKV-infected interferon signalling-deficient AG129 mice. However, the severity of ZIKV infection in these mice precludes the study of foetal (vertical) viral transmission. Here, we show that interferon signalling-competent SJL mice support chronic ZIKV infection. Infected dams and sires are both able to transmit ZIKV to the offspring, making this an ideal model for in vivo validation of compounds shown to suppress ZIKV in cell culture. Administration of CQ to ZIKV-infected pregnant SJL mice during mid-late gestation significantly attenuated vertical transmission, reducing the ZIKV load in the foetal brain more than 20-fold. Given the limited side effects of CQ, its lack of contraindications in pregnant women, and its worldwide availability and low cost, we suggest that CQ could be considered for the treatment and prophylaxis of ZIKV.

We first show that CQ attenuates disease in AG129 mice infected with Brazilian strain ZIKV (ZIKVBR, Brazil-ZKV2015), a common preclinical model for ZIKV research. However, the severity of disease precludes the use of AG129 mice for the investigation of vertical ZIKV transmission. To develop a suitable model for this purpose, we used SJL mice, which have a normal IFN signalling response and have previously been used for the study of ZIKV pathogenesis 4. We found not only that SJL mice support chronic ZIKVBR infection but also that the virus can be transmitted vertically, making this a more relevant model of ZIKV infection in humans 14. Notably, administration of CQ to pregnant SJL mice during mid-late gestation markedly reduced ZIKVBR infection in the foetal brain. Collectively, our data suggest that CQ could be effectively and readily employed for the treatment and prophylaxis of ZIKV infection in humans.

CQ protects human neural progenitors from ZIKV infection.

Human foetal NPCs are the major target of ZIKV in the developing brain 5,15. To examine the effect of CQ in vitro, we infected monolayer cultures of primary human foetal NPCs with ZIKVBR (Brazilian strain ZKV2015) and cultured them in the absence or presence of up to 40 μM CQ. Consistent with work by others 12, we found that CQ efficiently (90% inhibition at 6 µM) reduced ZIKVBR infection of primary human foetal NPCs. To mimic ZIKVBR infection in the context of the 3-dimensional architecture of the developing human brain, we examined neurospheres derived from human iPSCs (Fig. 1a). We found that CQ treatment reduced both the percentage of ZIKVBR-positive cells (Fig. 1b) and the level of apoptosis in the neurospheres, with an IC50 of ~10 μM (Fig. 1c and d).

CQ attenuates acute ZIKV-induced mortality in AG129 mice.

To corroborate the in vitro findings, we first examined AG129 mice, which lack receptors for type I (α/β) and type II (γ) IFNs and have previously been used to model ZIKV infection 16,17. To test the prophylactic effects of CQ, mice were administered 50 mg/kg/day CQ in drinking water for 2 days and then infected with ZIKVBR (2 × 10³ PFU retro-orbitally).
CQ treatment was continued at the same dose for 5 days and then at 5 mg/kg/day until the end of the experiment. Control mice received drinking water alone. We observed that CQ extended the average lifespan of ZIKV-infected AG129 mice to 15 days (p < 0.01, log-rank Mantel-Cox test; Fig. 2a) and significantly attenuated ZIKV-induced weight loss (p < 0.01, unpaired t-test with Welch's correction; Fig. 2b). Overall animal health was assessed using a modified 6-point scoring system 15, which showed that CQ-treated mice remained in good health and survived for longer than the vehicle-treated mice (80% vs 0% of animals alive on day 13, respectively) (Fig. 2c and d). These results indicate that CQ attenuated disease severity in ZIKV-infected AG129 mice, which is considered the most severe model of ZIKV infection 16.

SJL mice support chronic infection with ZIKV.

Mice deficient in IFN response genes, such as single knockout (Ifnar1) A129, double knockout (Ifnar1, Ifnar2) AG129, and triple knockout (Irf3, Irf5, Irf7) TKO mice 16, succumb to ZIKV within a few days of infection, making it difficult to investigate vertical transmission of ZIKV in such an aggressive disease model. Therefore, we explored the SJL mouse model, which we have previously used to study foetal transmission with high doses of ZIKV 4. SJL males and females at 3 months of age were infected with ZIKVBR (1 × 10⁸ PFU retro-orbitally), and circulating ZIKV RNA levels were analysed by qRT-PCR over the following 50 days. Our qRT-PCR assay is only 10-fold less sensitive than the laborious and time-consuming plaque-forming unit assay; using qRT-PCR, we could detect ZIKV levels as low as 10 plaque-forming units per sample. We did not detect any ZIKV in samples obtained from uninfected control mice. We found that the viral titres fluctuated over time in both males and females, ranging from 5 × 10³ to 4 × 10⁵ genome copies/µg total RNA. However, the mean titres were maintained between 10⁴ and 10⁵ genome copies/µg total RNA (Fig. 3a and b). Previous work has shown that ZIKV inoculation of wild-type C57BL/6 mice treated with a single dose of IFNAR1-blocking monoclonal antibody leads to infection of and damage to the testes 18. We therefore investigated ZIKV titres in the testes of chronically infected SJL mice (3 months post-infection) and found readily detectable levels (10³-10⁴ ZIKV genome copies/µg testis RNA) (Fig. 3c). Collectively, these data indicate that SJL males and females support sustained ZIKV infection and display no signs of morbidity at 3 months post-infection. The mice therefore represent a physiologically relevant model for studying paternal and maternal vertical transmission of ZIKVBR.

Vertical and horizontal transmission of ZIKV in SJL mice.

We examined horizontal transmission by infecting 3-month-old SJL males and females with ZIKVBR (10⁸ PFU retro-orbitally) and allowing them to mate with uninfected mice of the opposite sex. After 14 days, the uninfected males were separated and bled, and circulating viral titres were measured by qRT-PCR. To avoid stress during pregnancy, dams were allowed to deliver and were bled the next day, and circulating viral titres were measured by qRT-PCR. Interestingly, we observed efficient transmission of ZIKV from infected males to females but not vice versa (Fig. 4a). This is strikingly similar to the mode of horizontal transmission in humans, where female-to-male transmission is relatively rare [19][20][21].
Because we could not detect any ZIKV transmission from infected female mice to uninfected males, we concluded that other routes of transmission, such as via saliva or ocular secretions, are insignificant. Our previous study investigated foetal development in SJL females directly infected with 4 × 10¹⁰ PFU/ml of ZIKVBR on E12.5 4. Here, we investigated whether the virus could be transmitted vertically from ZIKV-infected dams and sires to their offspring through the natural breeding process. To this end, 3-month-old female and male SJL mice were infected with ZIKVBR (10⁸ PFU retro-orbitally) and immediately allowed to breed with uninfected mice of the opposite sex. After regular delivery, the 1-day-old pups were euthanized and tissue samples were analysed for viral RNA by qRT-PCR. We found that all dams efficiently transmitted ZIKV to their pups (Fig. 4b). Notably, transmission from the infected sires to their pups occurred in fewer animals and was less efficient (Fig. 4b), possibly reflecting variable ZIKV titres in the semen and variations in the levels of sexual transmission of ZIKV from infected sires to dams. The molecular and cellular mechanisms of ZIKV infection during pregnancy are poorly understood 22, and such knowledge is critical for the development of treatments to limit ZIKV infection during pregnancy. Our results thus demonstrate that SJL males and females can transmit ZIKV vertically through the natural mating process and thus represent a unique physiological mouse model for testing drugs that could suppress vertical viral transmission.

CQ suppresses vertical transmission of ZIKV.

Next, we examined the effect of CQ treatment on vertical transmission of ZIKV in SJL mice using our previously published protocol 4. Pregnant SJL mice (2-3 months of age) were infected with ZIKVBR (2 × 10⁵ PFU retro-orbitally) on day E12.5. This dose is sufficient to cause a robust ZIKV infection in SJL mice. Infected dams were provided with CQ (30 mg/kg/day in drinking water) starting on day E13.5 and were euthanized on E18.5, at which point maternal blood and foetal brain samples were collected and analysed by qRT-PCR (Fig. 5a). This lower dosage of CQ (30 mg/kg/day) was specifically used to protect pregnant mice from potential negative effects of the drug. We found that treatment with CQ reduced the ZIKV titre ~20-fold in both maternal blood (Fig. 5b) and foetal brain (Fig. 5c). To confirm these results, whole embryos were immunostained with an anti-ZIKV envelope protein antibody. Consistent with the qRT-PCR data, this analysis revealed a significant reduction in ZIKV immunostaining intensity in the foetuses of CQ-treated pregnant mice compared with the untreated mice (Fig. 5d and e).

Discussion

CQ has been used worldwide for more than half a century for anti-malaria prophylaxis and therapy without evidence of foetal harm [23][24][25][26]. CQ can cross the placental barrier and would be expected to reach similar concentrations in maternal and foetal plasma 27. The side effects of CQ have been thoroughly evaluated in a malaria prophylaxis study (400 mg/week), which found no increase in the incidence of birth defects 11. High CQ concentrations (up to 500 mg/day) were administered to pregnant women with severe lupus or rheumatoid arthritis. Although a few instances of spontaneous abortion were observed (likely a consequence of the disease itself), all term deliveries resulted in normal healthy newborns 28, suggesting that high doses of CQ do not interfere with foetal development in humans.
The dosages of CQ we employed in our study were comparable to or significantly lower than the acceptable and widely used dosages in humans. Studies in rodent models have found that brain concentrations of hydroxychloroquine (a CQ analogue) are 4-30 times higher than plasma concentrations [29], suggesting that it has a favourable pharmacokinetic profile for inhibition of ZIKV infection in NPCs. In arthritis patients, plasma CQ concentrations reached 10 µM after daily administration of 5 mg/kg/day for a week [30]. Given that the half-life of CQ in humans is approximately 40 days [31], people treated with 5 mg/kg/day CQ for 7 days will accumulate over 30 mg/kg of CQ, which is comparable to the regimen used in our animal studies. CQ treatment can be associated with retinopathy, but the reported threshold dose in humans, 5.1 mg/kg/day [30], lies above the regimen discussed here, thus allowing sufficient accumulation of CQ (see above). Moreover, eye disease was not detected in a study of more than 900 rheumatoid arthritis patients treated with up to 4.0 mg/kg/day CQ for an average of 7 years [30]. Therefore, a level of CQ sufficient to protect SJL mice from ZIKV could be safely built up in the human body within a relatively short, 7-day period and then maintained for many weeks or months with a minimal intake of CQ. The pharmacokinetics of CQ thus make it an excellent candidate for prophylaxis in individuals at high risk of ZIKV infection (e.g., residents of or visitors to ZIKV-endemic areas). Our results demonstrate that CQ effectively reduces ZIKV infection in primary human foetal NPCs and in two mouse models, and that CQ at doses comparable to or less than those broadly used in humans can markedly reduce maternal and foetal infection. ZIKV infects cells through receptor-mediated endocytosis and membrane fusion within acidic endosomes [32]. CQ is thought to affect acidification of the endosomes and thus obstruct fusion of the flaviviral envelope protein with the endosomal membrane [32]. Cellular proteases, including furin, are essential for cleavage of the flaviviral prM during viral egress [33]. This process is pH-dependent, and alterations in the intracellular pH may result in the release of less infectious virions [34]. Clearly, additional studies are required to determine the precise pharmacological mechanism by which CQ counters ZIKV activity.
Neurosphere infection and treatment. Neurospheres were dissociated with Accutase (Thermo Fisher) and counted. For each assay, ~20 neurospheres/condition were treated as follows: uninfected (MOCK in figures), uninfected and treated with DMSO, infected with ZIKV BR at a multiplicity of infection of 1, or ZIKV-infected and treated with CQ at 5, 20, or 40 µM. CQ was added during viral adsorption (1 h at 37 °C). Medium containing the appropriate concentration of CQ was changed after 2 days. At 96 h post-infection, neurospheres were transferred to polyornithine/laminin double-coated plates and maintained for 1 week to initiate neuronal maturation. Medium supplemented with CQ was changed every 2-3 days.
Histology and immunohistochemistry. Mouse embryos obtained on E18.5 were fixed for 48 h in 4% formaldehyde in phosphate-buffered saline (PBS), transferred to sucrose, and embedded in paraffin. Serial sections (5 μm) were cut along the sagittal axis of the embryo. Slides were deparaffinised and rehydrated using xylene and graded ethanol. Antigen retrieval was performed in a pressure cooker at 7.5 psi in 0.1 M Tris-HCl buffer (pH 9.0) for 15 min.
Slides were rinsed with water 6 times at room temperature and washed for 5 min in PBS. Endogenous peroxidase activity was quenched by incubation in 3% hydrogen peroxide in PBS for 30 min at room temperature. Slides were incubated for 16-18 h at 4 °C with a primary anti-Flavivirus Group Antigen antibody (Millipore, #MAB10216) diluted 1:250 in Dako Antibody Diluent with Background Reducing Components (Agilent, #S3022). After rinsing in PBS 3 times for 5 min each, the slides were incubated with a horseradish peroxidase-conjugated goat anti-mouse secondary antibody (Abcam, #ab2891) for 30 min at room temperature. Slides were washed again in PBS, incubated for 3 min with DAB complex (ImmPACT DAB Peroxidase Substrate, Vector Laboratories,
Ultra-high resolution and broadband chip-scale speckle enhanced Fourier-transform spectrometer
Recent advancements in silicon photonics are enabling the development of chip-scale photonic devices for sensing and signal-processing applications. Here we report on a novel passive, chip-scale, hybrid speckle-enhanced discrete Fourier transform (DFT) device that exhibits a two-order-of-magnitude improvement in finesse (bandwidth/resolution) over the current state-of-the-art chip-scale waveguide speckle and Fourier-transform spectrometers reported in the literature. In our proof-of-principle device, we demonstrated a spectral resolution of 140 MHz with a 12-nm bandwidth, for a finesse of $10^4$, operating over a range of 1500-1600 nm. This chip-scale spectrometer implements a typical spatial heterodyne discrete Fourier transform waveguide interferometer network that is enhanced by speckle generated from the wafer substrate. This latter effect, which is extremely simple to invoke, superimposes the high wavelength resolution intrinsic to speckle generated from a high-NA multimode waveguide on the more broadband but lower-resolution DFT modality of the overarching waveguide structure. This hybrid approach signifies a new pathway for realizing chip-scale spectrometers capable of ultra-high resolution and broadband performance.
I. INTRODUCTION
Chip-scale optical spectrometers are envisioned as key elements for next-generation remote sensing systems and precision on-chip wavelength monitoring. A primary example is the development of optical devices for the detection of chemical species on remote platforms [1-6]. For space applications, a compact, high-resolution, and alignment-free spectrometer with no moving parts or exposed surfaces is highly desirable. Recent advances in silicon photonics are enabling the fabrication of such devices using CMOS-compatible commercial foundries and offer significant size, weight, and power (SWaP) advantages, along with cost advantages, over traditional spectrometers. In this Article, we report on a first demonstration of a speckle-enhanced DFT (SDFT) chip-scale spectrometer which combines both modalities on a single chip to achieve a finesse that exceeds the current state-of-the-art performance of either chip-scale spectrometer type by two orders of magnitude. This combination of DFT and speckle modalities yields a device with the broader bandwidth characteristics of the DFT and the higher resolution characteristics of the multimode waveguide (MMW) without increasing the number of structures or the size of the device.
A. Device Layout
Traditional Fourier-transform spectrometers incorporate an unbalanced Michelson interferometer in which the light intensity is monitored at an output port while scanning the relative path-length difference between the two arms. By taking the Fourier transformation of the recorded intensity, the unknown input spectrum is reconstructed. An analogous discrete Fourier transform spectrometer, also referred to as a spatial heterodyne FT spectrometer, implements an array of unbalanced Mach-Zehnder interferometers with predefined relative path-length differences [7-10]. In this manner, no active scanning mechanism or alignment procedures are required, making such devices more robust. Data throughput is potentially improved due to the parallel processing inherent to this approach. The DFT functionality of our SDFT spectrometer is implemented using such spatial heterodyne interferometers on an SOI platform.
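To make the DFT modality concrete, the following toy sketch simulates an MZI-array interferogram of a narrow line and inverts it by regularized least squares. The parameters (64 MZIs, a 50-µm path step, an assumed group index of 4.2) are illustrative stand-ins chosen only to mimic the scale of the device described below, not its design data.

```python
import numpy as np

# Toy spatial-heterodyne DFT spectrometer (illustrative parameters).
N, dL, n_g = 64, 50e-6, 4.2
wl = np.linspace(1545e-9, 1557e-9, 600)                 # wavelength grid, m
dLn = dL * np.arange(N)                                  # MZI path differences
A = 0.5 * (1 + np.cos(2*np.pi*n_g*dLn[:, None] / wl[None, :]))  # MZI responses

S_true = np.exp(-0.5*((wl - 1550e-9) / 0.05e-9)**2)      # narrow test line
P = A @ S_true                                           # 64 recorded outputs
S_rec, *_ = np.linalg.lstsq(A, P, rcond=1e-3)            # regularized inversion
print(wl[np.argmax(S_rec)])                              # peak recovered near 1550 nm
```

With these numbers the free spectral range λ²/(n_g ΔL) is about 11 nm and the N-element resolution λ²/(n_g N ΔL) is about 0.18 nm, the same order as the 12-nm bandwidth and 182-pm DFT-only resolution quoted later.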
The layout for the SDFT spectrometers was developed using an open-source PDK developed by SiEPIC [18] and fabricated using a CMOS-compatible commercial foundry (Applied Nanotools). The Mach-Zehnder interferometers (MZIs) were fabricated using single-etch electron-beam lithography on a 220-nm-thick layer of silicon. Arrays of 64 and 128 MZIs were built using 500-nm-wide single-mode waveguides (which sustain both the TE00 and TM00 modes) with a 50-µm stepwise incremental path-length difference. A schematic diagram of the SDFT spectrometer along with a microscope image of the fabricated device are shown in Fig. 1. The DFT functionality used in this study is enclosed by the red box and consists of 64 MZIs. The MZIs in the array are optically coupled via a cascaded network of 3-dB Y-splitters. The outputs of the MZIs are terminated at the end of the chip, where the light emission is imaged on a camera array. A 2-µm-thick oxide layer is grown on top of the interferometers for protection and to reduce thermal sensitivity. The optical input waveguide is off-centered from the field of view of the camera imaging the output waveguides, to minimize any leaked light propagating on top of the device that would otherwise saturate the detector arrays. The speckle functionality is derived from the 8.5-mm-long, 675-µm-thick silicon handler wafer, which behaves as a strongly guiding planar MMW that sits below the DFT waveguides. This substrate waveguide sustains several hundred thousand optical modes, which yields highly developed speckle at the output of the chip. Light is coupled to the MZIs and the MMW through one end of the chip using a high-NA single-mode fiber (NA = 0.41), and the output of the chip is imaged using a 4x microscope objective and a high-speed InGaAs camera array (Goodrich SU640KTS). Trenches etched along the perimeter of the chip, along with inverse taper structures at the output of the waveguides, facilitate coupling of the input light into the chip [19,20]. Due to a slight mode mismatch between the fiber and the tapered waveguide structure, a fraction of the input light leaks into the substrate MMW. This results in the simultaneous propagation of the optical beam through both the MZIs (DFT spectrometer) and the MMW (speckle spectrometer), thus forming a hybrid SDFT spectrometer.
B. Theory of Operation
The electric-field output of the optical modes propagated through an ideal substrate waveguide of length L can be written as [13]

E(x, y, λ) = Σ_m C_m ψ_m(x, y) exp[i(β_m(λ)L + φ_m)],   (1)

where ψ_m is the spatial profile of the m-th mode, which has initial amplitude C_m and phase φ_m, β_m is its propagation constant, and the field is measured at the (x, y) coordinate of the output facet of the waveguide. A large-width slab waveguide sustains several thousand optical modes, and the interference between those modes results in a speckle pattern. In addition, the propagation constant is wavelength dependent; as a consequence, any change in the input wavelength modifies the output interference pattern, generating a unique wavelength-dependent fingerprint. Similarly, the electric field of the MZI array output is

E(x, y, λ) = Σ_n ψ_n(x, y, λ) [1 + exp(iβ(λ)ΔL_n)],   (2)

where ψ_n(x, y, λ) is the spatial mode distribution at the output of the waveguide and ΔL_n is the relative path-length difference of the n-th MZI. The total output power recorded by a camera at the output facet of the chip can be calculated by taking the modulus square of the sum of the output electric fields from both the MMW and the MZI array.
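A crude numerical illustration of Eq. (1): summing a few thousand randomly weighted modes yields a speckle pattern whose correlation decays as the wavelength is detuned. The mode count, effective-index spread, random "profiles", and pixel grid are stand-ins, not a model of the actual chip.

```python
import numpy as np

# Crude illustration of Eq. (1): random-mode interference gives speckle
# that decorrelates under wavelength detuning (all parameters assumed).
rng = np.random.default_rng(0)
M, L = 2000, 8.5e-3                                  # modes, waveguide length (m)
n_eff = rng.uniform(1.5, 3.4, M)                     # assumed effective indices
C = rng.normal(size=M) + 1j*rng.normal(size=M)       # complex mode amplitudes
psi = rng.normal(size=(M, 512))                      # crude mode profiles on 512 pixels

def intensity(wl):
    beta = 2*np.pi*n_eff / wl                        # propagation constants
    E = (C * np.exp(1j*beta*L)) @ psi                # field at the output facet
    return np.abs(E)**2

I0 = intensity(1550e-9)
for d in (0.0, 30e-12, 60e-12, 120e-12, 300e-12):    # detuning in metres
    print(d, np.corrcoef(I0, intensity(1550e-9 + d))[0, 1])
```

For this assumed index spread and the 8.5-mm length, the correlation falls to one half at a detuning of a few tens of picometres; in the measured device the analogous half-width of the spectral correlation defines the speckle resolution, as quantified below.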
The SDFT device can be effectively treated as a transformation operator A_SDFT(x, y, λ) that maps the input spectral information S(λ) to the output spatially dependent intensity pattern,

P_out(x, y) = ∫ A_SDFT(x, y, λ) S(λ) dλ,

where the transformation operator is the sum of the wavelength-spatial responses from the component MZI arrays and the MMW substrate (speckle), plus any cross term that could arise from interference between modes of both components. An exact first-principles calculation of such a transformation matrix is a complicated task; however, one can experimentally measure the wavelength- and spatial-dependent transmission matrix A_SDFT, or calibration matrix, by tuning a narrow-bandwidth single-frequency laser and recording the intensity pattern with a camera.
III. RESULTS: DEVICE CHARACTERIZATION
A. Calibration
Figure 2(a) is an experimentally recorded image of the output of the SDFT chip. The dashed white box marks an SDFT region consisting of both MZI and speckle output, and the red box marks the speckle-only contribution from the MMW. The intensity distribution of the output of the MMW recorded by the camera is plotted in Fig. 2(b) along with a negative-exponential decay fit, where the x-axis is the intensity normalized by the mean and the y-axis is its distribution. This negative-exponential decay of the intensity pattern is a characteristic of fully developed speckle resulting from the interference of a large number of modes [21]. The regions of the chip corresponding to the SDFT and speckle-only outputs are summed column-wise to generate a 1D pixel array of the calibration matrix at each discrete wavelength step for the corresponding device. Figure 2(c) is an example SDFT calibration matrix recorded from the output of a 64-element SDFT chip over a 100-nm spectral window generated by scanning a narrow-band continuous-wave (CW) laser. For a wavelength-dependent 2D intensity output of the SDFT device see Supplementary 2. Figure 2(d) shows the transmission profiles of two MZIs with ΔL = 0 µm and 50 µm recorded over a 100-nm spectral window; they consist of intensities from both the MZIs and the MMW, where the speckle corrupts the MZI transmission as high-frequency noise, forming a unique wavelength-dependent fingerprint for the combined SDFT spectrometer within the bandwidth of the DFT spectrometer.
FIG. 2. (b) Intensity distribution recorded at the speckle-only region of the chip, fitted with a negative exponential decay, showing a fully developed speckle pattern arising from the multimode interference of the beam propagating through the substrate waveguide [21]. (c) Wavelength-dependent transmission matrix recorded from the output of the spectrometer over a 100-nm spectral window. (d) Transmission profiles of ΔL = 0 µm and 50 µm MZIs, each recorded from a single pixel of the camera by tuning the wavelength over a 100-nm spectral window. The output intensity has the speckle contribution overlapped on it.
When reconstructing a broad spectrum using such a calibration matrix, the speckle contribution of the MMW is minimal (the algorithm averages it out as noise), thus allowing one to accurately reconstruct a broad spectrum. On the other hand, when the reconstruction is performed within a smaller spectral window, within the resolution limit of the DFT-only device, the output of the MZIs acts as a slowly varying DC-like offset, while the contribution of the speckle pattern to the calibration matrix becomes significant.
This behavior allows the algorithm to robustly reconstruct both broad and high-resolution spectra and circumvent the resolution-bandwidth tradeoff. With the knowledge of such a calibration matrix, the spectral content of an unknown light input to the spectrometer can be reconstructed by solving S = A⁺P_out, where A⁺ is the pseudoinverse of matrix A. If the number of measurements is smaller than the number of wavelength points to be reconstructed, the system of linear equations is underconstrained. Such constrained linear systems can be solved using least-squares minimization [22], such as the elastic-net regularization technique [11],

S = argmin_S { ||A S − P_out||_2^2 + l1 ||S||_1 + l2 ||S||_2^2 },

where l1 and l2 are regularization hyperparameters that are appropriately selected depending on the density of the reconstructed spectrum. l2 = 0 gives the well-known lasso regularization used for compressive sensing of sparse signals, and l1 = 0 gives the 2-norm Tikhonov regularization (or ridge regression), appropriate for reconstructing a dense signal. Such regularization techniques allow a robust reconstruction of the input signal from noisy or underconstrained data.
B. Statistical Analysis of the Device
The performance of the SDFT spectrometers and the information content available for reconstructing the spectrum can be studied by performing statistical and linear-algebra analyses of the images recorded at the output of the chip. The 1D intensity values recorded from the SDFT region and the speckle-only region are plotted in Fig. 3(a) and (b). In this example, the relative contribution of the speckle to the MZI-array signal is ~15%. The speckle contribution to the SDFT device can be increased by intentionally misaligning the input fiber such that more light passes through the substrate. The black curve shows the contribution of the background and detector noise. The spectral resolution of the device is set by the wavelength correlation of the output intensity pattern and is given by [13,23]

C(Δλ) = ⟨I(λ, x) I(λ + Δλ, x)⟩ / (⟨I(λ, x)⟩⟨I(λ + Δλ, x)⟩) − 1,

where I(λ, x) is the intensity recorded by the camera at position x for an input optical wavelength λ, averaged over all spatial positions. The spectral resolution (δλ) of the device is set by the correlation width at which the speckle correlation drops to half. Figure 3(c) is the normalized wavelength correlation of the SDFT spectrometer measured by averaging over multiple pixels recorded at the output of the chip. The data show two distinct features: a rapid intensity de-correlation overlaid on a slow de-correlation. The narrow peak circled in the plot is due to the rapidly de-correlating contribution of the speckle, and the slowly varying feature is due to the DFT spectrometer. To further probe the fast de-correlation behavior of the device, we performed a high-resolution spectral scan using a radio-frequency (RF) scan technique developed by Scofield et al. [16]. A CW laser is modulated to suppress the carrier frequency, and the sidebands are scanned in 10-MHz steps using a computer-controlled RF driver. This results in a new calibration matrix that is the sum of two mirror wavelengths detuned equally from the carrier frequency. Using this technique, a frequency-dependent calibration matrix is recorded, where the SDFT matrix is constructed by summing rows of pixels from the SDFT region. The high-resolution normalized correlation matrix measured from the device is plotted in Fig. 3(d). From the data, we obtain the resolution of the combined SDFT device to be 140 MHz, ~160 times better than the DFT-only spectrometer.
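A minimal sketch of the regularized inversion just described, using the ridge (l1 = 0, Tikhonov) limit; the calibration matrix here is a zero-mean random stand-in (as after mean subtraction of a measured calibration), not device data.

```python
import numpy as np

# Ridge (l1 = 0, Tikhonov) reconstruction against a calibration matrix.
rng = np.random.default_rng(1)
n_pix, n_wl = 640, 2000
A = rng.normal(size=(n_pix, n_wl))                   # stand-in calibration matrix
S_true = np.zeros(n_wl); S_true[[700, 703]] = 1.0    # two closely spaced lines
P_out = A @ S_true + 1e-3*rng.normal(size=n_pix)     # noisy measurement

l2 = 1e-2                                            # ridge hyperparameter
S_rec = np.linalg.solve(A.T @ A + l2*np.eye(n_wl), A.T @ P_out)
S_rec = np.clip(S_rec, 0, None)                      # keep the spectrum non-negative
print(np.argsort(S_rec)[-2:])                        # should pick out bins 700 and 703
```

Swapping the quadratic penalty for a mixed l1/l2 one (e.g., scikit-learn's ElasticNet) favors sparse spectra, matching the lasso limit described above.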
FIG. 3. (a) A 1D array of the intensity values recorded at the output of the SDFT region and (b) the speckle-only region. The black curve in (b) shows the contribution of background along with detector noise. (c) Normalized spectral correlation averaged over all pixels generated from the SDFT region of the spectrometer, consisting of both MZI and speckle output. The narrow peak circled in the plot is due to the highly decorrelated speckle pattern, and the slowly varying feature is due to the DFT spectrometer. (d) Averaged intensity correlation data generated from a separate high-resolution RF scan measured by stepping the RF frequency in 10-MHz steps. The decorrelation width of the SDFT spectrometer is 140 MHz, which sets the true resolution of the device. (e,f) Comparison of the singular values for the MZI-only (theory), speckle-only (exp.), and SDFT (exp.) spectrometers and the background noise (exp.). The comparison is shown for (e) a coarse scan over a broad wavelength window and (f) a high-resolution fine scan over a small bandwidth.
The obtained resolution is ~2 times better than the predicted resolution (250 MHz) for an 8.5-mm-long MMW. This further enhancement in the resolution could be due to the additional dispersion from the sub-wavelength-scale rough bottom surface of the wafer. A more detailed comparison of the 2D spatial and frequency correlations, along with the impact of the MMW length and the device structure on the decorrelation width, is given in Supplementary Sections 3-5. Further insight into the information content carried by the SDFT matrix can be gained by performing a singular value decomposition (SVD) analysis of the calibration matrix [15,22,24]. A rectangular calibration matrix (A_SDFT ∈ R^{n×m}), built by scanning m wavelength steps and summing 2D camera pixels column-wise to generate n measurements, can be decomposed as A = UΣV^T. U and V are the left and right eigenbases of the matrix, and Σ is an n × m diagonal matrix containing r non-zero singular values, where r is the rank of the calibration matrix and corresponds to the number of uncorrelated eigenvectors available for signal reconstruction. Singular values are the square roots of the eigenvalues of the A^T_SDFT A_SDFT matrix and are arranged in descending order. The larger singular values capture most of the signal information contained in the calibration matrix, and the values closer to zero simply add noise to the system. To perform comparative studies between the speckle-only, DFT-only, and SDFT spectrometers, we generated calibration matrices for each device by summing rows of pixels from the SDFT region and the speckle-only region as shown in Fig. 2(a). The calibration matrices are normalized to have a unit Frobenius norm so that the relative magnitudes of the singular values can be compared [15]. The DFT-only calibration matrix is numerically simulated [7]. Figure 3(e) compares the singular values of the various devices in the wider-bandwidth regime (10 nm with 0.01-nm steps). A large number of distinctive eigenfunctions with large singular values allows for better signal reconstruction [15,24,25]. The SDFT spectrometer exhibits 64 large singular values, corresponding to the 64 MZI channels that provide the coarse resolution, and an additional ~500 smaller singular values, corresponding to high-frequency eigenfunctions that provide the enhancement in spectral resolution. As the data in Fig. 3(e) indicate, the addition of speckle extends the number of eigenvectors available for signal reconstruction for the SDFT spectrometer (blue curve) over the rank-deficient DFT-only spectrometer (purple curve).
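The SVD diagnostic can be sketched as follows; the two matrices are random/analytic stand-ins meant only to contrast a dense speckle-like calibration with a 64-channel DFT-like one, not measured data.

```python
import numpy as np

# SVD diagnostic sketch: normalize to unit Frobenius norm, then count
# singular values above a noise floor (both matrices are stand-ins).
def usable_modes(A, noise_floor=1e-3):
    A = A / np.linalg.norm(A)                        # unit Frobenius norm, as in [15]
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.count_nonzero(s > noise_floor))

rng = np.random.default_rng(2)
A_speckle = rng.normal(size=(640, 1000))             # dense, speckle-like
t = np.linspace(0, 40, 1000)
A_dft = np.cos(np.outer(np.arange(64), t))           # 64 cosine channels
print(usable_modes(A_speckle), usable_modes(A_dft))  # ~640 vs ~64 usable modes
```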
The extended singular values lie above the background noise level (yellow). This added contribution results in an increase in the resolution of the device over the traditional DFT spectrometer and an increase in bandwidth over the speckle-only spectrometer [15]. Figure 3(f) compares singular values calculated from a calibration matrix recorded over a smaller wavelength region (50 pm) but with finer spectral scan steps (0.08 pm). As can be seen, the speckle calibration matrix has nearly identical properties to the SDFT spectrometer. This is a result of the contribution from the MZI array changing very little within such a narrow wavelength scan window, demonstrating that the addition of the DFT does not degrade the functionality of the MMW speckle. This dual feature indicates that the combined coarse-resolution, higher-bandwidth DFT and high-resolution, low-bandwidth speckle spectrometer enables one to perform both high-resolution and broad-bandwidth spectral reconstruction using a single device. In this study the input signals were reconstructed using the 2D camera output projected to form a 1D array of up to 640 elements. However, for a much denser input signal, the entire 2D pixel array from Fig. 2(a) (~150 × 640) can be used, making all 96,000 spectral channels readily available for spectral reconstruction. In order for the SDFT spectrometer to provide a unique high-resolution fingerprint over a wide wavelength range, the resolution of the MZI array needs to be commensurate with the free spectral range of the MMW. To achieve both high-resolution and broad-bandwidth reconstruction, first a coarse spectrum limited by the resolution of the DFT can be reconstructed using a larger regularization parameter. Once the coarse spectrum is identified, the high-resolution spectrum can be reconstructed using a subset of the calibration matrix limited to the narrow spectral bandwidth, using the above-mentioned signal-processing techniques. A single high-resolution, broad-bandwidth calibration matrix and a single measurement are sufficient for signal reconstruction. The bandwidth of this hybrid device is estimated to be B ~ N/2 × Δλ, where Δλ is the bandwidth of the speckle spectrometer and N is the number of MZIs.
IV. SPECTRUM RECONSTRUCTION
To demonstrate the functionality of the device, a series of two-tone reconstructions was performed using two monochromatic tunable lasers. Two-tone tests were performed by simultaneously sending one fixed and one tunable laser through the spectrometer. A series of intensity patterns at the output of the SDFT region was recorded while scanning the relative detuning between the two input lasers. The laser is scanned in 1-pm steps from 1555 nm to 1557 nm. Using the calibration matrix and the recorded output intensity pattern, we are able to reconstruct the dual-wavelength input spectrum as the tunable laser is stepped across the entire range. A series of spectra reconstructed from the intensity patterns recorded from the SDFT device is plotted in Fig. 4(a), where the y-axis corresponds to the data collected at different wavelengths of the tunable laser. The vertical line represents the fixed-wavelength laser, and the diagonal line represents the tunable laser.
The data are reconstructed using l2 regularization with a small regularization parameter, where the weight of the regularization is directly related to the reconstruction resolution [26]. See the Supplementary figure for a comparison of the reconstruction using other regularization techniques. A detailed analysis of the signal reconstruction technique and the effect of regularization on computational spectrometers is given in Refs. [11,24]. To compare the performance of the device, we theoretically simulated a calibration matrix and intensity pattern generated by a DFT-only spectrometer with comparable MZI parameters. The reconstructed spectrum is plotted in Fig. 4(b) for comparison. As the figure demonstrates, the SDFT spectrometer far outperforms the resolving capacity of the DFT-only spectrometer and is able to resolve two closely spaced spectral lines. Figure 4(c) is a zoomed-in reconstruction in which two lines separated by 3 pm (374 MHz) are resolved by the SDFT spectrometer, far beyond the resolution limit (182 pm, 22.7 GHz) expected from the DFT-only spectrometer. The measured resolution is limited by the laser scan-step jitter. The reconstruction was performed for 2000 spectral steps of the tunable laser at 1 pm per step within a 2-nm (250-GHz) window, plotted in Fig. 4(d). The bandwidth far exceeds that of a speckle-only spectrometer with comparable resolution [17]. To experimentally demonstrate the spectral range of the SDFT device, we repeated the two-tone experiment over 10-nm and 30-nm windows. The calibration matrix of the SDFT device is recorded by summing the pixel arrays from the dashed white region in Fig. 2(a), where the output is an overlap of MZI and speckle modes. To compare the performance of the device with the speckle-only spectrometer, a speckle calibration matrix is generated by recording pixels below the MZI array (dashed red region in Fig. 2(a)), where the multimode speckle data have minimal to no contribution from the MZI output. The two-tone reconstruction experiment is repeated for both systems, with the data acquired simultaneously within the same shot of the measurement. The spectra reconstructed using the SDFT and speckle-only spectrometers are plotted in Fig. 5(a) and (b). As can be seen, the SDFT device is able to reconstruct the spectrum over a larger bandwidth region, whereas the speckle-only device reconstructs multiple false spectra in addition to the true spectra. This is due to the limited number of unique speckle fingerprints available as the input spectral window is increased. A detailed theoretical comparison of the performance of the two devices is given in Supplementary Section 7. To display the spectral range of the SDFT, we collect data over a 30-nm sweep of the tunable laser. As can be seen in Fig. 5(c), the reconstruction pattern repeats after 12 nm, thus indicating the bandwidth of the device.
FIG. 5. Two-tone spectrum reconstructed over a 10-nm window using (a) the hybrid SDFT and (b) the speckle-only device. The speckle data are recorded from below the MZI array, where there is minimal to zero overlap with the MZI output. As the spectrum indicates, the hybrid SDFT device is able to reconstruct the spectrum, whereas the speckle-only device reconstructs multiple false spectral contents. This is due to the limited number of unique speckle patterns as the input spectral window is increased.
(c) Two-tone spectra reconstructed over a 30-nm window, where one laser is fixed near 1556 nm and the second laser is scanned from 1540 to 1570 nm. Due to the limited bandwidth, or free spectral range, of the DFT spectrometer, the algorithm reconstructs multiple false lines that are separated by 12 nm from the true spectrum, corresponding to twice the free spectral range of the DFT component of the SDFT spectrometer. The orange arrows indicate the location of the true input spectra. The red box is a 12-nm window in which only the two true spectra are present; this corresponds to the true bandwidth of the device.
The arrows indicate the true spectral locations of the input lasers, with the enclosed red box containing the true spectra within the free spectral range of the DFT spectrometer (12 nm). The measured bandwidth is twice the predicted bandwidth for a DFT spectrometer [9]. This additional enhancement in bandwidth is attributed to the combination of the speckle and the reconstruction algorithm [15]. All the data reported in this Article were taken in an open-lab setting without precise temperature stabilization of the chip. The test data were typically taken within 30 minutes of the calibration. The device has been tested over an input power range of 1 to 6 mW. The dynamic range of the reconstructed spectrum is measured to be ~12 dB, which is limited by the measured signal-to-noise ratio (SNR) of the output intensity and by noise induced by system instability. This should be partly mitigated by packaging the device to thermally and mechanically isolate it from external perturbations, and the measured signal can be increased by coating the unused surfaces of the chip with highly reflecting mirrors. Speckle generated by multimode fibers is known to be particularly sensitive to small strain or temperature fluctuations [13]. Detailed analyses of the effect of temperature drift on such speckle and DFT spectrometers are reported in Refs. [17,27,28]. The small footprint of the device (~1 cm²) with a shorter path length partially mitigates the thermal and strain issues that a multimode-fiber speckle spectrometer suffers from. In addition, it has been reported that the speckle patterns generated by input light of wavelength λ at temperature T + δT and by input light of wavelength λ + δλ at temperature T are the same [17]. Thus any spectral drift due to a small change in temperature can be compensated by a single correction offset in the reconstructed spectrum. Alternatively, the temperature of the spectrometer can be monitored and controlled by fabricating a metallic heater layer on top of the device.
V. CONCLUSION
In this Article, we demonstrate a novel chip-scale, passive spectrometer that combines discrete Fourier-transform and speckle functionalities in a single device to significantly increase the finesse over individual DFT and speckle-only spectrometers. We demonstrate that the device can resolve two lasers separated by 3 pm and determine the true resolution of the device to be ~1.1 pm (140 MHz) using intensity-correlation measurements. The device has a 12-nm bandwidth within an operational window of 100 nm in the 1500-1600 nm region. The finesse of our device is ~10,000, two orders of magnitude larger than that of individual DFT or speckle spectrometers [7-11, 13, 14]. To achieve the experimentally demonstrated bandwidth and resolution reported in this Article using a DFT-only spectrometer [7,8] would require tens of thousands of MZIs, making it infeasible for a chip-scale device.
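As a quick consistency check of the quoted figures (assuming a 1550-nm center wavelength):

```python
# 140 MHz corresponds to ~1.1 pm at 1550 nm, so a 12-nm bandwidth
# gives a finesse of order 10^4, as claimed.
c = 299_792_458.0                 # speed of light, m/s
wl, d_nu = 1550e-9, 140e6         # assumed wavelength; resolution in Hz
d_wl = wl**2 * d_nu / c           # resolution in wavelength units: ~1.12e-12 m
print(d_wl, 12e-9 / d_wl)         # ~1.1 pm and a finesse of ~1.1e4
```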
Even though speckle-based chip-scale spectrometers can achieve such high resolution, they are severely limited to a small operational bandwidth (0.1 nm) [17]. Competing technologies involve using two different devices with coarse and fine resolutions [13], or using optical switches or spectral multiplexing [29], to achieve high-resolution and large-bandwidth reconstruction simultaneously. This significant improvement in finesse was achieved even though the footprint and bandwidth of our device were not optimized. The device can be trivially extended on the same platform to have a >100-nm bandwidth centered at ~1550 nm by fabricating 128 MZIs with ΔL_min ≤ 1.9 µm. The operational range of the device is set by the wavelength specifications of the Y-splitter and edge coupler, but it can be extended to anywhere within the transparency window of silicon. In addition, by using heterostructures of different materials, the design can be extended to operate over a much wider range of wavelengths of interest.
Exponential Basis in Two-Sided Variational Estimates of Energy for Three-Body Systems
Using the variational method with exponential trial functions, the upper and lower bounds of the energy are calculated for a number of non-relativistic three-body Coulomb and nuclear systems. The formulas for the calculation of the upper and lower bounds with an exponential basis are given; the lower bounds for a large part of the systems were calculated for the first time. By comparing calculations for different bases, the efficiency of exponential trial functions and their universality with respect to the masses of the particles and the interaction are demonstrated. The advantage of the exponential basis is most evident for systems with comparable masses, though its use in one-center and two-center problems is justified too. For an effective solution of the two-center problem, a carcass modification of the trial function is proposed. The stability of various three-particle Coulomb systems is analyzed.
Introduction
Among existing methods of calculation of non-relativistic bound systems, the variational method seems to be the most universal one, as it applies equally well to the solution of atomic and nuclear problems. It is essential that the variational method allows one to find not only the upper (E_U) but also the lower (E_L) estimates of the energy. As to the potential of the method, there are many examples of highly accurate calculations of three- and more-particle systems [1]-[21]. For instance, in the three-body Coulomb problem the precision amounts to as many as twenty decimal places. Of course, in real physical systems the relativistic and other effects lead to corrections to the energy already in the 5th to 7th decimal place; therefore, in practical use, a variational procedure which ensures a reasonable accuracy with the least computational effort may be acceptable.
Historically, the first variational expansion in a three-particle problem was suggested by Hylleraas in perimetric coordinates, in the form of an exponential function multiplied by a polynomial with integer non-negative powers. Later, negative and fractional powers were added [1], [2]; besides, Frankowski and Pekeris [3] introduced logarithmic terms. In what follows this basis is referred to as the 'polynomial' one.
Another possibility is to use a purely exponential basis. It ensures good flexibility of the variational expansion due to the presence of many nonlinear scale parameters. Whereas the Hylleraas basis is oriented in practice toward the solution of one-center Coulomb problems, the exponential basis is good for systems with any masses of particles and types of interaction [4]. Besides, the calculations with an exponential basis are simpler and more uniform, whereas for a polynomial basis they become more and more complicated as the number of terms increases, especially if the logarithmic terms are included.
Instead of exponentials, other non-polynomial functions, Gaussians, can be used. They are fitted no less well than exponentials to systems with any masses of particles; moreover, they are applicable to systems with an arbitrary number of particles. For this basis, all the formulas needed for the calculation of both the upper and lower bounds are given in paper [5], and different 3-, 4-, and 5-particle systems were calculated there. For the upper bound, a generalization to arbitrary orbital momenta is given in article [6]. Nevertheless, our analysis has shown that, at least for three-body variational calculations with a not very high number of parameters, the precision for a Gaussian basis is lower than for an exponential one.
Our principal goal was not to improve existing super-high-precision calculations but to analyze the efficiency of exponential and, partly, Gaussian trial functions for the evaluation of the upper and lower variational bounds. For this purpose, Coulomb and nuclear systems of particles with different masses and types of interaction are considered, and the results of the calculations are compared with those published in the literature.
To facilitate such a comparison, we characterize the accuracy of the calculations of E_U and E_L by the values

δ_U = −lg |E_U − E_0|,   δ_L = −lg |E_L − E_0|,

which determine the number of correct decimal places of E_U and E_L, respectively, E_0 being the exact value of the energy. The universality of the exponential basis with respect to the masses of the particles allows us to analyze the problem of stability of different Coulomb three-particle systems.
Method of calculation
In the three-particle problem it is convenient to use the interparticle distances as coordinates, together with the Euler angles describing the orientation of the triangle formed by the particles. In the case of central interaction, the wave function of the ground state (and of excited states with zero orbital momentum) depends only on the interparticle distances; therefore, the basis function can be written as

|a⟩ = exp(−α_1^a R_1 − α_2^a R_2 − α_3^a R_3),   (2)

where the α_p^a are the nonlinear variational parameters specifying the scale of the basis function |a⟩, and R_i is the distance between particles j and k, where {i, j, k} is the triplet {1, 2, 3} or its cyclic permutation. In the case of the Gaussian basis, R_p in (2) is replaced by R_p². It is convenient to introduce auxiliary notations in which β_p is the angle at the p-th particle in the triangle. Simple calculations then result in a closed formula for the matrix element of the operator of kinetic energy, T, between states |a⟩ and |b⟩, with m_i being the mass of the i-th particle and 1/M ≡ 1/m_1 + 1/m_2 + 1/m_3. The calculation of the matrix elements of the potential energy reduces to the calculation of integrals similar to those for the kinetic energy; this holds, in particular, for the Coulomb interaction.
A calculation of the lower variational estimate requires additional evaluations of the matrix elements of the operators T², V², and VT. For this purpose it is convenient to introduce additional notations, in terms of which the matrix elements of the operators T² and V_pT + TV_p are written in closed form, with simple expressions in the particular case of the Coulomb potential. The calculation of the matrix elements of V² is similar to the calculation of ⟨a|V|b⟩; in particular, for the Coulomb interaction a simple formula takes place.
The trial function is written as a superposition of the basis functions (2):

Ψ = Σ_a C_a |a⟩,   (3)

where the C_a are linear parameters. Evidently, the difficulties arise mostly in the optimization of the nonlinear parameters. The possibilities of deterministic procedures are soon exhausted as the number of terms in expansion (3) increases.
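For fixed nonlinear parameters, the linear step behind expansion (3) is the generalized eigenvalue problem HC = ESC, whose lowest root is the upper bound E_U. The following self-contained toy uses a two-body hydrogen-like system with basis exp(−a r), chosen because its matrix elements are elementary; it is an illustration of the linear algebra only, not the three-body matrix elements discussed above.

```python
import numpy as np
from scipy.linalg import eigh

# Toy upper-bound calculation with an exponential basis exp(-a*r) for
# the hydrogen ground state (atomic units). A two-body stand-in used
# only to show the generalized eigenvalue step HC = ESC.
alphas = np.array([0.5, 1.0, 2.0, 4.0])        # nonlinear scale parameters
a, b = alphas[:, None], alphas[None, :]
S = 8*np.pi / (a + b)**3                       # overlap <a|b>
T = 4*np.pi * a*b / (a + b)**3                 # kinetic <a|-(1/2)laplacian|b>
V = -4*np.pi / (a + b)**2                      # Coulomb <a|-1/r|b>
E = eigh(T + V, S, eigvals_only=True)
print(E[0])                                    # variational upper bound
```

Because exp(−r) lies in the span of this basis, the lowest root reproduces the exact −0.5 hartree; in the three-body case exactly the same linear algebra applies once S, T, and V are assembled from the matrix elements described above.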
Therefore, a specially designed procedure of global stochastic search was used. Briefly, it is the following: at each Monte Carlo probe, a random point is chosen in the 3N-dimensional space of nonlinear parameters according to a previously adopted distribution function. Then a coordinate-wise optimization is carried out, at first stochastic and then deterministic. At this stage the best points are selected for subsequent detailed optimization. The above-mentioned distribution function is found by a procedure similar to that described in [7].
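A schematic of this global search, simplified to a deterministic coordinate-wise refinement after each Monte Carlo probe; the objective `energy` is a placeholder for the Rayleigh quotient obtained by solving HC = ESC, and the lognormal sampling distribution is an arbitrary assumption.

```python
import numpy as np

# Schematic of the global stochastic search (simplified sketch).
rng = np.random.default_rng(3)

def coordinate_descent(energy, p, step=0.3, sweeps=20):
    p, e = p.copy(), energy(p)
    for _ in range(sweeps):
        for i in range(len(p)):                      # one coordinate at a time
            for trial in (p[i]*(1 + step), p[i]*(1 - step)):
                q = p.copy(); q[i] = trial
                if (eq := energy(q)) < e:
                    p, e = q, eq
        step *= 0.7                                  # shrink steps while converging
    return p, e

def global_search(energy, dim, n_probes=200, scale=2.0):
    best_p, best_e = None, np.inf
    for _ in range(n_probes):                        # Monte Carlo probes
        p, e = coordinate_descent(energy, scale * rng.lognormal(size=dim))
        if e < best_e:
            best_p, best_e = p, e
    return best_p, best_e

_, e = global_search(lambda p: float(np.sum((np.log(p) - 1.0)**2)), dim=6)
print(e)                                             # near 0 for this test objective
```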
Efficiency of calculations for various systems
To better understand the possibilities of the exponential basis and the optimization procedure described above in calculations of systems with different masses of particles and interactions, a number of Coulomb and nuclear systems were considered. Among them: the He atom, the hydrogen ion H−, the positronium ion Ps− (e+e−e−), the meso-systems αμ−e−, pμ+e−, ppμ−, μ+μ+e−, μ+e−e−, the two-center Coulomb systems ppe−, dde−, tte−, as well as the nuclei ³H and ³ΛH. The composition of the majority of the considered Coulomb systems with particles of unit charge can be presented as X±Y∓Y∓, the identical particles being denoted as Y. The binding energies decrease together with the values of the masses, but the accuracy of the calculation depends only on the ratio of the masses. For these systems the upper and lower bounds were calculated with N = 30 in expansion (3); the corresponding values δ_U and δ_L are plotted in Fig. 1 as functions of the mass ratio ξ_YX = m_Y/m_X. In the calculations of the lower bound, the nonlinear parameters were taken equal to those found for the upper bound.
As expected, an increase of the ratio ξ_YX leads to a decrease of the values δ_U and δ_L, due to the arising difficulties in the description of the motion of the heavy particles. Nevertheless, even at the approach to the two-center limit (ξ_YX ≫ 1) the accuracy of the calculations remains satisfactory. For comparison, the results of the most detailed calculations with the polynomial basis [8] are presented in Fig. 1 too. It is seen that the accuracy of the calculations with the polynomial basis [8] becomes poor for ξ_YX > 0.1 in spite of large values of N. This comparison shows that the exponential basis is applicable over a wider range of values of ξ_YX than the polynomial basis. Note that in the case of the Gaussian basis, δ_U and δ_L decrease even more slowly than for the exponential basis (see Fig. 1), though the latter generally provides higher precision.
The exponential basis can be used as well in the case of nuclear systems, even those inconvenient for calculation (weakly bound systems; short-range attractive potentials with strong repulsion at small distances between the particles; particles that may or may not be identical). As a particularly 'inconvenient' system, the hypertriton ³ΛH (consisting of npΛ) was chosen. For comparison, a more 'convenient' three-nucleon system, ³H, was considered. In these calculations two types of model nuclear NN potentials were used: (i) a purely attractive potential, NN−1, and (ii) an attractive potential with a soft core, NN−2, together with an attractive ΛN potential; the parameters V_{a,r} and R_{a,r} of these potentials are given in Appendix B. The convergence of the upper and lower estimates for the exponential basis is illustrated in Table 1 and in Fig. 2 for various Coulomb and nuclear systems. As seen from Fig. 2, the dependence of δ_U and δ_L on lg N is close to linear. In accordance with Fig. 1, the accuracy decreases as the system approaches the adiabatic limit, and in parallel the convergence of the variational estimates deteriorates (this is characterized by the slope of the curves in Fig. 2). Note that the precision of the calculations for the considered nuclear systems is generally similar to, or even better than, that for the Coulomb systems. Besides, in Fig. 2 some results of calculations with the Gaussian basis are shown (dotted lines). It is seen that the convergence of the upper and lower bounds is similar to that for the exponential basis, whereas the accuracy is significantly lower.
Comparison of results for different bases
It is convenient to compare the efficiency of different variational expansions on standard systems calculated by many authors. Such systems are ∞He and ∞H−, considered in [1]-[3], [5], [7]-[16]. In Fig. 3 the values of δ_U and δ_L are plotted for those papers in which the most detailed calculations of the atom ∞He were carried out; the results of our calculations are presented there too. Similarly, in Fig. 4, δ_U and δ_L are presented for the hydrogen ion ∞H−.
It is necessary to emphasize that both cases are examples of one-center systems. Therefore, there is reason to expect that expansions especially designed for one-center problems will have the advantage. This is generally confirmed by our analysis. Up to the present, the most accurate many-parameter calculations of ∞He have been carried out using polynomial or polynomial-logarithmic bases. As seen from Fig. 3 and Fig. 4, the convergence of the variational expansions for these bases is generally better than for the exponential or Gaussian bases. On the other hand, up to δ_U ≈ 12 the use of the exponential basis is justified, as it assures the same precision with a lower number of terms (see Figs. 3 and 4). Note that over-high precision in non-relativistic calculations, without taking into account relativistic and other corrections (which become significant well before δ ≈ 12), has no physical meaning, though it is interesting from a computational point of view. As to the lower-bound calculations, they are rare in the literature, and we estimate the number N up to which the calculations of E_L with the exponential basis are justified (in the same sense as for E_U) as 100-200.
Another limiting case is the adiabatic one (i.e., a two-center system with two heavy particles). In this case the use of the polynomial basis leads to unsatisfactory results, and the exponential basis is evidently preferable (see Fig. 1). Moreover, the use of complex scale parameters in the exponential basis increases the accuracy of the calculations significantly [22]. The most accurate calculations of two-center systems were carried out in the framework of the Born-Oppenheimer approach [23] or its modifications [24], [25]. In particular, in paper [25] the energy of the system H₂+ (ppe−) was calculated with precision δ ≈ 12, but this is only somewhat better than that of the calculation of [22] with exponential functions (note, by the way, that the number of basis terms in [22] was smaller than in [25]).
A more effective modification of the exponential basis in two-center calculations is

|a⟩ = exp(−α_1^a R_1 − α_2^a R_2 − α_3^a R_3 − β_a R_3²),   (6)

where R_3 is the distance between the heavy particles. Note that the dependence of this function on R_3 can be presented as exp(−β_a(R_3 − R_3^a)²), where R_3^a is the new variational parameter connected with α_3^a. Note that basis (6) is, in a certain sense, a particular case of the 'carcass' functions (constructed on the basis of Gaussians in paper [26]), whose use together with Gaussians might be useful in nuclear physics for calculations with potentials changing sign.
For functions (6), all the integrals needed for the calculation of the upper variational bound are expressed in closed form in terms of conventional functions; for instance, the basic integral can be expressed through the function F(z) ≡ e^{z²} ∫_z^∞ e^{−t²} dt. Calculations of the ground state of the system ppe− with this modified basis lead to significantly better results than with purely exponential or Gaussian bases. In particular, in our calculations it has been shown that even a single function (6) provides better precision than 50 exponentials or Gaussians. Moreover, basis (6) is more flexible than the exponential basis with complex parameters used in [22]. For instance, the result of the calculations with N = 20 for ppe− turns out to be better than that of paper [22] with 200 complex exponentials (1400 variational parameters), and better than the calculations with 300 functions for the systems μ+μ+e−, dde−, and tte−.
In addition to the preceding discussion of the two limiting cases (one- and two-center problems), it is necessary to point out that there exists a large region of values of ξ_YX, between 10⁻² and 10², where the exponential basis is beyond compare. Note that this is the region where the greater part of the known three-particle Coulomb systems is located. Thus, apart from Gaussians, the exponential basis seems to be the most universal one in comparison with other approaches, applicable equally well to Coulomb and nuclear three-particle systems.
Stability of Coulomb Systems
All the Coulomb systems considered above, except two (αe−e− and αμ−e−), had total charge ±1 and consisted of three singly charged particles, of which two are identical. All systems of this type are stable with respect to the separation of one of the particles. However, this is not the case for other types of three-particle Coulomb systems. For the analysis of the stability of Coulomb systems and for the calculation of their energies, it is natural to use the variational procedure with the exponential basis, as it is the most universal with respect to the masses of the particles (see also [27]).
In the general case, the structure of a Coulomb system of three singly charged particles with total charge ±1 may be presented in the form X±Y∓Z∓, where m_Y ≤ m_Z. The stability of the system depends on two ratios of masses, ξ_YX = m_Y/m_X and ξ_ZX = m_Z/m_X. The boundary delimiting the regions of stable and unstable systems is determined from the condition that the energy of the three-particle system coincides with that of the two-particle system X±Z∓. The corresponding equation determines the interdependence between ξ_YX and ξ_ZX; its solution is presented in Fig. 5 by curve A.
It is seen that not only systems with two identical particles are stable, but also two-center systems (two heavy particles with identical charges plus a light particle with the opposite charge). In contrast, a system containing two heavy particles of opposite charges is unstable. An exception can occur if all three particles have nearly equal masses; this takes place, for instance, for exotic systems for which f = 0.008745, 0.006069, and 0.002354, respectively. Of course, a three-particle system which is stable with respect to the emission of one of the constituent particles can be unstable in an excited state. This problem was considered in [4] for symmetric (XYY) systems with m_Y/m_X ≪ 1.
For systems of the type X^{+m}Y^{+m}Z^{−m}, containing multiply charged particles, the situation is quite similar to the case of singly charged particles considered above. Among three-body systems containing singly and doubly charged particles, the systems of the types X++Y−Z+ and X++Y−Z++ are unstable at any ratio of their masses, whereas the systems X++Y−Z− are always stable. As to the systems of the type X++Y−−Z+, they can be stable only for restricted values of the ratios of their masses. The corresponding boundary is shown in the same Fig. 5, curve B.
Appendix A: Standard integrals
A calculation of the matrix elements of the Hamiltonian and its square reduces to the evaluation of the standard integrals I_klm(x_1, x_2, x_3). The integrals I_klm(x_1, x_2, x_3) with non-negative indices are homogeneous polynomials of the (k + l + m + 3)-th degree with respect to the variables A_i ≡ 1/(x_1 + x_2 + x_3 − x_i). To calculate the upper variational estimate, only a small set of such integrals is necessary (here and in what follows an unimportant numerical factor 16π² is dropped). For the presentation of the integrals (9) with negative indices it is convenient to use special notations. To calculate the lower variational estimate, further integrals of the same family are necessary.
Figure 1: Dependence of δ_U and δ_L on the mass ratio for Coulomb systems X+Y−Y−.
Figure 2: Dependence of δ_U and δ_L on the number of terms in the variational expansion for the exponential basis.
Figure 3: δ_U and δ_L in calculations of the atom ∞He with different variational expansions. Markers refer to the first author of the corresponding paper.
Figure 4: The same as in Fig. 3, but for the hydrogen ion ∞H−.
Table 1: Upper and lower bounds for Coulomb and nuclear systems.
Table 2: Parameters of the nuclear model potentials (columns: Variant, V_r [MeV], R_r [Fm], V_a [MeV], R_a [Fm]).
Investigation of direct inkjet-printed versus spin-coated ZrO2 for sputtered IGZO thin-film transistors
In this work, a low-leakage-current ZrO2 dielectric was fabricated for sputtered indium gallium zinc oxide (IGZO) thin-film transistors using direct inkjet-printing technology. Spin-coated and direct inkjet-printed ZrO2 films were prepared to investigate the film formation process and electrical performance of the different processes. Homogeneous ZrO2 films were observed in high-resolution TEM images. The chemical structure of the ZrO2 films was investigated by XPS measurements. The IGZO TFT on inkjet-printed ZrO2 showed superior mobility and off-state current, but a large Vth shift under positive bias stress. As a result, the TFT device based on inkjet-printed ZrO2 exhibited a saturation mobility of 12.4 cm²/Vs, an Ion/Ioff ratio of 10⁶, a turn-on voltage of 0 V, and a 1.4-V Vth shift after a 1-h PBS stress. Higher-density films with fewer oxygen vacancies were responsible for the low off-state current of the printed ZrO2 device. The mechanism of the deteriorated PBS performance can be ascribed to the In-rich region formed at the back channel, which easily absorbs H2O and oxygen. The absorbed H2O and oxygen capture electrons under positive bias stress, serving as acceptors in the TFT device. This work demonstrates the film formation processes of direct inkjet-printed and spin-coated oxide films and reveals the potential of direct inkjet-printed oxide dielectrics in high-performance oxide TFT devices.
Background
Metal oxide dielectrics have recently emerged as promising alternatives to SiO2 and SiNx in thin-film transistors (TFTs) owing to their superior properties, including high capacitance, low defect-state densities, and large band gaps, which lead to high mobility and low off current [1-3]. For these reasons, oxide dielectrics fabricated by vacuum processes are widely studied for displays, sensor arrays, and driving circuits [4]. Meanwhile, solution processes have also received remarkable attention because of their low cost for large-scale fabrication, including spin coating, inkjet printing, spray coating, and slit coating [5,6]. Among these, direct inkjet printing is the most promising method, as it can produce patterned films without photolithography. However, TFT devices fabricated by the inkjet-printing process exhibit inferior electrical performance compared to vacuum-processed ones. Direct inkjet-printed metal-oxide films face serious problems: (1) the uncontrollable spreading of the oxide precursor on the substrate due to the difference in surface energy between the fluid and the substrate, and (2) the compatibility of printed oxide dielectrics with the semiconductor [7]. The film formation process of a solution-processed dielectric film has a significant influence on its electrical properties. The spin-coating method, as an established technique, is widely used in solution-processed TFTs. The leakage current density of spin-coated oxide dielectrics is usually lower than 10⁻⁶ A/cm² at 1 MV/cm, and the breakdown electric field is more than 2 MV/cm. The saturation mobility of TFTs based on coated oxide dielectrics is around 10 cm²/Vs. However, for printed oxide dielectrics, the leakage current density is about two orders of magnitude higher than that of coated oxide films (>10⁻⁴ A/cm² at 1 MV/cm), and the saturation mobility is lower than 5 cm²/Vs.
Few reports have compared inkjet-printed dielectric films with spin-coated films, especially regarding the film formation process. Density, surface roughness, and homogeneity of dielectric films are the most important factors related to the electrical performance of TFTs [8]. Moreover, the interface between the gate insulator and the semiconductor also plays a key role for solution-processed TFTs [9]. A comprehensive study of inkjet-printed oxide dielectrics is of great value for better understanding this promising technique.
In this paper, we prepared high-quality ZrO2 films with a favorable surface appearance and excellent electrical performance by both coating and printing methods and investigated their electrical effect in sputtered indium gallium zinc oxide (IGZO) TFTs [10,11]. The film formation processes of the spin-coating and direct printing methods are compared. The spin-coating method is dominated by centrifugal force, leading to a uniform but dispersive distribution of molecules, while the inkjet-printing process depends on fluid dynamics. According to the XPS and I-V tests, the inkjet-printed ZrO2 film (double layer) had fewer oxygen vacancies compared with the spin-coated one. Increasing the number of printed layers of the ZrO2 film can fill the holes and vacancies created by the unsteady flow of the precursor spreading on the substrate, contributing to fewer defects and superior uniformity. The direct inkjet-printed ZrO2 film for sputtered IGZO has a lower leakage current density, higher mobility, larger on/off ratio, and larger Vth shift under positive bias stress than the spin-coated ZrO2 TFT. The In-rich region formed at the back channel of the inkjet-printed ZrO2 TFT is responsible for the worse stability, since water molecules and oxygen in the air can easily be absorbed under positive bias stress, consuming electrons from the IGZO layer. This reveals that the direct inkjet-printing technique is able to fabricate high-density oxide dielectrics, but the interface defects should be well controlled to avoid electrical instability.
Materials
The ZrO2 solution was synthesized by dissolving ZrOCl2·8H2O at 0.6 M in a 10-ml mixed solvent of 2-methoxyethanol (2MOE) and ethylene glycol in a 2:3 ratio, to attain a suitable surface tension of the precursor. The solution was stirred at 500 rpm at room temperature for 2 h, followed by aging for at least 1 day. For the UV-ozone treatment process, a 100-W UV lamp with a 250-nm wavelength was used to irradiate the indium tin oxide (ITO) substrates cleaned with isopropyl alcohol and deionized water. Subsequently, ZrO2 films were formed by the spin-coating or direct inkjet-printing process. The coating process was carried out at a speed of 5000 rpm for 45 s, while the drop space and nozzle temperature were 30 μm and 30 °C for the printing process. The ZrO2 films were annealed at 350 °C in an atmospheric environment for 1 h. A 10-nm-thick IGZO layer was then grown by direct-current pulsed sputtering at a pressure of 1 mTorr (oxygen:argon = 5%) and patterned by a shadow mask. The IGZO was annealed at 300 °C for 1 h to reduce the defects in the film. The channel width and length were 550 μm and 450 μm; thus, the width/length ratio was 1.22. Finally, Al source/drain electrodes with 150-nm thickness were deposited by direct-current sputtering at room temperature.
Instruments
X-ray photoelectron spectroscopy (XPS) measurements were carried out to investigate the chemical structure of the oxide films, performed with an ESCALAB250Xi (Thermo Fisher Scientific, Waltham, MA, USA) at a base pressure of 7.5 × 10⁻⁵ mTorr.
Cross-sectional transmission electron microscopy (TEM) images were acquired with a JEM-2100F (JEOL, Akishima, Tokyo, Japan), and energy-dispersive spectroscopy (EDS) mapping scans were analyzed with a Bruker system (Adlershof, Berlin, Germany) to investigate the element distribution. Capacitance-voltage curves were measured in the dark, in air at room temperature, with an Agilent 4284A precision LCR meter. The transfer characteristics of the IGZO TFTs and the leakage current density curves were measured with an Agilent 4156C precision semiconductor parameter analyzer. Transfer characteristics were measured by sweeping the gate voltage from -5 to 5 V at a drain voltage of 5 V. We calculated the field-effect mobility from the measured transfer curve using the following equation:

$$I_{DS} = \frac{W C_i \mu}{2L}\,(V_{GS} - V_{th})^2$$

where $I_{DS}$, $C_i$, $\mu$, $W$, $L$, $V_{GS}$, and $V_{th}$ are the drain current, capacitance of the gate dielectric per unit area, saturation mobility, channel width, channel length, gate voltage, and threshold voltage, respectively. The dielectric constant is calculated by the following equation:

$$\varepsilon_r = \frac{C\,d}{\varepsilon_0\,S}$$

where $\varepsilon_r$, $C$, $d$, $\varepsilon_0$, and $S$ are the relative dielectric constant, capacitance of the gate dielectric, thickness of the gate dielectric, vacuum dielectric constant, and area of the electrode, respectively.

Results and Discussion
The film formation process of the direct inkjet-printing method, compared with that of the spin-coating method, is illustrated in Fig. 1. During spin coating, droplets are forced to spread uniformly over the whole substrate by centrifugal force [12]. As a consequence, after the annealing process the ZrO2 molecules are well distributed on the substrate. Meanwhile, because the majority of the ZrO2 precursor is flung off during coating, vacancies occur inside the film. The density of films fabricated by spin coating is essentially independent of the coating parameters for a given precursor [13]. In the inkjet-printing process, the printer moves in a particular direction, leaving droplets on the substrate. The droplets merge together at the balance of the spreading and shrinking processes, which is influenced by gravity, surface tension, and the viscoelasticity of the precursor. The film formation process of inkjet printing can be well controlled by optimizing the processing parameters: drop space, jet velocity, ink composition, and substrate temperature [14]. The most important factors are the drop space set by the printer and the post-treatment of the substrate. Additional file 1: Figure S1 shows contact-angle images of the printing precursor on ITO substrates after different UV treatment periods, together with polarizing-microscope images of the annealed ZrO2 films. The ZrO2 film printed on the ITO substrate after 40 s of ozone irradiation possesses the best morphology. In addition, the multiple-layer printing method is efficient in reducing holes in the film, because additional droplets printed directly on top of the former layer fill its vacancies, leading to a more homogeneous film with higher density and fewer defects [15]. The thicknesses of the 1-layer and 2-layer printed films are 45 nm and 60 nm, respectively (Additional file 1: Figure S2). The film thickness is not proportional to the number of printed layers, which shows that multiple printing is not simply a thickness-accumulation process [16]. In general, the quality of directly printed ZrO2 films can be well controlled through the processing parameters.
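To make the two equations given in the Instruments section concrete, the sketch below extracts the saturation mobility from a transfer curve via a linear fit of the square root of the drain current versus gate voltage, and computes the areal capacitance of a ZrO2 film. This is a minimal, hedged illustration rather than the authors' analysis code: the geometry (W/L = 550/450 μm) and the film parameters (60 nm, εr ≈ 19.2) come from the text, while the transfer data are synthetic.

```python
import numpy as np

EPS0 = 8.854e-12                 # F/m, vacuum dielectric constant
W, L = 550e-6, 450e-6            # m; channel geometry from the paper (W/L = 1.22)

def saturation_mobility(vgs, ids, ci):
    """Extract mu_sat and Vth from sqrt(Ids) vs Vgs in the saturation regime.

    sqrt(Ids) = sqrt(W*Ci*mu/(2L)) * (Vgs - Vth), so the fitted slope gives
    mu = 2*L*slope^2 / (W*Ci) and the x-intercept gives Vth.
    """
    slope, intercept = np.polyfit(vgs, np.sqrt(np.abs(ids)), 1)
    mu = 2 * L * slope**2 / (W * ci)     # m^2/(V*s)
    vth = -intercept / slope
    return mu * 1e4, vth                 # mobility in cm^2/(V*s)

# Areal capacitance of a 60-nm ZrO2 film with eps_r ~ 19.2 (values from the text)
d = 60e-9
ci = EPS0 * 19.2 / d                     # ~2.8e-3 F/m^2, i.e. ~280 nF/cm^2

# Synthetic transfer curve (illustrative only), built with mu = 12 cm^2/(V*s)
vgs = np.linspace(1.0, 5.0, 41)
ids = (W * ci * 12e-4 / (2 * L)) * (vgs - 0.0) ** 2
mu, vth = saturation_mobility(vgs, ids, ci)
print(f"mu_sat = {mu:.1f} cm^2/Vs, Vth = {vth:.2f} V")   # recovers 12.0 and 0.00
```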
In our experiment, we prepared spin-coated (SC), direct inkjet-printed 1-layer (DP1), and 2-layer (DP2) ZrO2 films, and IGZO-TFT devices based on these films, to investigate how the different film formation processes affect film morphology and electrical properties. Figure 2a-c shows the O 1s spectra of the ZrO2 films prepared by the different methods. We fitted the O 1s peak as a superposition of three components. The peaks centered at 529.8 ± 0.2 eV, 531.7 ± 0.2 eV, and 532.1 ± 0.1 eV can be assigned to metal-oxygen bond species (V_M-O), oxygen vacancies (V_O), and weakly bound species (V_M-OR), respectively [17, 18]. The V_M-O fraction of the DP2-ZrO2 film is 81.57%, much higher than for SC-ZrO2 and DP1-ZrO2, and the V_O fraction is also lowest for the DP2-ZrO2 film. This is consistent with the ideas mentioned above: (1) the direct inkjet-printing process can produce ZrO2 films with higher density and fewer oxygen vacancies, and (2) repeated printing fills in the holes and traps and reduces the vacancies inside the film. AFM measurements were performed to compare the surface morphology of the printed ZrO2 films with that of the spin-coated ZrO2 (Additional file 1: Figure S3). The spin-coated ZrO2 exhibits the smoothest surface, with a roughness of 0.29 nm, while the directly printed 1-layer and 2-layer ZrO2 films show 1.05 nm and 0.67 nm, respectively. The directly printed ZrO2 film possesses a rougher surface owing to the uncontrollable flow of the fluid during film formation [19]. The remarkable decrease in surface roughness obtained by printing one more layer can be ascribed to the later-printed fluid filling up the holes of the initial layer, developing a more homogeneous film. The XPS and AFM results show that the inkjet-printing method has the potential to produce higher-quality, lower-defect dielectric films than the spin-coating method, with comparable surface roughness, which is suitable for TFT fabrication. Capacitance-voltage and current-voltage measurements were performed to investigate the electrical properties of the SC-ZrO2 and DP-ZrO2 films using Al/ZrO2/ITO (metal-insulator-metal) capacitors fabricated on glass substrates. The influence of film thickness can be excluded, since the films have similar thicknesses (60 nm, 45 nm, and 60 nm, respectively). As shown in Fig. 3, the DP1-ZrO2 film exhibits hardly any insulating property, owing to the large number of vacancies in the film. Figure 4 shows the capacitance-voltage curves of the spin-coated and directly printed ZrO2 films. The relative dielectric constants of the three samples are calculated to be 19.2, 20.1, and 18.8, close to the reference value (18). For both the spin-coated and inkjet-printed ZrO2 films, the capacitance density increases with voltage. Hysteresis is observed in all three samples; it is smallest for the SC-ZrO2 sample and largest for the DP1-ZrO2 film. The hysteresis is related to the uniformity and defect states of the dielectric film. This confirms that the homogeneity of the spin-coated ZrO2 film is the best, and that multiple layers can improve the uniformity of direct inkjet-printed films [20, 21]. To further study the effect of ZrO2 layers fabricated in different ways on TFT performance and gate-bias stability, negative gate-bias stress (NBS) and positive gate-bias stress (PBS) results for IGZO TFTs with both SC-ZrO2 and DP2-ZrO2 are presented in Fig. 5.
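Before the bias-stress results, the three-component O 1s deconvolution used above to quantify the V_M-O and V_O fractions can be reproduced with a standard least-squares fit of Gaussian components at the reported binding energies. The sketch below is a hedged illustration, not the authors' fitting procedure: the peak centers come from the text, while the synthetic spectrum, widths, and amplitudes are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, wid):
    return amp * np.exp(-((x - cen) / wid) ** 2)

def o1s_model(x, a1, w1, a2, w2, a3, w3):
    # Peak centers fixed at the binding energies reported in the text (eV)
    return (gauss(x, a1, 529.8, w1)    # metal-oxygen bonds (V_M-O)
          + gauss(x, a2, 531.7, w2)    # oxygen vacancies (V_O)
          + gauss(x, a3, 532.1, w3))   # weakly bound species (V_M-OR)

# Synthetic spectrum for illustration only
be = np.linspace(526, 536, 400)
spectrum = o1s_model(be, 1.0, 0.9, 0.15, 0.8, 0.10, 0.7)
spectrum += np.random.default_rng(0).normal(0, 0.005, be.size)

p0 = [1, 1, 0.2, 1, 0.1, 1]                      # initial guesses
popt, _ = curve_fit(o1s_model, be, spectrum, p0=p0)

# Relative areas (a Gaussian's area is proportional to amplitude * width)
areas = np.array([popt[0] * popt[1], popt[2] * popt[3], popt[4] * popt[5]])
for name, frac in zip(("V_M-O", "V_O", "V_M-OR"), areas / areas.sum()):
    print(f"{name}: {100 * frac:.1f} %")
```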
Transfer characteristic curves under NBS and PBS were measured by applying a negative (-5 V) or positive (+5 V) gate bias for 1 h. The DP2-ZrO2 IGZO TFT shows better performance in the static state, with a saturation mobility (μsat) of 12.5 cm²/V·s, an Ion/Ioff ratio of 10⁶, and a Vth of 0 V. The SC-ZrO2 IGZO TFT exhibits a comparable but lower mobility of 10.2 cm²/V·s, a worse Ion/Ioff ratio of 2 × 10⁵, and a higher off-state current (Ioff), mainly because of increased channel leakage caused by the larger amount of oxygen vacancies (V_O) in the dielectric film. The Vth shift of the IGZO TFTs with both SC-ZrO2 and DP2-ZrO2 under NBS is negligible. A negative Vth shift of oxide TFTs under NBS is generally caused by hole trapping or charge injection, since ionized oxygen vacancies can migrate to the semiconductor/insulator interface under a negative gate bias field. The NBS results therefore indicate that both the SC-ZrO2 and DP2-ZrO2 films form a favorable contact with IGZO [22, 23]. However, unlike the SC-ZrO2 IGZO TFT, which exhibits a Vth shift of 0.4 V after 1 h of PBS, the DP2-ZrO2 IGZO TFT shows severe performance degradation and a large Vth shift of 1.2 V in the PBS test. The results of the ZrO2-IGZO TFTs under the PBS test are summarized in Table 1 (summary of the mobility, Ion/Ioff ratio, and Vth of the spin-coated and direct-printed ZrO2 TFTs during the PBS test). Since the Vth shift of oxide TFTs under PBS is generally caused by the diffusion of adsorbed water or oxygen molecules, we can assume that the back channel of the DP2-ZrO2 IGZO TFT is more sensitive to the atmospheric environment under PBS [24, 25]. To investigate the degradation and Vth shift of the ZrO2-IGZO TFTs under the PBS test, cross-sectional transmission electron microscopy (TEM) images and EDS line scans were acquired to analyze the element distribution. The cross-sectional TEM images in Fig. 6a and b show the Al/IGZO/ZrO2 structure investigated in this paper. In the high-resolution TEM images of the channel region of both the SC-ZrO2 and DP2-ZrO2 IGZO TFTs, a nearly 8-nm-thick IGZO layer is clearly observed, consistent with the distribution of the In (Ga, Zn) signals in the EDS line-scan results. Meanwhile, for both devices the ZrO2 layer exhibits an amorphous structure, which is beneficial for a low leakage current density. The line scans also show that Al diffuses into the IGZO layer, which may be caused by impact during the Al sputtering process. Furthermore, the Zr:O ratio is approximately 1:2, demonstrating that pure ZrO2 was formed after the annealing process. Uniform distributions of the In, Ga, Zn, and Zr elements are also obtained in the IGZO layer of the SC-ZrO2 IGZO TFT, indicating that a homogeneous ZrO2/IGZO structure was established during sputtering and post-annealing [19]. For the DP2-ZrO2 IGZO TFT, however, In, Ga, Zn, O, and Zr are irregularly distributed. From Fig. 6b we can see that the Zr and O signals are concentrated at the interface between the dielectric and the active layer, which coincides with the film-formation analysis of the multiple-layer printing method: during multiple printing, the precursor printed later onto the substrate partly fills the vacancies, and the majority of the droplets accumulate at the top [26].
Moreover, segregation of the In and Zn elements at the back channel of the IGZO layer is observed for the printed-ZrO2 TFT. Since the proportion of Zn is smallest in our experiment, the electrical performance of the IGZO TFT is determined mainly by the In and Ga elements. The formation of an In-rich region at the Al/IGZO interface can be explained as follows: during the annealing of the IGZO layer, which aims to eliminate defect states in the IGZO, the elements redistribute. O atoms are "taken away" from the In and Zn elements, since these have lower oxygen bond dissociation energies than Zr, pushing them away from the dielectric/semiconductor interface. Elemental In and Zn are unstable, so they recombine with oxygen adsorbed at the back channel, as confirmed by the EDS scans [27-29]. This In-rich region, with its adsorbed water molecules and oxygen, is the reason for the large Vth shift in the PBS test. To conceptually depict the mechanism of the degraded performance and Vth shift under positive bias stress, schematic band diagrams of the TFTs with spin-coated and inkjet-printed ZrO2 are shown in Fig. 7. The DP2-ZrO2 TFT can accumulate more carriers than the SC-ZrO2 TFT in the static state owing to its better insulating property, but under positive bias stress most carriers are consumed by acceptor-like molecules such as water and oxygen from the atmosphere. In general, hydrogen, oxygen, and H2O molecules incorporate into the IGZO thin film by diffusion at the back channel. The hydrogen then reacts with oxygen, generating oxygen-hydroxide bonds and consuming electrons, which degrades the performance under positive bias stress. Meanwhile, the adsorbed O2 and H2O molecules act as acceptor-like traps that capture electrons from the conduction band, leading to the positive Vth shift after PBS [30]. This degradation and Vth shift are not permanent; the device can recover after some hours in an ambient atmosphere. Owing to the different oxygen bond dissociation energies of Zr-oxide (756 kJ/mol), Ga-oxide (364 kJ/mol), In-oxide (336 kJ/mol), and Zn-oxide (240 kJ/mol) [31], O atoms are more likely to combine with Zr, which has the largest oxygen bond dissociation energy. The In and Zn elements pushed away from the ZrO2/IGZO interface to the back channel adsorb oxygen from the environment. For the IGZO TFT using direct inkjet-printed ZrO2 as the gate insulator, large amounts of hydrogen, oxygen, and H2O molecules "consume" electrons when a positive bias stress is applied, degrading the device performance. Methods such as introducing a passivation layer on top of the source/drain electrodes for the bottom-gate structure, using a top-gate structure, or introducing an interface-modification layer between the dielectric and semiconductor layers are effective ways to improve the PBS stability of solution-processed TFT devices; this is an interesting direction that will be pursued in our further research.

Conclusion
In conclusion, we fabricated a high-quality direct inkjet-printed ZrO2 gate insulator using a multiple-layer printing method without extra patterning technology, which is suitable for a large-size printing fabrication process.
The film formation analysis demonstrates that a ZrO2 film fabricated by the direct inkjet-printing process obtains a denser structure than one made by spin coating, although its homogeneity is worse because of the uncontrollable fluid flow during printing. The capacitance-voltage curve of the DP2-ZrO2 film shows only a slight hysteresis, similar to that of SC-ZrO2. As a result, the DP2-ZrO2 film exhibits a relatively low leakage current density of 2.4 × 10⁻⁵ A/cm² at 1 MV/cm and a breakdown field over 2 MV/cm; the TFT device based on DP2-ZrO2 exhibited a saturation mobility of 12.4 cm²/V·s, an Ion/Ioff ratio of 10⁶, a turn-on voltage of 0 V, and a 1.2-V Vth shift after the 1-h PBS test. The segregation of In at the back channel of the IGZO layer, observed in the TEM images and EDS scans, is responsible for the larger Vth shift during the PBS test, because the adsorbed O2 and H2O molecules act as acceptor-like traps that capture electrons from the conduction band. This article presents the advantages of direct inkjet-printing technology and investigates the dielectric properties of solution-processed oxide insulators used in oxide TFT devices. It demonstrates that DP2-ZrO2 has a denser structure with fewer oxygen vacancies, but poor stability under PBS caused by element diffusion. Direct inkjet-printing technology is promising for mass production, given its low cost and high performance, once its stability is improved.

Additional file
Additional file 1: Figure S1. Contact-angle images of the oxide precursor on ITO substrates after different UV-ozone treatment periods: (a) 20 s, (b) 40 s, and (c) 60 s; and polarizing-microscope images of annealed ZrO2 films on ITO substrates after different UV treatment periods: (d) 20 s, (e) 40 s, and (f) 60 s. Figure S2. Step-profiler images of directly printed (a) 1-layer and (b) 2-layer ZrO2 films. Figure S3. AFM images of the spin-coated and directly printed ZrO2 films.
Patients with fibromyalgia show increased beta connectivity across distant networks and microstates alterations in resting-state electroencephalogram

Fibromyalgia (FM) is a chronic condition characterized by widespread pain of unknown etiology associated with alterations in the central nervous system. Although previous studies demonstrated altered patterns of brain activity during pain processing in patients with FM, alterations in spontaneous brain oscillations, in terms of functional connectivity or microstates, have barely been explored so far. Here we recorded the EEG of 43 patients with FM and 51 healthy controls during open-eyes resting state. We analyzed the functional connectivity between different brain networks by computing the phase lag index after group Independent Component Analysis, and also performed an EEG microstate analysis. Patients with FM showed increased beta-band connectivity between different brain networks and alterations in some microstate parameters (specifically, lower occurrence and coverage of microstate class C). We speculate that the observed alterations in the spontaneous EEG may reflect the dominance of endogenous top-down influences; this could be related to limited processing of novel external events and to the deterioration of flexible behavior and cognitive control frequently reported in FM. These findings provide the first evidence of alterations in long-distance phase connectivity and microstate indices at rest, and represent progress towards understanding the pathophysiology of fibromyalgia and identifying novel biomarkers for its diagnosis.

Introduction
Fibromyalgia (FM) is a chronic disorder characterized by widespread pain and frequently accompanied by other symptoms such as fatigue, sleep disturbances, or attention and memory problems (Wolfe et al., 2010). It is a disease of unknown etiology, and although abnormalities at the peripheral level have been found, FM seems to be driven by alterations in the central nervous system (Üçeyler et al., 2013; Serra et al., 2014; Clauw, 2015). In this sense, brain differences have been observed in FM at both the structural (Jensen et al., 2013; Burgmer et al., 2009; Schmidt-Wilcke et al., 2007) and functional levels. At the functional level, studies with functional magnetic resonance imaging (fMRI) that applied experimental pain to patients with FM generally found higher activation in pain-related brain areas (or similar activations at lower intensities of nociceptive stimulation) in comparison with controls (Gracely et al., 2002; Pujol et al., 2009; Kim et al., 2011); reduced activation in areas related to descending pain inhibition (Jensen et al., 2009); or differences in both directions, with higher and lower levels of activation over several brain locations (Burgmer et al., 2009; Burgmer et al., 2010). In studies of electrical brain activity, increased evoked responses and reduced habituation to nociceptive stimuli are common findings (Gibson et al., 1994; de Tommaso et al., 2011; de Tommaso et al., 2014). Given that brain indices related to ongoing pain can differ from those associated with experimentally evoked pain (Davis et al., 2017), the study of spontaneous brain activity may provide novel insights into the central alterations related to FM.
In this sense, using functional neuroimaging, several abnormalities have been observed in the resting-state brain activity of patients with FM, such as altered connectivity between the insular cortex and other cortical areas (Ichesco et al., 2014), increased connectivity between the periaqueductal grey matter and the insula, anterior cingulate cortex (ACC), and anterior prefrontal cortex (Truini et al., 2015), or several functional connectivity alterations between the default mode network and additional cortical structures (Fallon et al., 2016). EEG recordings during resting-state conditions in FM have also revealed alterations in power spectral density and connectivity in several frequency bands (Fallon et al., 2018; González-Roldán et al., 2016; Lim et al., 2016; Choe et al., 2018; Hsiao et al., 2017). Nevertheless, knowledge about possible functional connectivity alterations in FM based on spontaneous oscillatory activity is still lacking. The spontaneous EEG also shows stable spatial distributions of the global scalp potential that vary dynamically over time in an organized manner (Koenig et al., 2002). A microstate (MS) is a time period (of around 100 ms) during which the scalp potential remains stable before changing to a new spatial configuration. MS are quasi-stable spatial patterns of brain electrical activity that can be classified into a limited number of groups based on their topographical characteristics. Microstate analysis offers a method to characterize the EEG signal by the spatial configuration of the electrical fields, based on the existence of repeated topographic distributions of the EEG power in sensor space. Each MS is supposed to be related to a specific neural computation performed during that period, and thus to reflect different cognitive processes or mental states. Although there is no complete consensus about the cognitive process underlying each MS, several works have related the different topographical distributions to specific cognitive computations (Milz et al., 2016; Seitzman, 2017; Bréchet et al., 2019). In addition, several studies have found alterations in different MS parameters (such as occurrence, duration, and coverage) in a variety of psychiatric and neurological disorders (Tomescu et al., 2014; Jia and Yu, 2018; Kikuchi et al., 2011); nevertheless, there is no previous research analyzing those patterns of scalp potentials in FM. The aim of the present study was to explore resting-state EEG patterns in patients with fibromyalgia, as compared to healthy controls. To this end, we propose two novel approaches: first, to evaluate the functional connectivity across different neural networks by computing the Phase Lag Index (PLI) (Stam et al., 2007) between components extracted using group-level Independent Component Analysis (group-ICA) (Huster and Raud, 2018); and second, to assess the occurrence, duration, and coverage of the microstates obtained in both groups. These analyses provide new insights into large-scale network interactions and brain dynamics at rest in patients with FM.

Participants
An initial sample of 46 patients with fibromyalgia (FM) and 53 healthy controls (HC) matched for sex (all women), age, and years of education participated in this study. The final sample comprised 43 FM and 51 HC (see reasons below).
All FM patients had been diagnosed by a physician (usually initially by a general practitioner, with confirmation by a rheumatologist) and fulfilled the 1990 American College of Rheumatology criteria (Wolfe et al., 1990). The exclusion criteria for patients with FM were the presence of another disease that could explain the reported pain, generalized anxiety disorder, severe depression, or other neurological and psychiatric disorders, except for low or moderate levels of depression or anxiety. The same exclusion criteria were applied to the HC group, along with the condition of having no history of chronic pain. All participants were asked not to smoke or consume coffee, alcohol, or other drugs not prescribed by a physician in the 4 h prior to the evaluation. Participants were asked to keep the consumption of medication used to alleviate typical FM symptoms to the minimum necessary on the day of the evaluation. All experimental procedures were approved by the Ethics Committee of the University of Santiago de Compostela (Spain), in accordance with the Declaration of Helsinki. Participants were informed about the experimental protocol, and all of them gave written informed consent before participation.

Sociodemographic and clinical assessment
Participants were interviewed about their sociodemographic status and the presence of symptoms related to FM. They completed a series of Visual Analogue Scales (VAS) to evaluate their clinical status. Each scale consisted of a 10-cm line on which participants indicated the severity of each symptom from 0 to 10 (where 0 was "no problem at all" and 10 "maximum severity") for the following variables: pain, health status, morning stiffness, fatigue, mood, headache, and sleep quality (all referring to the last month, except for fatigue, which referred to the last week). To further explore the presence of depressed mood, participants completed the Spanish version of the Beck Depression Inventory-IA (BDI) (Sanz and Vázquez, 1998). This test has a total score ranging from 0 to 63 (higher scores indicate more severe depressive symptoms). Sleep quality was also assessed using the Spanish version of the Pittsburgh Sleep Quality Index (PSQI), a self-rated questionnaire that explores different aspects of sleep disturbance, with a total score ranging from 0 to 21 (higher scores indicate poorer sleep quality) (Buysse et al., 1989; Macías and Royuela, 1996). Quality of life and general health status were evaluated using the Spanish version of the Short-Form Health Survey (SF-36) (Alonso et al., 1995; Ware, 2000), which ranges from 0 to 100, where 0 is the worst and 100 the best status. Pain pressure threshold and tolerance were measured at the 18 tender-point sites (Wolfe et al., 1990) using a pressure algometer (Wagner Force One, Model FDI). The results for these variables are presented in Table 1.

Procedure and EEG recording
Participants were fitted with an electrode cap for EEG recording and were seated in a comfortable armchair in an electrically isolated room with low light and noise levels. They were instructed to keep their eyes open and their gaze fixed (looking at a specific point on the wall, located 1.5 m in front of them) during the 10-minute recording session. They were also asked to blink when needed, but to try not to blink too often. Brain activity was recorded with a 28-electrode cap (Electro-Cap International, Inc., Eaton, OH, USA), following the 10-20 International System, and referenced to the nose.
An electrode placed at FPz served as ground. The vertical and horizontal electrooculogram was recorded using two electrodes placed above and below the left eye and two electrodes attached to the outer canthi of the eyes. The EEG was recorded with a SynAmps amplifier (Neuroscan Labs, Charlotte, NC, USA) at an acquisition rate of 500 Hz. The signal was filtered online with a 0.1-100 Hz band-pass filter and a 50-Hz notch filter. Electrode impedances were kept below 10 kΩ.

EEG preprocessing
EEG recordings were preprocessed using EEGLAB 14.1.1 (Delorme and Makeig, 2004) running in MATLAB R2017b. Noisy electrodes were removed and reconstructed using spherical interpolation (a total of 6 electrodes were interpolated in the FM group and 7 in the HC group, giving an average of fewer than 0.15 interpolated electrodes per participant). Segments with muscular noise or bad electrode recordings were removed manually. Consecutive epochs of 2 s were extracted, and Independent Component Analysis (ICA) for noise removal was applied using Extended Infomax ICA. Thirty independent components (ICs) were extracted from the recording of each participant. The Multiple Artifact Rejection Algorithm (MARA) software was used to automatically select ICs related to noisy activity, including ocular artifacts, muscular artifacts, and loose electrodes (Winkler et al., 2011). This step was reviewed by the experimenter to avoid possible misclassification of the ICs by the algorithm. During the manual steps of preprocessing, the researcher was blind to the group to which each EEG recording belonged. After removing the electrooculogram, the EEG was re-referenced to the average reference. The EEG was band-pass filtered from 0.5 Hz to 40 Hz using an FIR filter. Subsequently, to homogenize the duration of the recordings across subjects, we selected the first 219 two-second epochs of each recording, making a total of 438 s. This number of epochs was selected as a good trade-off between keeping recordings of considerable duration and not eliminating too many participants. Three FM and 2 HC participants were removed for having fewer than 219 epochs, yielding a final sample of 43 FM and 51 HC.

Network-based connectivity
For network-based connectivity we first performed Temporal-Concatenation Group ICA (hereinafter referred to as group-ICA), which provides a powerful method to analyze functional brain networks at the multi-subject level (Raud and Huster, 2017). First, an initial Principal Component Analysis (PCA) was computed for data reduction and dimensionality estimation. To select the number of independent components (ICs) we followed the criterion suggested by Huster and Raud (2018), i.e., the first n components that altogether explain 90% of the variance of the dataset. The EEGs of all participants were concatenated along the temporal dimension and the group-ICA decomposition was performed. Each of the extracted ICs was defined by a common topography across subjects, and its time course was reconstructed for each participant. Group-ICA was performed using the software provided by the same authors (Huster and Raud, 2018). Subsequently, the phase lag index (PLI) was computed between all pairwise combinations of the reconstructed time series of the ICs. The PLI measures the asymmetry of the distribution of phase differences between two signals (Stam et al., 2007) and returns values between 0 (no phase-locking, or phase-locking with zero lag) and 1 (perfect phase-locking, discarding zero-lagged phase coupling).
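To make this definition concrete, a minimal PLI computation for a single frequency band is sketched below, using the instantaneous phase obtained from the analytic signal. This is an illustrative implementation rather than the authors' analysis pipeline, and the band edges and synthetic signals are assumptions chosen only for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pli(x, y, fs, band):
    """Phase lag index between two signals within a frequency band.

    PLI = |mean(sign(sin(phi_x - phi_y)))|; it is 0 for no phase-locking
    (or zero-lag locking) and 1 for perfect non-zero-lag phase-locking.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.sign(np.sin(phx - phy))))

# Illustrative example: two 20-Hz signals with a constant 45-degree lag
fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t - np.pi / 4) + 0.5 * rng.standard_normal(t.size)
print(f"PLI (17-34 Hz): {pli(x, y, fs, (17, 34)):.2f}")   # high, close to 1
```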
The PLI was computed from 2 Hz to 40 Hz in 1-Hz steps. The mean PLI values across all pairwise IC combinations were computed for the group comparisons (see Fig. 1, left). Afterwards, we selected the frequency band that showed significant group differences and performed group comparisons for each pair of ICs (see Fig. 1, right).

Microstates analysis
For the microstate (MS) analyses we used the Microstate toolbox (Poulsen et al., 2018). The EEG was segmented based on the Global Field Power (GFP) and then classified into different classes according to topography. The datasets were normalized, and a total of 1000 GFP peaks per subject, with a minimum peak distance of 10 ms, entered the segmentation for the extraction of the GFP peak maps. The calculation of the cluster maps was done using the EEGs of both groups together. The optimal number of cluster maps was selected using the cross-validation criterion (Pascual-Marqui et al., 1995), comparing classifications in a range from 2 to 8 clusters. The clustering method for classifying the MS was the modified K-means algorithm (Pascual-Marqui et al., 1995). The convergence threshold was set to 10⁻⁶ and the maximum number of iterations to 1000. Given that the modified K-means is a stochastic algorithm, we applied 50 restarts of the classification method and selected the solution with the lowest cross-validation criterion value. Once the number of MS prototypes was selected, they were back-fitted to all the recordings, ignoring their polarity, following the recommendations for spontaneous EEG (Poulsen et al., 2018). The back-fitting of the EEG to the MS prototypes was performed by computing the Global Map Dissimilarity index (Murray et al., 2008). Short periods of unstable EEG topographies (shorter than 30 ms) were filtered out using the "small segments rejection" procedure described in Poulsen et al. (2018). For the statistical analyses we extracted the following parameters: duration (defined as the average time a MS remains stable), occurrence (the number of times a microstate occurs per second), and coverage (the proportion of time covered by each MS).

Statistics
Group differences in sociodemographic and clinical variables, PLI values, and microstate parameters were evaluated using independent-samples t-tests. In addition, we performed Spearman's rank correlation analyses to explore the relation between clinical variables and connectivity values. To correct for multiple comparisons we applied the False Discovery Rate (FDR) correction using the Benjamini-Hochberg method (Benjamini and Hochberg, 1995). The FDR correction was applied independently for the global PLI and for the microstate parameters. Effect sizes for the PLI and microstate parameters are reported as Hedges' g (Lakens, 2013).

Demographic and clinical variables
No between-group differences were observed in demographic variables such as age, weight, height, or education. Nevertheless, patients showed significant differences in symptoms related to FM, such as pain, depression, fatigue, sleep quality, and pain pressure threshold and tolerance (see Table 1).

Connectivity analyses
We first extracted 6 independent components (ICs), which explained 92.5% of the total variance. Then, the connectivity analysis between each pair of ICs, and the average over all of them, was performed. We observed significantly higher global (average) PLI values for patients with FM at beta frequencies (from 17 to 34 Hz), with pFDR < 0.05.
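The three microstate parameters compared below (duration, occurrence, and coverage) follow directly from run-length statistics of the back-fitted label sequence. The sketch below is a minimal illustration of this computation; it is not the Microstate toolbox code, and the sampling rate and example labels are assumptions.

```python
import numpy as np

def microstate_parameters(labels, fs):
    """Duration (s), occurrence (1/s), and coverage for each microstate class.

    `labels` is a 1-D integer array giving the microstate class of each sample.
    """
    # Run boundaries: indices where the label changes
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [labels.size]))
    run_labels = labels[starts]
    run_len = ends - starts

    total_time = labels.size / fs
    params = {}
    for k in np.unique(labels):
        runs = run_len[run_labels == k]
        params[k] = {
            "duration": runs.mean() / fs,          # mean stable period
            "occurrence": runs.size / total_time,  # appearances per second
            "coverage": runs.sum() / labels.size,  # fraction of total time
        }
    return params

# Example with 4 classes at 500 Hz (synthetic labels for illustration)
rng = np.random.default_rng(0)
labels = np.repeat(rng.integers(0, 4, 200), rng.integers(30, 80, 200))
for k, p in microstate_parameters(labels, fs=500).items():
    print(k, {m: round(v, 3) for m, v in p.items()})
```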
For the mean PLI values in this band (17-34 Hz), an independent-samples t-test showed t(92) = 3.76 and p = 0.0011, with Hedges' g = 0.77 (mean global PLI from 17 to 34 Hz: FM = 0.040 ± 0.014; HC = 0.031 ± 0.009) (see Fig. 1a and b). Afterwards, we analyzed the differences in connectivity between all pairwise combinations of ICs. Several IC pairs showed higher connectivity in the FM group, with differences at pFDR < 0.05 (see Fig. 1c and d). These differences involve IC 2 (in its interconnections with ICs 3, 4, and 5) and IC 3 (in its interconnections with ICs 1, 2, and 5). To clarify the relation between long-distance connectivity and the clinical measures, we correlated the mean global PLI (from 17 to 34 Hz) with the clinical variables listed in Table 1 using Spearman's rank-order correlation coefficient (see supplementary data). None of these variables was significantly correlated with the PLI when the FM group or the healthy control participants were analyzed separately. When gathering data from both groups of participants, we found that PLI at beta was significantly correlated with all the clinical variables. All these correlations were in the same direction: higher PLI was related to greater impairment.

Microstate (MS) analyses
We extracted 4 microstates based on the cross-validation criterion (see Fig. 2). The four MS accounted for 62.0% of the Global Explained Variance (GEV); although this GEV is lower than typically reported, it is similar to values obtained in previous resting-state EEG research (Seitzman, 2017; Britz et al., 2010). Each MS contributed to the GEV in each group of participants as follows: MS1: FM = 15.3% ± 9.1, HC = 20.6% ± 8.9; MS2: FM = 12.5% ± 8.1, HC = 12.2% ± 5.3; MS3: FM = 12.8% ± 8.2, HC = 11.8% ± 8.9; MS4: FM = 9.2% ± 5.3, HC = 8.5% ± 8.9. We observed that MS1 showed a topography similar to the one described in the literature as microstate class C (Britz et al., 2010; Michel and Koenig, 2018). MS1 had significantly lower occurrence and coverage in patients with FM than in HC, while the duration parameter did not reach significance, although it approached it (see Table 2). MS2 showed a topography similar to the one described as microstate class C', with no significant differences between groups. MS3 showed a topography similar to that described as microstate class E, and no significant differences were observed between groups. Finally, MS4 showed a topography similar to the one usually referred to as microstate class D, again with no group differences in any of the parameters analyzed.

Discussion
In the present study, we investigated whether patients with fibromyalgia show alterations in their electroencephalographic activity during open-eyes resting state. We pursued two novel analyses of the EEG not previously applied to data recorded in FM. First, we measured functional connectivity between different networks; second, we performed a broadband microstate analysis to evaluate patterns related to spontaneous thought and neural processes that may be altered in chronic pain. We found higher global functional connectivity in the beta band for patients with FM, and also observed differences in microstate parameters between patients and controls. These results extend current knowledge of the brain activity of chronic pain patients during ongoing pain and provide physiological markers of altered brain function in FM.
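As a sanity check, the test statistic and effect size reported above can be recomputed from the published summary values alone (n = 43 and 51; means 0.040 ± 0.014 and 0.031 ± 0.009). The sketch below is a verification from the summary data, not the original analysis code.

```python
import numpy as np

n1, m1, s1 = 43, 0.040, 0.014   # FM group (mean global PLI, 17-34 Hz)
n2, m2, s2 = 51, 0.031, 0.009   # healthy controls

# Pooled standard deviation and independent-samples t statistic
sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t = (m1 - m2) / (sp * np.sqrt(1 / n1 + 1 / n2))

# Cohen's d with Hedges' small-sample correction factor
d = (m1 - m2) / sp
g = d * (1 - 3 / (4 * (n1 + n2 - 2) - 1))

print(f"t({n1 + n2 - 2}) = {t:.2f}, Hedges g = {g:.2f}")
# -> t(92) = 3.76, g = 0.77, matching the reported values
```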
For the network connectivity analysis we first extracted six components using group-ICA decomposition, each characterized by a different topography and time course. This group-level decomposition method is a novel and powerful tool for studying functional brain networks in EEG data (Huster and Raud, 2018). Subsequently, we analyzed phase connectivity among components and found that patients with FM had higher global connectivity values at beta frequencies (≈17-34 Hz). We also observed group differences between pairwise PLI values, especially involving IC 2 and IC 3. Beta-band oscillations have classically been related to activity in motor areas (Pfurtscheller and Lopes da Silva, 1999; Pfurtscheller et al., 2005), although recently they have also been implicated in long-range communication, top-down processing, and the preservation of the current brain state (Spitzer and Haegens, 2020; Engel and Fries, 2010). These oscillations are mechanistically related to the facilitation of network-level communication (Kopell et al., 2000; Varela et al., 2001; Alavash et al., 2017; Donner and Siegel, 2011). In particular, phase synchronization at beta frequencies is thought to regulate communication among distant neural groups, which can be used to maintain information in working memory and to facilitate the integration of distributed processing (Siebenhühner et al., 2016; Fries, 2015; Kornblith et al., 2016). This frequency band has also been related to feedback predictions in the predictive coding model (Michalareas et al., 2016; Brodski-Guerniero et al., 2017) and to the endogenous activation and reactivation of cortical content representations (Spitzer and Haegens, 2020). Our results indicate that patients with FM show hyper-synchronization among distant, distributed neural circuits. As beta activity has also been related to the continuation of the current cognitive set and the dominance of endogenous top-down influences, its pathological enhancement may lead to the deterioration of flexible behavior and cognitive control (Engel and Fries, 2010). In this vein, patients with FM consistently show impairments of executive function, attention, and working memory, including poor selective and divided attention, slow information processing, and vulnerability to distraction (Tesio et al., 2015; Kravitz and Katz, 2015; Glass, 2009; Teodoro et al., 2018). The observed abnormally high synchronization among long-distance networks could thus be a mechanism related to the impaired attention and processing of external stimuli, and to the concomitant cognitive dysfunction reported by patients with fibromyalgia. Connectivity values in the beta range were significantly correlated with the measured clinical variables when using data from the whole sample (FM and HC), suggesting a positive relation between long-distance beta phase connectivity and symptom severity. Nevertheless, these results should be interpreted with caution, since the correlations were far from significance when computed within the FM or HC groups separately (see supplementary data). The lack of correlations within the FM group could be explained by the high heterogeneity of the disease and the existence of different patient profiles with diverse clinical manifestations (de Souza et al., 2009; Triñanes et al., 2014).
While FM symptoms are not dichotomous, and everyone (healthy controls as well as patients) occupies a position on that continuum, the distributions of scores on some clinical variables are clustered by group (see scatterplots in the supplementary data), which may explain the significant correlations for the whole sample. Overall, these results suggest that the PLI is useful for differentiating between the two groups, but shows a low correlation with specific symptoms of the FM spectrum within patients. Placing our results in the context of recent research analyzing spontaneous magneto- and electroencephalographic activity, previous investigations have also found alterations at beta frequencies in patients with FM. For example, González-Roldán et al. (2016) found increased beta power and increased power cross-correlation between scalp electrodes located in the left hemisphere of patients. In this vein, Lim et al. (2016) found increased beta power in FM, with the largest group differences in the anterior insular cortex, primary motor cortex, and left S1 and S2. Nevertheless, alterations have also been found in other indices, such as delta power (González-Roldán et al., 2016), theta power (Fallon et al., 2018; Lim et al., 2016), centroparietal theta synchronization (González-Roldán et al., 2016), global theta connectivity (Choe et al., 2018), and gamma power (Lim et al., 2016). Although there are some common points, there is still little consistency among the electrophysiological indices observed during resting state. These disparities may be explained by differences in the characteristics of the samples and in the types of analyses (e.g., power analysis at scalp or source level, different functional connectivity indices). Regarding the microstate analysis, we found a reduction in the occurrence and coverage of Microstate 1, which also showed the highest global explained variance. This microstate exhibits an anterior-posterior topography and corresponds to the one described in the literature as Microstate C (Koenig et al., 1999). Similar observations have been reported for patients with dementia and panic disorder, who respectively showed reduced duration and occurrence of Microstate C (Kikuchi et al., 2011; Nishida et al., 2013). Microstate class C has been positively correlated with blood-oxygen-level-dependent (BOLD) activity in areas such as the dorsal anterior cingulate cortex, the right anterior insula, and the inferior frontal gyri (Britz et al., 2010). These areas are part of the so-called salience network, which is involved in switching between central-executive function and the default mode. Among other functions, the salience network is thought to contribute to self-awareness through the integration of sensory, emotional, and cognitive information. Areas belonging to this network, such as the insular cortex and the anterior cingulate cortex (ACC), are also involved in the processing of nociceptive input (Tracey and Mantyh, 2007), and brain imaging studies have frequently observed functional and structural alterations in these areas in patients with FM (Ichesco et al., 2014). Therefore, the reduced duration, occurrence, and coverage of this MS is consistent with the fact that FM patients show impaired performance and altered brain activity during cognitive control tasks (Bell et al., 2018; González-Villar et al., 2017b), processes that involve the activation of the insula and ACC (Swick et al., 2011; Aron, 2011).
Microstate C has also been related to the activation of brain areas involved in autonomic and interoceptive processing (Britz et al., 2010; Pipinis et al., 2017; Schiller et al., 2019). The reported data could be related to a reduced attentional focus towards the interoceptive experience in FM, as reported by Duschek et al. (2017), who found decreased interoceptive awareness in this population; this is also in line with the relation between reduced heartbeat perception and increased pain-related affect and symptom severity (Borg et al., 2018). Nevertheless, our results are inconsistent with previous reports of increased attention to body signals in these patients (Borg et al., 2015). Finally, Ceko et al. (2015) found reduced deactivation of the fMRI response over default-mode network (DMN) regions (posterior cingulate/precuneus, medial prefrontal cortex) in patients with FM during a working memory task, as well as reduced modulation of DMN deactivation by task demands. These results are also consistent with our previous observations of reduced modulation of electrophysiological indices by external events in patients with FM (González-Villar et al., 2017a, 2017b; González-Villar et al., 2019). Altogether, the evidence obtained from the connectivity and microstate analyses is convergent, suggesting alterations in a neurophysiological mechanism that may be related to the diminished ability to process both interoceptive and exteroceptive information that FM patients often exhibit. One limitation of this study relates to the patients' consumption of medication, which could not be interrupted for the study and whose effects are difficult to isolate. In addition, the cross-sectional design does not allow causal relations to be established between EEG features and the clinical manifestations of FM, a complex syndrome characterized by a plethora of symptoms (mainly chronic pain, but also cognitive and affective). Furthermore, the design of the study does not allow us to clarify whether the findings are FM-specific or common to other chronic pain diseases. In conclusion, the present findings indicate that FM participants show increased connectivity across different brain networks in the beta band, as well as different microstate dynamics during resting state. Although we used two independent approaches to analyze the spontaneous EEG data (i.e., connectivity of independent components and microstate analysis), the group differences in both physiological outcomes are related to the processing of endogenous top-down information and the minimization of novel external input. These alterations could be related to the subjective complaints about deficits in attentional processes and cognitive functioning commonly reported in this chronic pain disorder. The present results contribute to the understanding of the alterations in the central nervous system of patients with FM and could help in the search for EEG biomarkers for its diagnosis.

Acknowledgments
This study was supported by funding from the Spanish Government (Ministerio de Economía y Competitividad; grant number PSI2016-75313-R) and from the Galician Government (Consellería de Cultura, Educación e Ordenación Universitaria; axudas para a consolidación e estruturación de unidades de investigación competitivas do Sistema universitario de Galicia; grant number GRC GI-1807-USC; REF: ED431-2017/27). A.G.V.
was partially supported by a grant from the Xunta de Galicia (Axudas de apoio á etapa de formación posdoutoral 2018) and by the Portuguese Foundation for Science and Technology within the scope of the Individual Call to Scientific Employment Stimulus 2017.

Credit author statement
AGV was involved in data acquisition, data analysis, writing of the draft, and review and editing of the manuscript. YT was involved in the conceptualization of the work, data acquisition, and review of the manuscript. CGP was involved in the conceptualization of the work and data acquisition. MCP was involved in the conceptualization of the work, funding acquisition, supervision, and review and editing of the manuscript.

Supplementary materials
Supplementary material associated with this article can be found in the online version at doi: 10.1016/j.neuroimage.2020.117266.
Micronucleus Assay of Buccal Mucosa Cells in Waterpipe (Hookah) Smokers: A Cytologic Study

Methods: This was a case-control study. A total of 30 male waterpipe smokers and 30 non-smokers were included. Exfoliated buccal mucosa cells were scraped with a wooden spatula and spread over glass slides. The mean number of micronuclei was determined using Feulgen-stained slides. The number of micronuclei per 1000 cells was calculated and compared between the two groups of smokers and non-smokers.

Introduction
For many centuries, the waterpipe has been used for tobacco smoking in Asia and Africa. Traditionally, most waterpipe users have been concentrated in North Africa and South-east Asia (1). Nowadays, waterpipe smoking is a global problem; published data indicate that the number of waterpipe users is increasing among women and teenagers. Waterpipe smoking delivers high levels of nicotine to the mucosal cells of the oral cavity and respiratory tract, and the inhaled smoke contains toxic materials, including carbon monoxide and carcinogenic compounds (2). Waterpipe smoke intensifies the risk of carcinomatous changes (3-4). Waterpipe smokers inhale higher doses of nicotine than cigarette smokers, and inhalation of chemical agents such as nicotine causes genetic damage. Revealing genetic damage in persons at risk from exposure to toxic materials is a practical tool for evaluating the genotoxic effect of agents and malignant transformation. Bio-monitoring of individuals exposed to genotoxic agents using exfoliated buccal mucosa cells is a simple and reliable method for determining genotoxic effects. Stich et al. were the first to use the micronucleus test on exfoliated buccal mucosa cells for tracing genotoxic exposure in humans (5). The micronucleus test is an inexpensive and non-invasive method for screening persons who are at risk of cancer development (6). A micronucleus is a separated part of the nucleus that originates during cell division; micronuclei are generated from chromosomal fragments of interphase cells (7). Micronuclei are cytoplasmic structures measuring between 1/5 and 1/3 of the size of the nucleus, with staining similar to that of the nucleus (8). In general populations, the mean prevalence of cells with micronuclei is 0 to 0.9%. Any increase in the micronucleus count reflects chromosomal alterations, and the number of micronuclei has been related to the degree of carcinogenic effect (9). In a study that was among the first investigations of the effect of waterpipe smoking on cytogenetic changes, El-Setouhy et al. showed a higher level of micronuclei in waterpipe smokers of a rural Egyptian population (10). In Iran, the popularity of waterpipe smoking is growing. This is an important issue for those concerned with health planning programs. Despite the increasing tendency of young people and women to use the waterpipe, knowledge about the genotoxicity of waterpipe smoking is insufficient. The aim of this study was to evaluate the genotoxic effects of waterpipe smoking by counting micronuclei in buccal mucosa cells in a cytologic study.

Materials and Methods
This was a case-control study with a simple sampling method. The study was carried out in the Department of Pathology, Faculty of Dentistry, Shahed University, Tehran, Iran, from October 2015 to April 2016. The study was approved by the ethical committee of Shahed University and registered as IR.Shahed.Rec.1394.301.
Using Cochran's sample-size formula with a 95% confidence level and 90% power, the required sample size was determined to be 27.89 subjects in each of the smoker and non-smoker groups. A total of 60 subjects (30 waterpipe smokers and 30 non-waterpipe, non-cigarette smokers) entered the study. All subjects in both the case and control groups were males aged 20 to 50 years. Persons younger than 20 years, those suffering from systemic disease or any oral lesions, those consuming any type of drugs, those exposed to dental radiography beams in the preceding 6 months, and alcohol consumers were excluded from both groups. Waterpipe smokers were recruited from a local waterpipe café in Tehran, Iran; non-smokers were recruited from the dental school of Shahed University. All subjects lived in Tehran and were not farmers or workers in arsenic-related industries. The inclusion criterion for waterpipe users was using the waterpipe at least once a week. To reduce the effect of cigarette smoking on the results, the protocol of El-Setouhy et al. was used to select the samples; accordingly, persons who had never smoked cigarettes, or who had smoked at most 100 cigarettes in their whole life, were included in the study (10). The duration of waterpipe smoking was registered as the number of smoking sessions per year (11). Informed consent was obtained from all subjects before participation. Demographic information was entered in a registration form and coded; participants were not identified by name or family. To collect the buccal mucosa cells, all subjects rinsed their mouth twice with normal saline. The exfoliated buccal cells were scraped with a wooden spatula and spread onto glass slides. Samples were fixed in Carnoy's fixative (methanol and glacial acetic acid at a ratio of 3:1) for 30-35 minutes and then dried at room temperature. The modified method of Thomas et al. was used for staining the micronuclei by the Feulgen reaction (12). The Feulgen reaction was performed as follows: slides were dipped in 1 N HCl at 60 °C for 10 minutes, rinsed in distilled water for 3 minutes, placed in Schiff's reagent for 90 minutes, and then in normal saline for 10 minutes. The slides were then placed in 0.5% sodium metabisulfite solution three times and rinsed with tap water. Next, the slides were counterstained with 1% light green for 15 minutes, rinsed with tap water, and finally dried and mounted. Cytoplasmic structures with staining similar to that of the nucleus and measuring between 1/5 and 1/3 of the nuclear size were considered micronuclei (8). Cells presenting features of cell death, including karyorrhexis, karyolysis, and pyknosis, were not included in the study (Figure 1). The micronuclei were counted in a blinded fashion. Only cells with distinct cellular margins were counted; overlapping cells and cellular aggregates were not considered. An optical microscope (ZEISS, Germany) with an oil-immersion lens at ×1000 magnification was used for the micronucleus count, which was expressed as the number of micronuclei per 1000 cells per subject (10). The mean number of micronuclei was determined for all samples and presented as mean ± SD. Linear regression and the t-test were employed at a significance level of P ≤ 0.01. Statistical analyses were performed with the SPSS 20 package (IBM Company, Chicago, IL, USA).

Results
The average ages of the waterpipe smokers and non-smokers were 26.83 ± 3.74 and 28 ± 7.88 years, respectively.
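The two statistical procedures just described, an independent-samples t-test on the per-subject micronucleus frequencies and a linear regression of micronucleus count on smoking history, can be sketched as follows. The data arrays are synthetic placeholders that only roughly match the reported group summaries; this is an illustration of the analysis, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic micronucleus counts per 1000 cells (illustrative, 30 per group)
smokers = rng.normal(25.0, 1.83, 30)       # reported: 25 +/- 1.83
controls = rng.normal(8.78, 0.83, 30)      # reported: 8.78 +/- 0.83

t, p = stats.ttest_ind(smokers, controls)
print(f"t = {t:.2f}, p = {p:.3g}")         # highly significant difference

# Regression of micronucleus count on years of waterpipe smoking
years = rng.uniform(1, 11, 30)             # reported range: 1-11 years
mn = 20 + 0.33 * years + rng.normal(0, 2, 30)   # slope 0.33 as reported
res = stats.linregress(years, mn)
print(f"slope = {res.slope:.2f} micronuclei per year, p = {res.pvalue:.3g}")
```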
The duration of waterpipe smoking ranged from 1 to 11 years, with a mean of 3.3 ± 2.24 years. The mean numbers of micronuclei in the buccal mucosa of waterpipe smokers and non-smokers were 1.94 ± 0.39 and 1.68 ± 0.35, respectively. The micronucleus counts in the buccal mucosa of waterpipe smokers and non-smokers were 25 ± 1.83 and 8.78 ± 0.83, respectively (Table 1). The t-test revealed that the micronucleus count in waterpipe smokers was significantly higher than in non-smokers (P = 0). The relation between the number of waterpipe smoking sessions and the micronucleus count was also significant (P = 0) (Figure 2). Regression analysis of the number of waterpipe smoking sessions per year indicated that the micronucleus count increased by 0.33 with each additional year of smoking (P = 0.35), and each waterpipe smoking session was associated with an increase in the micronucleus count of up to 0.027 (P = 0).

Discussion
This study showed that the mean number of micronuclei in the buccal mucosa cells of waterpipe smokers was significantly higher than in non-smokers. The genotoxic effect of waterpipe smoking on buccal cells and peripheral blood leukocytes has been demonstrated with the comet assay (13), the sister chromatid exchange (SCE) assay (14), and chromosome analyses (15). In the present study, the previous results were confirmed using a simpler method. The micronucleus assay is a reliable, simple, and inexpensive biological test for demonstrating the genotoxic effect of agents. The results of the present study showed that the micronucleus count of buccal mucosa cells in waterpipe smokers was higher than in non-smokers, consistent with El-Setouhy et al. (10). It has been reported that micronucleus counts in tobacco chewers and cigarette smokers are higher than in the general population; based on these reports, the micronucleus counts in smokers were 1-2 times those of non-smokers (16-20). Compatible with previous findings in cigarette smokers, our results showed that the mean micronucleus count in waterpipe smokers is almost 1.5-fold that of persons who never smoked a waterpipe. The false belief that waterpipe smoking is harmless in comparison to cigarette smoking derives from the method of waterpipe use: passing the smoke of burned tobacco through water. Waterpipe smoke contains toxic materials such as carbon monoxide and heavy metals (2). The carbon monoxide in the expired air after waterpipe and cigarette smoking is 23.7 ppm and 2.7 ppm, respectively, and the carboxyhemoglobin level after waterpipe smoking is 3 times higher than after cigarette smoking (21). The amount of toxin produced during one session of waterpipe smoking is equal to that of 10 cigarettes per day (22). The difference originates from the different exposure times to smoke: the average durations of cigarette and waterpipe smoking are 5-7 minutes and 45 minutes, respectively. A person inhales 0.5-0.6 L of smoke during cigarette smoking; this amount equals 0.15-1 L during waterpipe smoking (1, 23). The present study showed that each additional year of waterpipe smoking history increased the micronucleus count by 0.33, and each waterpipe smoking session was associated with an increase of up to 0.027. These results are compatible with the finding that the hazard of waterpipe smoking depends on the amount and duration of smoking (24).
The absence of an established protocol for measuring the dose and duration of waterpipe smoking is a problematic concern when studying the impact of waterpipe smoking via micronucleus assessment. Different staining methods have been used for micronucleus assessment, and the application of non-specific DNA stains to demonstrate micronuclei in epithelial cells leads to false-positive or false-negative results. It has been shown that the results of micronucleus evaluation in oral mucosa cells depend strongly on the staining method (25). To eliminate the effect of the staining method on our results, we used the Feulgen stain; the Feulgen technique is the most reliable method for staining nuclear DNA and evaluating micronuclei in cytological material (26). It has been reported that air pollution, exposure to agricultural pesticides and chronic occupational exposure to arsenic are factors contributing to higher micronucleus counts in buccal mucosa and peripheral blood lymphocytes (10,27). In the present study, all samples were collected from a local waterpipe café in Tehran, so all subjects were under the same conditions regarding the inhalation of polluted air, and none was a farmer or a worker in arsenic-related industries. To achieve more reliable results and to exclude the possible effect of female hormones on the findings, the study was restricted to males aged 20 to 50 years. The sampling method used and the use of a DNA-specific stain reduced possible biases in the results. The present study was limited to male waterpipe smokers; studying the genotoxic effects of waterpipe smoking in females and comparing the results with those of male users is strongly recommended. In most societies, such as Iran, waterpipe smoking is a recreational activity. Most waterpipe smokers are not cigarette smokers, because they believe that cigarette smoking is more harmful than waterpipe smoking; conversely, some waterpipe users are heavy cigarette smokers. Because of this divergence, tissue sampling was very time consuming and difficult. Waterpipe smoking remains a public health problem: recent studies have shown increasing levels of waterpipe smoking among youths and educated persons (28), and waterpipe smoking has adverse health effects similar to, or even greater than, those of cigarette smoking (29). Updated socio-demographic research on waterpipe smoking is an important necessity for managing preventive efforts.

Conclusion
Waterpipe smoking had a genotoxic effect on human buccal mucosa cells, and this genotoxic effect was dose-dependent. Given the increasing interest of youths and women in waterpipe smoking, further research is needed to study the health effects of waterpipe smoking on different human cells and tissues in both genders and across age ranges.
2020-03-19T10:47:39.526Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "0d78899c681e170ebb16865fef97cbea96c12ded", "oa_license": "CCBY", "oa_url": "http://ijp.iranpath.org/article_38292_1b5199e6c9464002a45a400a980e8ac2.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1a4b65899dad3aa4295ff654d4903ae7cfb8ab83", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119092280
pes2o/s2orc
v3-fos-license
Superconducting spin filter
Consider two normal leads coupled to a superconductor; the first lead is biased while the second one and the superconductor are grounded. In general, a finite current $I_2(V_1,0)$ is induced in the grounded lead 2; its magnitude depends on the competition between processes of Andreev and normal quasiparticle transmission from lead 1 to lead 2. It is known that in the tunneling limit, when the normal leads are weakly coupled to the superconductor, $I_2(V_1,0)=0$ if $|V_1|<\Delta$ and the system is in the clean limit. In other words, Andreev and normal tunneling processes compensate each other. We consider the general case: the voltages are below the gap, and the system is either dirty or clean. It is shown that $I_2(V_1,0)=0$ for a general configuration of the normal leads; if the first lead injects a spin-polarized current then $I_2=0$, but the spin current in lead 2 is finite. An XISIN structure, where X is a source of spin-polarized current, could be applied as a filter separating spin current from charge current. We make analytical progress by calculating $I_1(V_1,V_2)$ and $I_2(V_1,V_2)$.
Hybrid systems consisting of a superconductor (S) and two or more normal-metal (N) or ferromagnetic (F) probes have recently started to attract great attention [1,2,3,4,5]. Among the most striking new results is the prediction that NSN (FSF) devices can play the role of an entangler producing Einstein-Podolsky-Rosen (EPR) pairs [4], with potential applications, for example, in quantum cryptography [6]. Not long ago a rather unusual effect was described in a normal metal - tunnel barrier (I) - superconductor - tunnel barrier - normal metal (NISIN) junction (see, e.g., Fig. 1b) [2,3]. It was shown that when N$_1$ is biased while N$_2$ and S are grounded, there is no current injection from N$_1$ into N$_2$ at subgap biases; the main assumptions were: 1) the superconductor is clean, 2) a large number of conducting channels is involved in electron tunneling through the NS interfaces [2,3]. In other words, the subgap cross conductance $G_{12} \equiv \partial_{V_1} I_2(V_1,0)|_{|V_1|<\Delta} = 0$, where the current $I_1$ flows in N$_1$, $V_1$ is the bias between N$_1$ and S, and $V_2$ that between N$_2$ and S. The suppression of $G_{12}$ was attributed to the compensation of the contributions to the current from Andreev and normal quasiparticle tunneling processes between N$_1$ and N$_2$ [2]. It was also noted that $G_{12} \neq 0$ in FISIF junctions: $G_{12}$ decays exponentially as $\exp(-r/\xi)$ with the characteristic distance $r$ between the normal terminals (see, e.g., Fig. 1b), where $\xi$ is the superconductor coherence length; at small $r/\xi$, $G_{12}$ also decays rather quickly (on atomic scales), as $1/(k_F r)^2$ ($k_F$ in the superconductor) [2].
[Fig. 1: The outline of the setup. N$_{1,2}$ are normal metals or ferromagnets.]
Thus with clean superconductors a measurement of $G_{12}$ may become difficult. In this letter we first of all generalize the results of [2,3] and dispense with assumption 1) [i.e., S is not restricted to be clean]. We show that when the superconductor is dirty (the mean free path is smaller than $\xi$), the Andreev and normal transmission rates [as well as $G_{12}$ in FISIF junctions] decay slowly with the characteristic distance $r$ between the normal (ferromagnetic) terminals (at $r < \xi$), in contrast to the clean regime (see Refs. [2,3]). For example, in a FISIF junction with a superconducting layer thinner than $\xi$ (see Fig. 1b), $G_{12} \sim \ln(r/\xi)$; when the superconductor is bulk, $G_{12} \sim \xi/r$ [$r > \lambda_F$ is supposed].
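As a quick check of the dirty-limit scalings just quoted (a verification sketch, not part of the original text): for the thin film the paper later uses the two-dimensional diffusion kernel $P(r,t) = \exp(-r^2/4D|t|)/4\pi d|t|$ and the Laplace variable $s = 2\Delta$; assuming additionally $\xi \sim \sqrt{D/2\Delta}$,

% verification sketch under the stated assumptions
\[
\tilde{P}(s) = \int_0^\infty \frac{dt}{4\pi d\, t}\; e^{-st - r^2/4Dt}
  = \frac{1}{2\pi d}\, K_0\!\left( r\sqrt{s/D} \right),
\]

and the Bessel asymptotics $K_0(x) \simeq -\ln x$ for $x \ll 1$ and $K_0(x) \sim e^{-x}$ for $x \gg 1$ reproduce $G_{12} \sim \ln(r/\xi)$ (up to sign) for $\lambda_F \ll r < \xi$ and $G_{12} \sim \exp(-r/\xi)$ for $r \gg \xi$.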
Measurements of effects related to electron tunneling through a superconductor (e.g., $G_{12}$) can be realized experimentally more easily in the dirty superconductor case than in the clean case, because then $r$ is not restricted to atomic scales but only by $\xi \gg \lambda_F$. We show that the contributions to the current from Andreev and normal quasiparticle tunneling processes always compensate each other in NISIN junctions (so, e.g., $I_2(V_1, V_2=0) = 0$ for $|V_1| < \Delta$ in the first nonvanishing order in the transparencies of the layers I) for any amount of disorder in the S layer. If one prepares a NISIN junction with layers I of large transparency, then normal tunneling starts to dominate Andreev tunneling (and $I_2(V_1, V_2=0) \neq 0$). We also considered a FISIN junction, in particular with $V_F \neq 0$ and $V_N = 0$; the ferromagnet F then plays the role of the spin-polarized current injector. In this case $I_2(V_F, V_N=0) = 0$ as well, but the spin current in N is finite: the charge component of the current converts into supercurrent, and spin accumulates in N. So an XISIN structure, where X is a source of spin-polarized current, could be applied in spintronics [7] as a filter separating spin current from charge current. We find the Andreev ($T_{he}$) and normal ($T_{ee}$) transmission probabilities of a NISIN sandwich for subgap energies $|E| < \Delta$ and different angles $\theta$ between the incident quasiparticle trajectory and the normal to the NS interface. It is shown that the probabilities have resonances where $T_{he} \sim T_{ee}$; the averages of $T_{he}$ and $T_{ee}$ over incident channels (over $\theta$) are equal - this is the reason why $I_2(V_1, V_2=0)$ is suppressed while the spin current $I_2^{(s)}(V_1, V_2=0)$ is finite. We start our investigation of NISIN structures with the sandwich sketched in Fig. 1a: the barriers at the NS boundaries provide specular reflection; electrons in N and S move ballistically; the number of channels at both NS boundaries is much larger than unity. The transmission probabilities $T_{he}(E,\theta)$ and $T_{ee}(E,\theta)$ [see Fig. 2] describe Andreev and normal tunneling of an electron incident on the NS boundary with angle $\theta$ and energy $E$ into, correspondingly, a hole and an electron in lead 2. Following the Landauer-Büttiker approach [8,9,10], the current is expressed through these probabilities [Eq. (1), reconstructed below], where the sum is taken over channels (spin degrees of freedom are included in the channel definition); $f^{(1,2)}$ are the distribution functions in leads 1 and 2; e.g., $f^{(1)}$ is not necessarily a Fermi function. We calculate the transmission and reflection probabilities using the Bogoliubov-de Gennes (BdG) equations. The layers I are approximated by $\delta$-barriers. Quasiparticle motion parallel and perpendicular to the NS interfaces can be decoupled [11,12]. Matching the appropriate wave functions in the normal regions and the superconductor, we obtain an $8 \times 8$ linear system of equations for the transmission amplitudes, and analytical progress can be made. It follows that if there is no barrier at the NS boundaries (other than $\Delta$), then $T_{he}/T_{ee} \sim (\Delta/E_F)^2$ for any thickness $d$ of the superconducting layer. This result is intuitively quite clear, because $\Delta \ll E_F$ can hardly reverse the direction of the quasiparticle momentum, which is of order $k_F$ [11,13]. However, if there are barriers at the NS boundaries in addition to $\Delta$ (e.g., insulating layers I), the situation changes: at certain $\theta$ the transmission probabilities have resonances where $T_{he} \sim T_{ee}$. When the transparencies of the layers I satisfy $T^{(1,2)}_{NS} \ll 1$, the areas under the resonance peaks of $T_{he}(\theta)$ and $T_{ee}(\theta)$ are nearly the same and $\langle T_{he} \rangle = \langle T_{ee} \rangle$, where $\langle \ldots \rangle = \sum_{\text{channels}} (\ldots)/N_{\text{channels}} \approx \int_0^1 (\ldots)\, d\cos^2\theta$.
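The explicit current formula invoked above [Eq. (1)] is lost in this extraction. A plausible reconstruction from the quantities the text defines (a sum over channels, the distribution functions $f^{(1,2)}$, and the probabilities $T_{ee}$, $T_{he}$), assuming the standard Landauer-Büttiker sign convention in which normal and Andreev transmission enter the charge current with opposite signs but the spin current with equal signs - a sketch, not necessarily the paper's exact expression:

% hedged reconstruction of Eq. (1); prefactors (e/h, spin sums) omitted
\[
I_2 \propto \sum_{\text{channels}} \int dE\,
  \bigl[ T_{ee}(E,\theta) - T_{he}(E,\theta) \bigr]
  \bigl[ f^{(1)}(E) - f^{(2)}(E) \bigr],
\qquad
I_2^{(s)} \propto \sum_{\text{channels}} \int dE\,
  \bigl[ T_{ee}(E,\theta) + T_{he}(E,\theta) \bigr]
  \bigl[ f^{(1)}(E) - f^{(2)}(E) \bigr].
\]

With this form, $\langle T_{he}\rangle = \langle T_{ee}\rangle$ (Eq. (2)) makes the subgap charge current vanish while the spin current remains finite, which is exactly the compensation discussed below.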
Equation (2), $\langle T_{he}\rangle = \langle T_{ee}\rangle$, is exact in the first nonvanishing order in $T_{NS}$. The resonances appear at $k_F d \cos\theta_n = \pi n$, $n = 1, 2, \ldots$; they give the leading contributions to $T_{he}$ and $T_{ee}$ and are responsible for Eq. (2). The resonance width is $\Gamma \sim \min\{1, T_{NS}, d/\xi\}$. Typical dependencies of $T_{he}(\theta)$ and $T_{ee}(\theta)$ on $\theta$ and $T_{NS}$ are illustrated in Fig. 2. In fact $\theta$ is a discrete variable; its particular value is determined by the channel of the incident particle. Equation (2) is applicable when 1) $T_{NS}(\theta)$ changes only slightly when $\theta$ changes from one channel to an adjacent one, and 2) the change of $\theta$ from one channel to another is smaller than the resonance width. Condition 1) is typically fulfilled when $T_{NS}(\theta) \ll 1$. It follows from Eqs. (1-2) that subgap charge injection from lead 1 into lead 2 in the weak-coupling regime ($T_{NS} \ll 1$) is suppressed: $I_2(V_1, V_2=0) = 0$, because the charge currents of the transmitted hole and electron quasiparticles compensate each other in lead 2; all the electron current converts into Cooper-pair supercurrent in S. However, if a spin-polarized current is injected from lead 1, a finite spin current appears in lead 2; both the transmitted electron and hole quasiparticles contribute to the spin current. The same holds for an XISIN structure with $T_{NS} \ll 1$, where $\sigma_1 = \pm 1$ labels the spin degrees of freedom in X. A general feature of the transmission probabilities and of the current is their exponential suppression with $d/\xi$ when $d \gg \xi$ ($\xi$ is the superconductor coherence length). We show below that all the results discussed above remain true in a general NISIN structure with a more complicated shape than in Fig. 1a (e.g., as in Fig. 1b), no matter whether it is dirty or clean. In general, a system of weakly coupled normal (ferromagnetic) and superconducting layers can be described by the Hamiltonian $\hat{H} = \hat{H}_1 + \hat{H}_2 + \hat{H}_S + \hat{H}_T$, where $\hat{H}_{1,2}$ refer to the electrodes N$_1$ and N$_2$, and $\hat{H}_S$ to the superconductor. The tunnel Hamiltonian $\hat{H}_T$, which we consider as a perturbation, is given by two terms corresponding to one-particle tunneling through each tunnel junction [Eq. (4), reconstructed below], where the indices $i = 1, 2$ refer to the normal (ferromagnetic) electrodes; $t^{(i)}_{kp}$ is the matrix element for tunneling from the state $k = (\mathbf{k}, \sigma)$ in the normal lead N$_i$ to the state $p = (\mathbf{p}, \sigma')$ in the superconductor. The operators $\hat{a}^{(i)}_k$ and $\hat{b}_p$ correspond to quasiparticles in the leads and in the superconductor, respectively. The current can be expressed through the quasiparticle scattering probabilities within the Landauer-Büttiker approach. It is possible to calculate the scattering probabilities within the tight-binding model (4), but it is more convenient to describe the current in the language of electrons only: the Andreev transmission probability $T_{he}$ of Eq. (1) is closely related to the crossed-Andreev (CA) tunneling rate $\Gamma^{S \leftarrow 12}_{CA}(V_1, V_2)$, which gives the number of electron pairs tunneling per second from leads 1 and 2 into the condensate of the superconductor (each lead contributing one electron to a pair) and, correspondingly, vice versa; see Fig. 5b and [2]. The elastic cotunneling (EC) rate $\Gamma^{2 \leftarrow 1}_{EC}$ corresponds to $T_{ee}$. The direct Andreev (DA) tunneling rates describe Andreev reflection in leads 1 and 2 [see, e.g., Fig. 5a]. The current in lead 2 consists of two contributions: one, $I^{(i)}_2$, comes from electron injection from lead 1 due to crossed-Andreev and cotunneling processes; the other, $I^{(I)}_2$, comes from direct electron tunneling between the lead and the superconductor. The same applies to lead 1 [Eqs. (5a,b), not reproduced in this extraction]. Using the Fermi golden rule, the rates can be found in second order in the tunneling amplitude $t_{k,p}$.
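The explicit form of the tunnel Hamiltonian [Eq. (4)] is also lost in this extraction. Its standard one-particle tunneling form, reconstructed from the matrix elements and operators defined above (a hedged sketch, not necessarily the paper's exact notation):

% hedged reconstruction of Eq. (4)
\[
\hat{H}_T = \sum_{i=1,2} \hat{H}_T^{(i)}, \qquad
\hat{H}_T^{(i)} = \sum_{k,p} \Bigl( t^{(i)}_{kp}\,
  \hat{a}^{(i)\dagger}_{k}\, \hat{b}_{p} + \mathrm{h.c.} \Bigr),
\]

with $k$ running over the states of lead N$_i$ and $p$ over those of the superconductor.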
Following the approach described in Refs. [2,14,16], we finally obtain Eq. (6) [not reproduced in this extraction], where $n^{(i)}$ is the distribution function in lead $i = 1, 2$. Hereafter we take $\hbar = 1$, $e = 1$ [we do not assume $n^{(i)}$ to be only an equilibrium Fermi function]. The rate $\Gamma^{S \to 12}_{CA}$ can be obtained from the expression for $\Gamma^{S \leftarrow 12}_{CA}$ by substituting $(1-n)$ for $n$. The kernel $\hat{\Xi}$ can be expressed through the classical probability $P(X_1, \hat{p}_1; X_2, \hat{p}_2, t)$ that an electron with momentum directed along $\hat{p}_1$, initially located at a point $X_1$ near the NS boundary, arrives at time $t$ at some point $X_2$ near the NS boundary with momentum directed along $\hat{p}_2$, after spreading through the superconducting region [Eq. (7), not reproduced in this extraction]. Here the spatial integration is performed over the N$_1$S and N$_2$S surfaces. We choose the spin quantization axis in the direction of the local magnetization in the terminal N$_{1(2)}$. The quasiclassical probabilities $G^{(i)}(X, \hat{p}, \sigma)$, $i = 1, 2$, for an electron with spin polarization $\sigma$ tunneling from the terminal N$_i$ to the superconductor are normalized in such a way that the junction normal conductance per unit area $g^{(i)}_\sigma(X)$ and the total normal conductance $G^{(i)}_N$ are determined as in Refs. [14,15]; the normal conductance per unit area discussed above is then defined as $g = G_N/A$, where $A$ is the surface area of the junction. The symbol $\theta(X_1, X_2)$ is the angle between the magnetizations of the terminals N$_1$ and N$_2$ at the points $X_1$ and $X_2$ near the junction surface; if the electrons in N$_1$ and N$_2$ are not polarized, then $\theta = 0$. In a similar way, the rate $\Gamma^{1 \leftarrow 2}_{EC}$ can be obtained from the expression for $\Gamma^{1 \to 2}_{EC}$ by substituting $(1-n)$ for $n$. The DA rates are written out in [14]. Equations (5a-7) derived here allow one to describe the transport properties of many types of junctions. Consider a FISIN junction with the ferromagnet biased with respect to the superconductor, while the normal metal N is at the same voltage as S. The ferromagnet then plays the role of a current "injector"; the electrons coming from F are distributed with some distribution function $n^{(1)}$, while the electrons deep in the terminal N are Fermi-distributed. It follows from Eqs. (5a-7) that the contributions to the current from the EC and CA processes compensate each other for subgap voltages, so $I_N(V_F, V_N = 0) = 0$. However, the spin current is finite. Finally, we consider a FISIF junction. It was shown in [2] that in this junction $I_2(V_1, 0) \neq 0$ and that $I_2(V_1, 0)$ changes its sign when the ferromagnetic terminals change their orientation from parallel to antiparallel. Naively, it could be supposed that in a FISIN junction where F is a current injector and S, N are grounded, spin accumulation at the interface of the normal metal would lead to a spin splitting of the density of states in N and to a charge current. However, this is not so: these corrections are of higher order in the tunneling amplitudes than the processes in Fig. 5 and can be neglected, because we assume that the tunneling amplitudes are small. It was also noted in [2,3] that the cross conductance $G_{12} \equiv \partial_{V_1} I_2(V_1, 0)|_{V_1=0}$ is suppressed in a FISIF structure as $1/(k_F r)^2$ when the characteristic distance between the ferromagnets is $r < \xi$. In the dirty regime there is no conductance suppression at atomic scales. Consider, for instance, the layout sketched in Fig. 1b; the width $d$ of the superconducting film is supposed to be smaller than $\xi$.
According to Eqs. (5a-7), the dependence of the cross conductance on the distance $r$ is determined by the Laplace transform $\tilde{P}(s = 2\Delta)$ of the probability $P(r,t) = \exp(-r^2/4D|t|)/4\pi d|t|$, where $D$ is the diffusion constant in the superconductor and $d < \xi$. When $\lambda_F \ll r < \xi$, $G_{12} \sim \tilde{P} \sim \ln(r/\xi)$, and if $r \gg \xi$, $G_{12} \sim \tilde{P} \sim \exp(-r/\xi)$. When the superconductor is bulk ($d > \xi$) we similarly find $G_{12} \sim \xi/r$ for $\lambda_F \ll r < \xi$. All the considerations above apply also to the CA and EC rates. Thus it is practically more convenient to measure the finite effects related to subgap electron tunneling through a superconductor when it is dirty rather than clean: in the dirty case the terminals are not restricted to distances as small as $\lambda_F$, as in the clean case, but only to distances smaller than $\xi \gg \lambda_F$. We are grateful to M. Mar'enko, Yu.V. Nazarov, V.V. Ryazanov, M.V. Feigelman, A.S. Iosselevich, and Ya.V. Fominov for stimulating discussions. M. Mar'enko drew my attention to the suppression of the zero-bias cross conductance in dirty NISIN junctions (of a certain type) and to the long-range decay of the EC and CA rates with the characteristic distance between the normal terminals, which proved important for reviewing, in the general case, the spin and charge transport in superconducting junctions weakly coupled to normal (ferromagnetic) terminals. After the paper was nearly completed, I learned that spin injection into the normal layer of a FISIN junction was mentioned in one sentence of Ref. [17]. We thank D. Feinberg for criticism and for drawing our attention to Ref. [17]. We wish to thank the RFBR (project No. 03-02-16677), the Swiss NSF and the Russian Ministry of Science.
2019-04-14T02:02:52.888Z
2003-06-22T00:00:00.000
{ "year": 2003, "sha1": "ec75fdd4d1b620a5f7026fa6afad70d8c82b41e2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0306552", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7b9124f09ce6c392ce31c79510694ffcef49e90c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1719227
pes2o/s2orc
v3-fos-license
Detecting species-site dependencies in large multiple sequence alignments
Multiple sequence alignments (MSAs) are one of the most important sources of information in sequence analysis. Many methods have been proposed to detect, extract and visualize their most significant properties. To the same extent that site-specific methods like sequence logos successfully visualize site conservation and sequence-based methods like clustering approaches detect relationships between sequences, both types of methods fail at revealing informational elements of MSAs at the level of sequence-site interactions, i.e. finding clusters of sequences and the sites responsible for their clustering, which together account for a high fraction of the overall information of the MSA. To fill this gap, we present here a method that combines the Fisher score-based embedding of sequences from a profile hidden Markov model (pHMM) with correspondence analysis. This method is capable of detecting and visualizing group-specific or conflicting signals in an MSA and allows for a detailed explorative investigation of alignments of any size tractable by pHMMs. Applications of our method are exemplified on an alignment of the Neisseria surface antigen LP2086, where it is used to detect sites of recombinatory horizontal gene transfer, and on the vitamin K epoxide reductase family, to distinguish between evolutionary and functional signals.

INTRODUCTION
Multiple sequence alignments (MSAs) are high-dimensional discrete datasets which play a prominent role in bioinformatics. They are typically involved in the functional classification of proteins and the phylogenetic reconstruction of evolutionary trees, for example. In general, there are two aspects of MSAs: analyses are mostly either species- or site-focused. Species-driven approaches usually aim at the relationship between sequences, averaging over the alignment columns. Methods for phylogenetic reconstruction as well as general sequence clustering methods are examples; they make (amongst other things) use of distance measures to impose a hierarchy on the species in an alignment. This allows for the detection of closely related species and functional clusters, and the reconstruction of gene trees or species trees. Site-driven analyses, in contrast, put more emphasis on sequence content, looking for specific sequence motifs, conservation profiles, or areas with characteristic biochemical properties like hydrophobicity or transmembrane regions, thereby averaging over the sequences or focusing on their conserved regions. A combination of both types of analyses of a (correctly aligned) MSA helps to distinguish functionally conserved from variable sites, detect clusters of sequences and find the sites responsible for a certain splitting of sequence groups. This integration can finally lead to an understanding of the functional evolution of sequences, as tree splits or cluster breaks can be annotated with the associated autapomorphies [an autapomorphy is a trait characteristic of a terminal group in a phylogenetic tree (a monophyletic group), i.e. a property that is shared by only the members of the group, but not by any other taxa]. Due to the complexity of MSAs of realistic size, thorough analyses require expert knowledge and are tedious, time consuming and error-prone. Traditionally, first-pass analyses are done in alignment editors/aligners like SEAVIEW (1), CLUSTAL_X (2), Jalview (3) or 4SALE (4).
Amino acids are usually colored with respect to their biochemical and physical properties, and conservation bars are aligned to the MSA to provide a column-based summary. A better graphical representation of the degree of conservation can be achieved by sequence logos (5), which additionally visualize the entropy of the site distributions. RNA logos also include horizontal dependencies in RNA sequences, defined by their respective secondary structure (6,7). With the arrival of hidden Markov models (HMMs) (8)(9)(10) in sequence analysis, HMM logos were introduced, presenting entropy terms based on estimated HMM parameters like emission, insertion and deletion probabilities (11,12). These site-focused methods provide an abstract summary of the sequence variability in an alignment, but usually do not allow for the detection of sequence clusters and fail at representing long sequences adequately. Apart from character-based methods, clustering of sequences is either done indirectly, via an interposed distance measure as in the case of phylogeny, or requires a meaningful way to embed sequences into a real-valued vector space, something which cannot be achieved trivially. Given such an embedding, standard dimension reduction techniques like principal component analysis (PCA) or classical multidimensional scaling (MDS) could be applied. Casari et al. (13) introduced a method for dimension reduction on MSAs, which was later implemented in the Jalview application (3). The algorithm is based on a simple mapping of sequences to binary vectors, not including gaps, and applies PCA to the binary sequence data. Our method captures both horizontal and vertical information by combining an improved embedding of sequences, including gaps, with a site-specific annotation of sequence clusters. Instead of mapping the sequence data to a binary vector, we apply an HMM-based embedding using the Fisher scores, a vector of sufficient statistics for the emission probabilities (14)(15)(16). We apply correspondence analysis (CA) (17) to the embedded sequences and sites, elaborating on the association between the two and visualizing clusters of sites and sequences in one joint plot. Dimension reduction is done in the usual way, preserving as much information as possible in the lower dimensional representation. Selection of the axes allows for a precise investigation of different signals in the alignment, as shown in studies on the Neisseria factor H binding protein and the vitamin K epoxide reductase family.

Embedding
Molecular sequences are typically represented by strings over an alphabet of either 4 or 20 characters. In order to apply numerical methods to these kinds of data, a sensible embedding into $R^n$ has to be found. Fisher scores are derived from the posterior probabilities of a fitted HMM and are known to be a sufficient statistic for the fitted HMM parameters (15,18). Fisher scores are the derivatives of the log-likelihood of an HMM with respect to all parameters of the HMM, namely the emission and transition probabilities, evaluated for each datum, i.e. for each sequence. To be more precise, the Fisher score vector $F_i$ for the $i$-th sequence $S_i$ is $F_i = \nabla_{\hat{\theta}} \log(P[S_i \mid \hat{\theta}])$, where $\hat{\theta}$ denotes the vector of HMM parameters. The Fisher scores therefore represent a site-specific, fixed-length embedding that directly encodes emission, insertion and deletion events. Intuitively speaking, the Fisher score of an HMM parameter describes the slope of the likelihood for the given datum (the given sequence) with respect to this parameter.
This can be seen as the degree of influence the datum has on the parameter in an optimization context, or as the degree of surprise at encountering the given amino acid/nucleotide/indel at that specific alignment position. For a precise description of the computational details of the Fisher score calculation, see refs (15,19).

Correspondence analysis (CA)
CA is an ordination method originally created for count data in two-way contingency tables and rooted in ecology and community analysis (17). In contrast to other ordination methods built around singular value decomposition (SVD), CA performs its ordination simultaneously on column and row scores. It superimposes the results in one joint plot, thus painting a (usually two-dimensional) picture of the dependencies between data points and their most significant factors.

Pre-processing. For technical reasons, the $n \times m$ data matrix $F = (f_{ij})$ is first made positive by adding a constant to each entry. It is then normalized by dividing the matrix entries by their respective row and column sums, $h_{ij} = f_{ij}/\sqrt{f_{i\cdot} f_{\cdot j}}$, resulting in the normalized data matrix $H$. In matrix notation this may be written as $H = S^{-1/2} X C^{-1/2}$, where $S^{-1/2}$ and $C^{-1/2}$ are diagonal matrices containing the reciprocals of the square roots of the row and column marginal totals.

SVD. SVD is a factorization of a real or complex matrix $A \in M(m \times n; K)$ of the form $A = U \Sigma V^*$, where $U$ is an $m \times n$ unitary matrix over a field $K$, $\Sigma$ is an $n \times n$ positive semidefinite diagonal matrix and $V^*$ denotes the conjugate transpose of $V$, an $n \times n$ unitary matrix over $K$. $\Sigma$ contains the singular values, whereas the columns of $U$ and $V$ are the left- and right-singular vectors for the corresponding singular values (20). A lower dimensional representation of the data is generated by ordering the singular values by size and taking the first $n'$ singular values. The loss of information is described in terms of the proportion of the sum of squares of the singular values, $\sum_{i=1}^{n'} \Sigma_{ii}^2$, that is used (the total inertia). In the CA context, the total inertia is proportional to the value of the $\chi^2$ statistic, and thus to the degree of association in the data (21).

Post-processing. After the SVD, the row ($U$) and column ($V$) scores are usually rescaled via $X = S^{-1/2} U$ and $Y = C^{-1/2} V$ to obtain the optimal or canonical row ($X$) and column ($Y$) scores. Depending on the implementation of the CA algorithm, these are afterwards further scaled by their corresponding singular values (17).

Interpretation. The selected component axes are then plotted in (usually two-dimensional) scatterplots. The Euclidean projection of both site and species points in the new space approximates their $\chi^2$ distances as closely as possible. Proximity of points in the CA biplot therefore corresponds to dependencies between items. Furthermore, points are projected such that the further away a point is from the origin, the higher its contribution to the $\chi^2$ statistic. Positive associations lie on the same side of the plot, whereas negative associations lie on opposite sides. For a more detailed explanation, see ref. (17). Please note that the addition of a positive constant to the original data matrix does not change the proportions of the new coordinate system but implies a rescaling of the result.

Sequence analysis
Neisseria meningitidis factor H binding protein. Sequences for the LP2086 and VKOR studies were aligned using Muscle (22) (Supplementary Data). The distance matrix for the LP2086 alignment was calculated using ProfDist (23,24) applying the VT substitution matrix (25).
The distance matrix was further analyzed and visualized with SplitsTree's split decomposition method (26).

Vitamin K epoxide reductase family. Vertebrate sequences were extracted from ENSEMBL using the human VKORC1 and VKORC1L1 proteins as queries in a blastp search (27). The ENSEMBL identifiers are: [identifier list lost in extraction]. The Ciona savignyi homologue was identified only in genomic sequences; the protein sequence was predicted using GeneWise (28) with the human VKORC1 protein as template. The alignment was calculated using Muscle (22) and manually optimized (Supplementary Data). The phylogenetic tree for the VKOR example was calculated with proml of the PHYLIP package (29) and 100 bootstrap replicates. Ancestral sequences were reconstructed by codeml of the PAML package (30).

RESULTS
The method we propose here is a novel approach to the explorative analysis and visualization of MSAs. Its goal is the detection and depiction of the major signals in alignments, ordered by their importance, the co-clustering of sequences and sites, and the resolution of contradictory signals, i.e. cases where different parts of the alignment vote for different clusterings. The approach comprises three separate steps: (i) the embedding of the sequence data into a real-valued, very high-dimensional vector space, (ii) the simultaneous dimension reduction and ordination of both rows and columns of the data matrix (the alignment) and (iii) a biplot visualization of the canonical row and column scores. The result is a lower dimensional representation of sequences and sites which can be analyzed in two- or three-dimensional scatterplots, comparable to, but not identical with, the result of classical dimension reduction techniques like PCA applied to both sequences and sites. In contrast to traditional dimension reduction methods, the sequences are co-clustered with their defining sites and vice versa: in this representation, the sites responsible for a cluster of sequences come to lie close to those sequences. Embedding (i) is achieved via the Fisher score representation of HMM parameters (14,15,18). We therefore start by training a profile HMM (pHMM) (9,10) on the previously aligned sequences. The Fisher scores are then computed as the vector of derivatives of the log-likelihood of each training sequence with respect to the emission probabilities of the HMM (see 'Materials and Methods' section). The sequence is thus transformed in a meaningful way into a vector of real-valued numbers for the following ordination step. Steps (ii) and (iii) are done via direct application of CA to the derived data matrix of Fisher scores. CA is a method originating from ecology and designed for the analysis of two-way contingency tables (17). It is capable of performing simultaneous ordination on both rows and columns of a data matrix (often referred to as species and sites in ecology, a nomenclature which also fits well in sequence analysis) and has also been shown to be of use for continuous datasets in the context of microarray analysis (31). In principle, it can be thought of as an oriented MDS on $\chi^2$ distance matrices computed from both sides of a data matrix, which is jointly plotted. In the CA, each axis is a weighted linear combination of the Fisher scores of the data vectors, i.e. of the existent (and, due to the way the Fisher scores are generated, also the non-existent) nucleotides/residues in the alignment. CA is a co-clustering of sequences and sites, where conditionally independent signals are projected onto the component axes.
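To make steps (i)-(iii) concrete, the following is a minimal numpy sketch of the pipeline just described; it is illustrative rather than the authors' implementation, the emission-score helper assumes the softmax parametrization of Jaakkola and Haussler, and posterior_emission_counts is a hypothetical stand-in for a forward-backward routine of a fitted pHMM:

import numpy as np

def fisher_scores(seqs, emissions, posterior_emission_counts):
    """One fixed-length Fisher score vector per sequence (emission part).

    Under the softmax parametrization, the score for state j and symbol s is
    xi[j, s] - e[j, s] * sum_s' xi[j, s'], with xi the posterior expected
    emission counts obtained from forward-backward.
    """
    rows = []
    for seq in seqs:
        xi = posterior_emission_counts(seq)        # (n_states, n_symbols)
        rows.append((xi - emissions * xi.sum(axis=1, keepdims=True)).ravel())
    return np.vstack(rows)                          # n_sequences x m matrix

def correspondence_analysis(F, n_axes=3):
    """CA of a Fisher score matrix: shift positive, chi^2-normalize, SVD."""
    X = F - F.min() + 1e-9                          # make all entries positive
    r, c = X.sum(axis=1), X.sum(axis=0)             # marginal totals
    H = X / np.sqrt(np.outer(r, c))                 # h_ij = x_ij/sqrt(r_i c_j)
    U, sv, Vt = np.linalg.svd(H, full_matrices=False)  # economy-size SVD
    row = (U / np.sqrt(r)[:, None]) * sv            # canonical sequence scores
    col = (Vt.T / np.sqrt(c)[:, None]) * sv         # canonical site scores
    inertia = sv**2 / (sv**2).sum()                 # chi^2 share per axis
    # axis 0 is the trivial (constant) axis; informative axes start at 1
    return row[:, 1:n_axes+1], col[:, 1:n_axes+1], inertia[1:n_axes+1]

The sequence (row) and site (column) scores returned by the same call are then overlaid in one biplot, as in the figures discussed below.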
Therefore, in a phylogenetic context, one would expect the first component axis to correspond to the branch of a phylogenetic tree which discriminates most between the most different sequence groups; typically this is the longest branch of the tree. This means that the two major phylogenetic sequence groups are expected to lie consistently on one side of the first component axis or the other, respectively. Other long-branched subgroups of the tree are then likely to be found on higher-order component axes. The co-clustered sites are major candidates for the autapomorphies defining the split. In the same manner, alignments can be decomposed even when a well-supported phylogenetic tree cannot be constructed, either due to contradictory signals within the alignment or to different evolutionary histories. In summary, our proposed method yields a complete decomposition of the considered MSA. In particular, it visualizes the information content and the species-site dependencies with respect to a given sequence family, modeled by the underlying pHMM.

Example on an artificial dataset
To illustrate the concept of our proposed method, we created an artificial DNA MSA (Figure 1a) of four sequences. The main split of the associated cluster tree (Figure 1b) distinguishes sequences 1 and 2 from 3 and 4. Given the first split, split II distinguishes between sequences 1 and 2, and split III distinguishes between sequences 3 and 4. Application of our method illustrates how it is able to recover the sequence groups and the nucleotide replacements responsible for the grouping. The procedure decomposed the alignment into a 3D space without loss of information. Figures 1c and d are CA plots of the MSA showing the first three component axes (1 versus 2, and 3 versus itself). In Figure 1c, the first component axis corresponds to the main split of the cluster tree and separates sequences 1 and 2 from sequences 3 and 4, thereby indicating the sites responsible for this split, i.e. G and C versus two Ts at positions 3 and 4. The second component axis explains the (conditional) split between sequences 1 and 2, identified by an A or T at position 10. The last conditional split to be explained is the one separating sequences 3 and 4. This is shown in Figure 1d, the third component axis, which identifies the differences at position 16 (G versus C) as being responsible for the split.

Neisseria meningitidis factor H binding protein
To validate our method on a biological example, we chose the N. meningitidis factor H binding protein (fHBP), also termed lipoprotein 2086 and GNA1870, which has become a prominent target in the development of a novel vaccine against serogroup B meningococci (32)(33)(34). This alignment seemed especially suitable for evaluating our method, as there have been conflicting reports about how many distinct sequence variants can be found within the sequence cluster (32,33). We based our analysis on an extended alignment consisting of 114 (47 distinct) sequences from the Genbank database, including the 64 (21 distinct) sequences used by Fletcher et al. (33). We skipped the initial clustering step proposed by Fletcher and co-workers and worked directly on the complete alignment of 114 sequences, each 263 amino acids long. Embedding and ordination took approximately 30 s on a standard desktop computer. Distance-based phylogenetic analyses carried out by the Fletcher group showed a clear clustering of the sequences into two separate subfamilies, each with several further sub-clusters.
The authors concluded from these findings that the sequence family consists of two major sequence variants (called subfamilies A and B) and recommended representatives of those two variants to be used for vaccine design. On the contrary, Masignani et al. (32) reported at least three major sequence clusters and consequently recommended including representatives of all three clusters in vaccine design against serogroup B meningococcal disease. Application of our method of combined embedding and ordination of the sequence alignment resulted in a 46-dimensional representation of the data matrix comprising 5610 Fisher score columns. The distribution of the cumulative contribution of the axes to the $\chi^2$ statistic showed typical exponential behavior, with seven axes sufficient to explain >50% of the total inertia. Visual inspection of the major contributing axes showed that axes 1-3 were prominent candidates for the detection of the major sequence clusters (Figure 2c); the signal on axis 2 was mainly due to single nucleotide polymorphisms in otherwise highly conserved positions. Investigation of the scatterplot showed clustering of the sequences into four separate groups (Figure 2c). Axis 1 separated the Fletcher subfamily A (left, negative half-plane) from subfamily B (right, positive half-plane) without error. From the co-clustered sites it could be seen that major blocks of sites conserved within the respective groups, ranging from alignment positions 106-261, were mainly responsible for the observed grouping (Figure 2b, right side). To compare our results with classical methods, we computed a matrix of evolutionary distances between all 47 unique sequences, which was then visualized as an evolutionary network using split decomposition (26). The main cluster, as found by Fletcher et al. (33) and by our analysis of component axis 1, was also recovered in the evolutionary network (Figure 2a). These findings strongly suggest that the evolutionary split which led to the development of subfamilies A and B must have happened early in the history of this protein. Remarkably, axis 3 divided both subfamilies A and B into two sub-clusters each (A1, A2 and B1, B2, respectively). When we investigated the most prominent representatives of these groups (i.e. the ones closest to the borders of the plot), the co-clustered sites showed that these groups contain identical sequence elements (positions 37-69), including a three-residue-long lys-asp-asn insertion between alignment positions 67 and 69. This indicates that, if the development of subfamilies A and B preceded the emergence of the second split, clusters 1 and 2 developed within subfamily A (Figure 2a), and parts of the sequence were afterwards transferred to members of subfamily B by means of a horizontal gene transfer (HGT)/recombination event. This uncertainty in the evolutionary hierarchy between the sequences is also reflected in the large rectangles contained in the split decomposition visualization of the distance matrix (Figure 2a). This finding was further supported by a PHI test for recombination carried out on the complete alignment ($P < 1.07 \times 10^{-11}$) (35).

Vitamin K epoxide reductase family
Vitamin K is an essential cofactor for the posttranslational γ-glutamyl carboxylation of the vitamin K-dependent proteins, such as several coagulation factors, bone proteins, cell-growth-regulating proteins and others of unknown function (36,37).
During the carboxylation, vitamin K hydroquinone is converted into vitamin K 2,3-epoxide (38). The recycling of vitamin K epoxide back to the hydroquinone form is catalyzed by the vitamin K epoxide reductase (VKORC1) in the so-called vitamin K cycle (39). VKORC1 is the key protein in this redox reaction and the molecular target of coumarin derivatives, such as warfarin, which act as vitamin K antagonists (40). They reduce coagulation activity by interfering with the vitamin K epoxide reductase. Worldwide, coumarins are used in the therapy and prevention of thromboembolic events and, in higher doses, for rodent pest control. Mutations in the VKORC1 gene cause one form of combined deficiency of vitamin K-dependent coagulation factors (VKCFD type 2) as well as resistance or hypersensitivity to warfarin (41,42). The human VKORC1 gene is localized on chromosome 16 (43) and consists of three exons encoding a 163-amino-acid endoplasmic reticulum membrane protein with three or four predicted transmembrane α-helices (44). With the identification of the VKORC1 gene in 2004 (45,46), a paralogous gene was discovered, called vitamin K epoxide reductase complex 1-like 1 (VKORC1L1), which is highly conserved across species. Its physiological function is completely unknown. Extensive database searches in a wide variety of metazoan genomes and subsequent phylogenetic reconstruction allowed us to date the duplication event to the base of the vertebrates. To identify candidate positions for functional analyses, we built an MSA including both variants over different vertebrates, namely a group of fish species (Danio, Tetraodon, Fugu and Oryzias), a group of mammals (Macaca, Pan, human, Pongo, mouse, rat, cow, cat, dog and horse), Monodelphis and Xenopus. Furthermore, the alignment contained the VKOR ortholog of Ciona savignyi, pre-dating the duplication event. As expected, a first phylogenetic tree revealed two groups, VKORC1 and VKORC1L1, and placed the C. savignyi sequence as outgroup. It further clearly separated the fish species from the rest in both groups and correctly clustered the subgroups of mammals, in contrast to the singletons chicken, Monodelphis and Xenopus (Figure 3a). Application of our method to this alignment revealed the following: the first (and most informative) axis separated all species, i.e. all duplicated genes, from the C. savignyi sequence (data not shown). This corresponds to the longest branch and root split in the phylogenetic tree, but as we were more interested in variation between species with both paralogs present, we did not investigate this further. We expected axis 2 to separate either the C1 from the L1 sequences or the fish from the land animals, in analogy to the phylogenetic tree. Axis 3 in general separated C1 from L1, for all but the C1 fish sequences, which came to lie near the origin. Analysis of the subsequent axes of the ordination separated the L1 fish sequences from their main group (axis 5) and showed that the Danio sequence within the C1 fish group was evolutionarily more distant to the other C1 fish (axis 4), as reflected in the phylogenetic tree. The co-clustered sites showed us that positions which are otherwise completely conserved within the C1 or L1 family were different in the C1 fish sequences. For example, alignment positions 73-77 (marked with yellow dots in Figure 3b) showed a typical EHVL motif for the C1 family and a GSIF sequence for the L1 cluster.
The C1 fish sequences, in contrast, had a QYFV motif (QIFT for Danio) instead. The missing information was captured by axis 2, which separated the C1 fish sequences from all others (for a combined scatterplot of axes 2 and 3, see Figure 3b). In addition, different positions in the alignment were identified where the fish C1 sequences harbored the same amino acids as the L1 land-animal group but differed from the rest of the C1 group. A prominent example is the warfarin binding motif, which is found as TYA in the C1 non-fish and L1 fish sequences, but as TYV/TYI/TYL in the C1 fish and L1 non-fish sequences. Reconstruction of ancestral sites revealed that this motif evolved in the C1 group only after the split of fishes from the other vertebrates (Figure 3a). Following this observation, we extracted all positions specific for the L1 group and the L1/C1 fish groups, respectively. To analyze their functional relevance, we mapped these positions onto the transmembrane topology of this protein [Figure 3b, (44)]. Two clusters of these sites reside on the cytoplasmic extensions of the transmembrane helices I and III. Further sites are localized within transmembrane helix II. Here, the positions were placed regularly at every fourth position (alignment positions 111, 115, 119 and 123, Figure 3b). With a standard helix turn taking about 3.5 amino acids, there appears to be a spatially aligned face of this transmembrane helix whose sites are specific for the subgroups. Although highly speculative, these findings might suggest the following model of action for this family of transmembrane proteins. First, a substrate, differing between the C1 and L1 subfamilies, is bound by the cytoplasmic extensions of helices I and III; possibly, a further region in the first, large cytoplasmic loop (positions 73-77 in Figure 3b) assists in substrate recognition. Second, the substrate is channeled into the membrane along one side of transmembrane helix II. Finally, it is presented to the catalytic center built by the CIVC motif residing in helix III (blue dots, Figure 3b).

DISCUSSION
Recent advances in genome sequencing technology have led to a noticeable shift in focus toward methods dealing with contig- or genome-sized sequences, be it for contig assembly or phylogenomics. Nevertheless, accurately reconstructed MSAs at the gene or protein level are still of major importance. Most tools or algorithms introduced in this context are dedicated to a specific task like the reconstruction of phylogenetic trees, transmembrane prediction or conservation profiling. The method we propose here is different in that it is a method for the explorative, unsupervised analysis of MSAs. It decomposes the alignment into its major signals and co-clusters sequences and sites, thereby simultaneously finding sequence groups and the sites responsible for their grouping. The probabilistic model (pHMM) used to describe the alignment is a known and approved method for sequence modeling (10), and by its nature the Fisher score embedding is advantageous compared with other embeddings proposed and applied before (3,13).
These advantages include the possibility to directly encode emission, insertion and deletion events. Further, apart from a pure probabilistic representation of the alignment itself, the HMM fitting process allows the integration of prior knowledge about amino acid distributions. Biologically meaningful priors can be derived, e.g., via Dirichlet mixtures (47,48) or from log-odds-based substitution matrices (49). These incorporate the desired biological signal into the pHMM, giving, e.g., amino acid positions with similar chemical or physical properties more similar probabilities than those obtained from the alignment alone. Fisher scores are known to be 'sufficient statistics' for the underlying HMM parameters, i.e. they contain all available information about the parameters (14). In contrast to a direct embedding via the HMM scores or site probabilities, they do not suffer from the effect that highly divergent, but from the HMM's perspective equally probable, sequences receive the same representation; such unrelated sequences would otherwise be projected close to each other during the ordination step. Additionally, Fisher scores are a fixed-length representation of the original sequences, thus preventing length-driven biases in the analyses. Computational problems due to the high dimensionality of the Fisher score representation itself can be circumvented by application of the economy-sized SVD variant. The computational complexity of the Fisher score calculation is similar to that of the forward-backward algorithm [$O(N^2 T)$ for $N$ states and sequence length $T$]. Even though our proposed method of ordination (CA) was originally designed for two-way contingency tables (17), it has been shown earlier that the method is very suitable for the analysis of continuous datasets in which dependencies between the rows and columns of a data matrix are of interest (31). We compared our method to a standard approach of ordination with a Euclidean metric (e.g. PCA). Representatives are, for example, the SeqSpace and Jalview programs (3,13,50), although these tools additionally suffer from the inexpressiveness of the binary embedding employed. For a fair comparison, we compared our CA decomposition to classical PCA on the same dataset, in both cases embedded via Fisher scores, and found CA to be more sensitive toward biological signals. For example, PCA analysis of the LP2086 dataset moved the sequences ACB38144.1 and ACI31835.1 close to the blue HGT candidate group (Figure 2c) even though they do not share the 30-amino-acid region characteristic of sequences of that cluster (Figure 2b). In the original CA ordination, they clearly separate from the other sequences of their cluster on the x-axis (the two points on the far right side of Figure 2c), but show no grouping with the HGT candidates. Similar effects were found in other regions of the sequence and in the VKOR example (data not shown). It seems that CA profits from the application of the $\chi^2$ distance in that it focuses on sequence-site associations rather than simple one-way Euclidean ordination. We finally also loaded our datasets directly into Jalview but, as the software lacks the ordination of sites in the alignment, no functional annotation of sequence clusters could be made. The SeqSpace software, which is supposed to also cluster the sites, was no longer available at the time of this writing. The advantages of detecting associations in terms of the $\chi^2$ distance become apparent in the fHBP example: neither sequence-based nor site-based methods are on their own able to detect any recombination event.
Phylogenetic algorithms average over the length of the alignment, rightfully discarding the subtle 30-amino-acid transfer region at the beginning of the alignment. The HGT never shows up in the tree; it can be suspected from the evolutionary network, but due to the short length and the low number of representatives carrying the motif, the signal is only weakly reflected in the distance matrix and therefore in the split decomposition. Conservation profiles like sequence logos, or clustering procedures on sites, would not reveal the HGT either; it can only be identified by the detection of incompatible sites (35), i.e. sites for which contradicting sequence clusters can be built. Our method was able to resolve the recombined group and identify the responsible sites. It allows for an explorative analysis of the MSA without focusing on any specific type of signal, e.g. phylogenetic signals or HGT alone. It is important to note that this is by no means a test for recombination, nor a method to exhaustively find all possible sites of HGT within an alignment, but it can provide an unbiased and structured view of an MSA from different perspectives. Studying the VKOR protein family again showed how major phylogenetic signals appear on the first axes of the ordination, like the separation of the C. savignyi outgroup. But it is also a good indication of how interesting features of the alignment are completely missed by sequence-based methods, like the phylogenetic tree, or by site-based methods, like the depicted sequence logo, alone. The co-clustering of species and sites, i.e. the identification of associations between the two, brings insight into the dependencies and, possibly, the functional relations between sequences in the alignment, thereby annotating them with the relevant sequence features. It showed us, for example, that in contrast to the L1 fish sequences, the C1 fish sequences do not share the typical C1-L1 site differences of the other groups, and it identified the positions where those sequences differed. Recovering this tiny signal, covered by the large phylogenetic trend, would not be possible with methods considering complete sequences, as in the calculation of phylogenetic trees. From these findings we are convinced that the method proposed here provides researchers with a new and unique way to analyze MSAs. Our method provides a structured decomposition of an alignment and a depiction of its information content with increasing granularity. The modularity of the approach allows a variety of statistical methods applicable to high-dimensional datasets to be used. Its explorative nature can give rise to hypotheses which might then be validated by, for example, statistical tests. On the modeling side, future work might extend the algorithm to include combined sequence-structure alignments suitable for the analysis of RNA sequences. In general, all types of sequential data (DNA, RNA and protein sequences) are in principle suitable for such an analysis, provided they can be modeled in a probabilistic fashion via, for example, an HMM from which Fisher scores can be derived.
2014-10-01T00:00:00.000Z
2009-08-06T00:00:00.000
{ "year": 2009, "sha1": "2c35a6e6aec266316ac091d60513c12ecd50ba3f", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/37/18/5959/16754755/gkp634.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a220c6ba9419460f220045a15033704d574db93b", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
3853408
pes2o/s2orc
v3-fos-license
Iron limitation promotes the atrophy of skeletal myocytes, whereas iron supplementation prevents this process in hypoxic conditions
There is clinical evidence that patients with heart failure and concomitant iron deficiency have increased skeletal muscle fatigability and impaired exercise tolerance. It was expected that a skeletal muscle cell line subjected to different degrees of iron availability and/or concomitant hypoxia would demonstrate changes in cell morphology and in the expression of atrophy markers. L6G8C5 rat skeletal myocytes were cultured in normoxia or hypoxia at optimal, reduced or increased iron concentrations. Experiments were performed to evaluate the iron content in cells, cell morphology, and the expression of muscle-specific atrophy markers [Atrogin-1 and muscle-specific RING-finger 1 (MuRF1)], a gene associated with the atrophy/hypertrophy balance [mothers against decapentaplegic homolog 4 (SMAD4)] and a muscle class-III intermediate filament protein (Desmin) at the mRNA and protein levels. Hypoxic treatment caused, as compared to normoxic conditions, an increase in the expression of Atrogin-1 (P<0.001). Iron-deficient cells exhibited morphological abnormalities and demonstrated a significant increase in the expression of Atrogin-1 (P<0.05) and MuRF1 (P<0.05), both in normoxia and hypoxia, which indicated activation of the ubiquitin proteasome pathway associated with protein degradation during muscle atrophy. Iron depletion in cell culture combined with hypoxia also induced a decrease in SMAD4 expression (P<0.001), suggesting modifications leading to atrophy. In contrast, cells cultured in a medium enriched with iron during hypoxia exhibited inverse changes in the expression of atrophy markers (both P<0.05). Desmin was upregulated in cells subjected to both iron depletion and iron excess in normoxia and hypoxia (all P<0.05), but the greatest augmentation of mRNA expression occurred when iron depletion was combined with hypoxia. Notably, in hypoxia, increased expression of Atrogin-1 and MuRF1 was associated with increased expression of transferrin receptor 1, reflecting intracellular iron demand (R=0.76, P<0.01; R=0.86, P<0.01). Hypoxia and iron deficiency, when combined, exhibited the most detrimental impact on skeletal myocytes, especially in the context of muscle atrophy markers. Conversely, iron supplementation in in vitro conditions acted in a protective manner on these cells.

Introduction
Muscle atrophy reflects a systemic response to various chronic conditions, including heart failure (HF) (1,2). Disease-associated decreases in the size of muscle tissue can occur as a result of various pathologies, and one such reported causative factor is hypoxia (3)(4)(5). Exercise intolerance and skeletal muscle dysfunction are among the fundamental features of HF pathophysiology, and are associated with limited everyday function and poor patient outcomes (6)(7)(8). The mass and volume of skeletal muscle in different regions of the body are decreased in patients with HF (9)(10)(11)(12), and skeletal muscles are more prone to exertion fatigue (13,14). The skeletal muscle wasting known as cardiac cachexia (15) may also be observed in histopathological evaluations as fibre atrophy (16,17), and at the molecular level as activation of proteolysis by the ubiquitin-proteasome system (18).
Iron deficiency is one of the potential pathomechanisms that contribute to muscle dysfunction in HF (19)(20)(21). The beneficial effects of iron supplementation on the improvement of muscle function in patients with HF have already been reported (22,23); however, the associated mechanisms remain to be elucidated. It may be hypothesized that the disturbed iron metabolism observed in HF affects skeletal muscles, which leads to their dysfunction and exercise intolerance. The present authors have recently demonstrated that, during hypoxia, reduced iron concentration had a greater negative impact on the viability and functioning of myocytes, compared with augmented iron availability (24). As the hypoxic conditions that occur in the course of HF are difficult to introduce and manipulate in humans and animals, a model of skeletal myocyte cultures was established in the present study. The aim of the present study was to investigate the influence of iron availability and hypoxia in the context of muscle atrophy markers and cell morphology in a rat skeletal myocyte cell line (L6G8C5). The following parameters were measured in the L6 cell line using differing states of iron availability in the culture medium: Atrogin-1 and muscle-specific RING-finger 1 (MuRF-1), which are muscle-specific ubiquitin E3 ligases that are part of the ubiquitin proteasome pathway associated with protein degradation during muscle atrophy, and are markedly induced in almost all types of atrophy (25)(26)(27); mothers against decapentaplegic homolog 4 (SMAD4), which is a transcription factor that serves a central role in the balance between atrophy and hypertrophy (28)(29)(30); and Desmin, which is a structural protein that builds the class-III intermediate filaments found in muscle cells. The B cell lymphoma-2 (Bcl-2)-associated protein X (Bax)/Bcl-2 gene expression ratio (31) and the expression of transferrin receptor 1 (TfR1) (32) were evaluated as previously described (24), to assess apoptotic activity and iron influx, respectively. Materials and methods Experimental schedule. Rat L6G8C5 skeletal myocytes (L6; Sigma-Aldrich; Merck KGaA, Darmstadt, Germany) were cultured in normoxia (18% O2, 5% CO2) or hypoxia (1% O2, 5% CO2, 94% N2) for 48 h (24), supplemented with 100 µM deferoxamine (DFO; Sigma-Aldrich; Merck KGaA) or 200 µM ammonium ferric citrate (AFC; Sigma-Aldrich; Merck KGaA) in order to change iron accessibility (Fig. 1). Controls were grown in normal cell culture conditions in normoxia for 48 h. DFO is a selective iron chelator that is typically used in cell culture studies. It has been reported that the addition of DFO to culture medium reduces the iron concentration both in the cellular environment and inside the cell, as DFO may be taken up via fluid-phase endocytosis (33). AFC has previously been applied in cell culture studies in order to induce intracellular iron uptake (34,35). Compounds were added to cells from 1,000X concentrated stocks diluted in culture medium. Hypoxia was generated in a standard cell culture incubator by displacing O2 with an infusion of N2, which was supplied by an external high-pressure liquid nitrogen tank.
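The 1,000X dosing arithmetic can be made explicit with a short sketch; the function below and its example volumes are illustrative only and are not part of the published protocol.

```python
def stock_volume_ul(medium_ml, stock_factor=1000):
    """Volume of concentrated stock (µl) needed for a given medium volume (ml).

    A 1,000X stock is, by definition, 1,000-fold the final working
    concentration, so 1 µl of stock is required per 1 ml of medium.
    """
    return medium_ml * 1000.0 / stock_factor

# e.g. dosing 2 ml of medium from a 1,000X DFO stock (100 mM -> 100 µM final)
print(stock_volume_ul(2.0))  # 2.0 µl
```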
Iron content in cells. The measurement of intracellular iron content was performed via a flame atomic absorption spectroscopy assay (36). Briefly, 2x10^6 cells, cultured according to the aforementioned protocol, were dissociated by trypsinisation, pelleted by centrifugation (5 min; room temperature; 500 x g), and washed five times with PBS in order to remove free iron ions from the medium. The pellet was dissolved in 250 µl radioimmunoprecipitation assay (RIPA) buffer (Sigma-Aldrich; Merck KGaA) for 30 min on ice and sonicated on ice (20 kHz; 10 sec) to disaggregate cellular structures. The protein concentration in the cell lysate was determined using the Lowry method (37). Intracellular iron content was measured in 250 µl of medium containing 1 mg protein lysate using an atomic absorption spectrometer (SOLAAR M6; Thermo Elemental, Ltd., Cambridge, UK) equipped with a deuterium lamp for background radiation correction, by direct calibration against aqueous standards. An air-acetylene flame was used (gas flow, 0.9 l/min). The calibration solutions were prepared by successive dilutions of the stock standard solution (iron standard for AAS TraceCERT; 1,000 ppm; Sigma-Aldrich; Merck KGaA). The final iron concentration in the successive dilutions of the stock solution varied from 0.5-2.0 ppm. All solutions were diluted 1:5 with deionized water. Measurements were made under the following conditions: Burner, 100 mm; wavelength, λ=253.7 nm; background correction (Quadline bandpass, 0.2 nm); lamp current, 75%. The method was verified by determination of reference material [Seronorm™ Trace Elements Serum Level I: Fe=1.39 mg/l; Level II: Fe=1.91 mg/l (SERO AS, Billingstad, Norway)]. Immunofluorescence and imaging. For immunofluorescence, cells were grown to 70-90% confluency on sterile coverslips placed in 6-well plates (Sarstedt AG). Cells were fixed in 4% formalin in PBS for 20 min at room temperature, then unmasked in Target Retrieval Solution (pH 9; Dako; Agilent Technologies, Inc., Santa Clara, CA, USA) at 95˚C for 10 min, permeabilized with 0.1% Triton X-100 in PBS (3x7 min), washed with PBS and blocked at room temperature for 1 h with PBS containing 10% FBS. Double-immunofluorescence staining was applied: cells were incubated overnight at 4˚C with primary antibodies (Table I), washed with PBS and subsequently incubated for 2 h at room temperature with fluorescein isothiocyanate- or rhodamine-conjugated secondary antibodies (Table I) and DAPI (2 µg/ml; Santa Cruz Biotechnology, Inc., Dallas, TX, USA) as a nuclear marker. Control reactions were performed without primary antibodies. Labelled cells were mounted in ProLong Gold Antifade Mountant (Thermo Fisher Scientific, Inc.). Cells were viewed using a Nikon Eclipse 80i fluorescence microscope (Nikon Corp., Tokyo, Japan) with a 40x objective lens. Representative images were chosen from each sample and processed with ImageJ 1.51h (National Institutes of Health, Bethesda, MD, USA). Data were obtained from three separate experiments.
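The direct-calibration step lends itself to a short numerical sketch. The absorbance readings below are hypothetical (only the 0.5-2.0 ppm standard range and the 1:5 dilution come from the protocol); a least-squares line is fitted to the standards and used to convert a sample reading into an iron concentration.

```python
import numpy as np

# Hypothetical calibration data: absorbance of aqueous iron standards
# prepared by successive dilution of the 1,000 ppm stock (see text).
std_conc = np.array([0.5, 1.0, 1.5, 2.0])          # ppm (mg/l)
std_abs = np.array([0.042, 0.081, 0.119, 0.160])   # absorbance units (illustrative)

# Least-squares calibration line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def iron_ppm(absorbance, dilution_factor=5):
    """Convert a sample absorbance to iron concentration (ppm),
    correcting for the 1:5 dilution applied to all solutions."""
    return (absorbance - intercept) / slope * dilution_factor

# Example: a cell-lysate reading of 0.075 absorbance units
print(round(iron_ppm(0.075), 2))  # ppm in the undiluted lysate
```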
Statistical analysis. Data are presented as the mean ± standard deviation unless otherwise indicated. Kruskal-Wallis analysis followed by a post-hoc Dunn's multiple comparison test was used to compare the groups. All experiments were performed in triplicate. Spearman's rank correlation coefficient was calculated between the expression of the Atrogin-1, MuRF-1, SMAD4 and Desmin genes in the three states of iron concentration, or between the expression of the aforementioned genes and Bax/Bcl-2 or TfR1 under those same conditions. P<0.05 was considered to indicate a statistically significant difference.
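As a minimal sketch of the comparisons just described, the snippet below runs a Kruskal-Wallis test across the three iron conditions and a Spearman rank correlation between two expression vectors using SciPy; all expression values are invented for illustration, and Dunn's post-hoc test would follow a significant omnibus result (e.g., via the scikit-posthocs package).

```python
from scipy.stats import kruskal, spearmanr

# Invented relative mRNA expression values (arbitrary units),
# one list per iron condition, three replicates each.
dfo = [2.1, 2.4, 2.2]      # reduced iron (DFO)
control = [1.0, 1.1, 0.9]  # optimal iron
afc = [0.7, 0.8, 0.6]      # increased iron (AFC)

# Kruskal-Wallis H-test across the three groups
h_stat, p_value = kruskal(dfo, control, afc)
print(f"Kruskal-Wallis: H={h_stat:.2f}, P={p_value:.3f}")
# A significant result would be followed by Dunn's multiple comparison
# test, e.g. scikit_posthocs.posthoc_dunn(...) on the pooled data.

# Spearman's rank correlation between two expression vectors,
# e.g. Atrogin-1 vs. TfR1 across the same nine samples.
atrogin1 = [2.1, 2.4, 2.2, 1.0, 1.1, 0.9, 0.7, 0.8, 0.6]
tfr1 = [1.9, 2.2, 2.0, 1.1, 1.0, 1.0, 0.8, 0.9, 0.7]
rho, p = spearmanr(atrogin1, tfr1)
print(f"Spearman: R={rho:.2f}, P={p:.3f}")
```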
Results Changes in intracellular iron due to the addition of DFO or AFC to culture media. The mean intracellular content of iron in L6 cells in the standard DMEM medium was 0.91 mg/l. The direct measurement of intracellular iron demonstrated that the addition of 100 µM DFO to the medium caused a decrease of the intracellular iron concentration to 0.41 mg/l. In contrast, supplementation of the medium with 200 µM AFC increased the mean intracellular iron concentration to 5.12 mg/l (data not shown). Effects of differing iron availability during normoxia or hypoxia on the morphology of skeletal myocytes. The exposure of skeletal myocytes to hypoxia alone did not markedly affect the morphology of the studied cell line. Notably, morphological abnormalities within skeletal myocytes, such as cell shrinkage and pyknosis, occurred upon decreased iron availability in both normoxic and hypoxic conditions. In turn, AFC treatment at both optimal and reduced oxygen concentrations did not lead to any marked alterations in cellular morphology (Fig. 2). Effects of differing iron availability on the atrophy markers Atrogin-1 and MuRF1 in skeletal myocytes. The effects of iron availability on levels of Atrogin-1 and MuRF1 were assessed (Fig. 3). Hypoxic treatment of L6 skeletal myocytes caused, as compared with the untreated cells, a significant increase in the mRNA expression of Atrogin-1 (P<0.001; Fig. 3A). L6 cells, when exposed to an iron-deficient environment, demonstrated, as compared with the cells cultured in optimal iron concentration, significantly increased mRNA expression of Atrogin-1 in both normoxia (P<0.01) and hypoxia (P<0.05; Fig. 3A), indicating enhanced protein degradation in the cells. Notably, the increase in Atrogin-1 mRNA expression was greater during hypoxia with concomitant reduced iron availability than in hypoxia alone (P<0.05). In turn, AFC treatment of myocytes did not significantly affect the mRNA expression of Atrogin-1 under optimal oxygen levels, whereas during hypoxia the expression of the aforementioned marker was significantly decreased, as compared with cells cultured in optimal (P<0.05) and reduced iron concentrations (P<0.001; Fig. 3A). Western blot analysis and immunocytochemical staining revealed similar patterns of changes at the protein level (Fig. 3C and D). Low oxygen concentration did not significantly affect the mRNA expression of MuRF1 (Fig. 3B), as compared with untreated cells. Exposure to DFO induced a significant increase in the mRNA expression of MuRF1 in normoxia (P<0.05) and hypoxia (P<0.05), as compared with cells cultured in optimal iron concentration. Notably, skeletal myocytes demonstrated a greater increase in the mRNA expression of MuRF1 when cultured in low iron availability during hypoxia than in normoxic conditions. In turn, increased iron availability induced a significant decrease in the mRNA expression of MuRF1 during normoxia (P<0.05) and hypoxia (P<0.001), compared with optimal iron levels. Western blot analysis and immunocytochemical staining revealed similar patterns of changes at the protein level (Fig. 3C and E). Notably, the mRNA expression of MuRF1 during hypoxia was significantly associated with the mRNA expression of Atrogin-1 (R=0.98, P<0.001) regardless of iron status. Furthermore, in hypoxia an increased expression of Atrogin-1 and MuRF1 was associated with an increased expression of TfR1 (24), reflecting intracellular iron demand (R=0.91, P<0.01; R=0.86, P<0.01; data not shown). The aforementioned associations were not observed under normoxic conditions. Effects of differing iron availability on the expression of SMAD4 in skeletal myocytes. The exposure of skeletal myocytes to hypoxia alone did not significantly affect the mRNA expression of SMAD4. In turn, reduction of iron availability resulted, as compared with the cells cultured in optimal iron concentrations, in significantly decreased mRNA expression of SMAD4 under hypoxia (P<0.001), suggesting a shift towards atrophy in the atrophy-hypertrophy balance; whereas during normoxia, the expression of this transcription factor was not significantly changed (Fig. 4A). When exposed to iron supplementation with AFC in normoxic conditions, L6 cells exhibited a significant increase in the mRNA expression of SMAD4 compared with cells cultured in optimal iron concentration (P<0.001). Western blot analysis and immunocytochemical staining confirmed that hypoxia treatment did not alter the protein expression of SMAD4 (Fig. 4B and C). In turn, the addition of DFO or AFC to the culture medium caused a marked decrease or increase, respectively, in the protein level of SMAD4 during hypoxia and normoxia, compared with the cells cultured in optimal iron concentrations (Fig. 4B and C). Notably, the mRNA expression of SMAD4 during hypoxia with concomitant different iron status was significantly associated with the mRNA expression of TfR1 (24) (R=-0.94, P<0.01) and with apoptotic activity measured as the Bax/Bcl-2 (24) gene expression ratio (R=-0.79, P<0.05) (data not shown). The aforementioned associations were not present under normoxic conditions. Effects of different iron availability on the expression of Desmin in skeletal myocytes. Reduced iron availability resulted in significantly increased mRNA expression of Desmin during normoxia and hypoxia, compared with the cells cultured in optimal iron concentrations (both P<0.001). Notably, the increase in Desmin mRNA expression was markedly greater in low iron conditions during hypoxia than in reduced iron levels in normoxic conditions. Similarly, when exposed to AFC treatment, L6 skeletal myocytes exhibited a significant increase in the mRNA expression of Desmin during normoxia and hypoxia, compared with the cells cultured in optimal iron concentrations (both P<0.05). The increase in the mRNA expression of Desmin was greater during hypoxia upon low iron treatment than in AFC-treated cells in normoxia.
Western blot analysis (Fig. 5B) and immunocytochemical staining (Fig. 5C) revealed a similar pattern of changes at the protein level. Notably, the expression of Desmin was significantly associated with increased mRNA expression of TfR1 (24) during both normoxia and hypoxia, regardless of iron level (R=0.71, P<0.05; R=0.79, P<0.05). Furthermore, the expression of Desmin during hypoxia with concomitant different iron status was significantly associated with apoptotic activity measured as the Bax/Bcl-2 (24) gene expression ratio (R=0.93, P<0.001) and with the decreased mRNA expression of SMAD4 (R=-0.80, P<0.01) (data not shown). Discussion The present study provides an insight into the response of muscle-specific atrophy markers to increased or reduced iron availability in skeletal myocytes cultured under normoxic or hypoxic conditions. In particular, it was demonstrated that iron depletion, when combined with hypoxia, induced abnormal cell morphology and an upregulation of key enzymatic components of the intracellular regulatory system of muscle atrophy, namely the muscle-specific ubiquitin E3 ligases Atrogin-1 and MuRF1. Notably, it was demonstrated that augmented iron availability in hypoxic conditions acted in a protective manner in the context of these atrophy markers. Skeletal muscle wasting has been insufficiently investigated in the context of iron metabolism. To date, there have been few studies linking iron overload to muscle atrophy. For example, Ikeda et al (40) recently demonstrated that excess iron caused a decrease in mean size and muscle fibre area, as well as an induction of Atrogin-1 and MuRF1 expression, in mice subjected to 7- or 14-day iron injection treatment. However, it should be emphasized that the animals that underwent the aforementioned experiment were healthy and, apart from iron load, no other factors mimicking any pathology were investigated in this study. Similarly, Reardon and Allen (41) previously demonstrated iron-induced atrophy in murine skeletal muscle, but the process occurred only in soleus muscles and was not detected within the rest of the investigated muscles. There is also a scarcity of data on the influence of both iron excess and iron deficiency on atrophy markers in skeletal myocytes when exposed to hypoxia. The present authors previously studied the influence of increased or reduced iron availability in hypoxic conditions on skeletal myocytes and demonstrated that, during hypoxia, the reduced iron concentration had a more negative impact on the viability and apoptotic activity of the studied cells as compared with elevated iron availability (24,42). The present authors' preliminary results also demonstrated that, in skeletal myocytes, the mRNA expression of the muscle-specific atrophy marker Atrogin-1 was increased upon reduced iron availability, and was downregulated in increased iron concentrations (24).
In the present study, it was demonstrated that skeletal myocytes exposed to an iron-deficient environment exhibited an upregulation of Atrogin-1 and MuRF1 at the mRNA and protein levels in hypoxia and normoxia, which suggested that catabolic activation and the resulting protein degradation had occurred in the cells. The most severe impact in the context of muscle atrophy markers was observed in the combined conditions of hypoxia and iron deficiency. Notably, AFC-treated cells demonstrated an opposing trend, which suggested a protective influence of iron supplementation. This may provide a molecular substantiation of the efficacy of iron therapy for the improvement of muscle functional capacity in iron-deficient patients with HF (22,23). Notably, the observed upregulation of both Atrogin-1 and MuRF1 under the low iron availability introduced in hypoxia was associated with increased expression of TfR1, reflecting the association between an increased intracellular iron demand and the catabolic activation of skeletal myocytes. As the morphology of cells cultured in an iron-deficient environment was altered both in normoxia and hypoxia as compared with optimal or augmented iron availability, the expression of Desmin, a muscle-specific cytoskeletal intermediate filament, was investigated (43,44). To the best of our knowledge, Desmin has not previously been analysed in skeletal muscle in the context of iron availability. In the present study, it was demonstrated that reduced and increased iron concentrations led to the upregulation of Desmin, whereas hypoxia strengthened this effect. The aforementioned alterations may be considered a maladaptive mechanism resulting in abnormal Desmin accumulation, which has recently been correlated with altered myofiber morphology and mitochondrial dysfunction (45). In another previous study, an increased expression of Desmin in aging rat muscles was potentially linked to altered contractile force (46). In the present study, it was demonstrated that both iron overload and iron depletion were associated with Desmin accumulation, which is analogous to the association previously reported by Walter et al (47), who demonstrated the detrimental effect of both iron overload and iron deficiency on the function of liver mitochondria. However, it is notable that the greatest augmentation of Desmin expression was observed in DFO-treated cells cultured in hypoxia, which suggests that these combined conditions have the most negative impact on the cell structure. The influence of differing iron availability on skeletal myocytes cultured in hypoxic conditions has thus far also been poorly investigated in the context of the atrophy-hypertrophy balance; therefore, the expression of SMAD4, which is associated with the equilibrium between novel protein accumulation and the degradation of existing proteins (28,29), was investigated. Notably, Sartori et al (30) previously revealed the occurrence of atrophy changes in the skeletal muscle of SMAD4-deficient mice. In the present study, it was discovered that, under hypoxia, reduced iron availability induced a decrease in SMAD4 expression. This finding indicated that environmental iron limitation, along with reduced oxygen, may contribute to muscle atrophy. It was also demonstrated that, in hypoxic conditions, increased iron concentration may be protective, as it induced an increase in SMAD4 expression and, therefore, a potential shift towards hypertrophy.
Together, the present data suggested that the combined conditions of hypoxia and iron deficiency are the most detrimental for skeletal myocytes in the context of morphological alterations and the expression of atrophy markers. Conversely, it appears that elevated iron availability in hypoxic conditions may be beneficial, to a certain extent, for skeletal myocytes, preventing their catabolic activation. Although it is necessary to verify these results in more advanced experimental models, they may still provide a valuable starting point for understanding the efficacy of iron therapy for the improvement of muscle functional capacity and exercise tolerance observed in patients with HF and concomitant iron deficiency. Furthermore, it may be interesting to further investigate the level of atrophy markers in human skeletal muscle tissue samples obtained from iron-deficient patients with HF prior to and following intravenous iron supplementation. Figure 2. Bright-field inverted microscope images of L6G8C5 cells in different iron availability conditions in normoxia and hypoxia. Scale bar length, 200 µm. DFO, reduced iron concentration via deferoxamine; C, control; AFC, increased iron concentration via ammonium ferric citrate. Figure 3. Expression of atrophy markers in L6G8C5 cells with concomitant optimal, reduced or increased iron availability in normoxia and hypoxia. mRNA expression levels of (A) Atrogin-1 and (B) MuRF1 in L6G8C5 cells. (C) Western blot analysis of the respective proteins in the cell lysates. Representative images of immunocytochemical staining of (D) Atrogin-1 and (E) MuRF1 in L6G8C5 cell lines (with DAPI as a nuclear marker). Scale bar length, 25 µm. Data are presented as the mean + standard deviation obtained from three separate experiments. *P<0.05; **P<0.01; ***P<0.001. AU, arbitrary units; MuRF1, muscle-specific RING-finger 1; DFO, reduced iron concentration via deferoxamine; C, control; AFC, increased iron concentration via ammonium ferric citrate. Figure 4. Expression of SMAD4 in L6G8C5 cells with concomitant optimal, reduced or increased iron availability in normoxia and hypoxia. (A) mRNA expression levels of SMAD4 in L6G8C5 cells. (B) Western blot analysis of the respective proteins in the cell lysates. (C) Representative images of immunocytochemical staining of SMAD4 with DAPI as a nuclear marker in L6G8C5 cell lines. Scale bar length, 25 µm. Data are presented as the mean + standard deviation obtained from three separate experiments. **P<0.01; ***P<0.001. AU, arbitrary units; DFO, reduced iron concentration via deferoxamine; C, control; AFC, increased iron concentration via ammonium ferric citrate; SMAD4, mothers against decapentaplegic homolog 4. Figure 5. Expression of Desmin in L6 cells with concomitant optimal, reduced or increased iron availability in normoxia and hypoxia. (A) mRNA expression levels of Desmin in L6G8C5 cells. (B) Western blot analysis of the respective proteins in the cell lysates. (C) Representative images of immunocytochemical staining of Desmin with DAPI as a nuclear marker in L6G8C5 cell lines. Scale bar length, 25 µm. *P<0.05; ***P<0.001. AU, arbitrary units; DFO, reduced iron concentration via deferoxamine; C, control; AFC, increased iron concentration via ammonium ferric citrate. Table I. Antibodies and dilutions used for WB and IF.
Metrics of sleep apnea severity: beyond the apnea-hypopnea index Obstructive sleep apnea (OSA) is thought to affect almost 1 billion people worldwide. OSA has well-established cardiovascular and neurocognitive sequelae, although the optimal metric to assess its severity and/or potential response to therapy remains unclear. The apnea-hypopnea index (AHI) is well established; thus, we review its history and predictive value in various different clinical contexts. Although the AHI is often criticized for its limitations, it remains the best studied metric of OSA severity, albeit imperfect. We further review the potential value of alternative metrics including hypoxic burden, arousal intensity, odds ratio product, and cardiopulmonary coupling. We conclude with possible future directions to capture clinically meaningful OSA endophenotypes including the use of genetics, blood biomarkers, machine/deep learning and wearable technologies. Further research in OSA should be directed towards providing diagnostic and prognostic information to make the OSA diagnosis more accessible and to improving prognostic information regarding OSA consequences, in order to guide patient care and to help in the design of future clinical trials. Introduction Obstructive sleep apnea (OSA) is a common chronic condition that is associated with neurocognitive impairment, hypertension, and incident cardiovascular and cerebrovascular disease [1]. Using conventional polysomnographic measures and thresholds for abnormality, OSA has been estimated to affect up to 1 billion people worldwide [2]. While most of these cases are undiagnosed, it is likely that most are asymptomatic or minimally symptomatic, and it remains uncertain whether these individuals have a disorder that requires therapeutic intervention [3]. OSA has historically been defined and quantified primarily by the frequency of apneas and hypopneas during sleep (apnea-hypopnea index, AHI), although the use of this metric has been challenged on both methodologic and pathophysiologic grounds [4][5][6]. Development of this Research Statement was motivated by a growing recognition of the limitations of the AHI to predict adverse effects of OSA and to predict responsiveness to treatment. Given the myriad of OSA-associated conditions across multiple biological systems, one might expect the optimal metric of OSA severity to differ depending on the outcome of interest [7,8]. In this Research Statement, we discuss the development of current definitions of apneas and hypopneas, the strengths and weaknesses of the AHI as a measure of OSA severity, and consider the possible added value of new or emerging metrics that can be derived from the PSG. This document is not intended to be used as a practice standard, nor is it intended to provide an exhaustive review of the available literature, but rather to provide historical context and recommendations for research needed to optimize OSA diagnostic and severity metrics. Important topics such as central sleep apnea and hypoventilation are outside of the scope of this document. Defining the metric A metric is defined as a system for measuring something or a standard of measurement. In medicine, metrics help us differentiate disease states from normal, as well as categorize the severity of illnesses. The use of metrics in health care has become increasingly important, not only to define disease states, but also to measure outcomes and then to improve care continually.
As stated by Blumenthal and McGinnis: "If something cannot be measured... it cannot be improved" [9]. The metrics that have been traditionally used to define sleep-disordered breathing syndromes derive from the number of breathing events that occur during sleep. In the 1970s, healthy subjects were studied as part of a control group and sleep-disordered breathing event rates were calculated. In these healthy controls, a breathing event rate of <5 apneas per hour was determined to be the threshold and hence became the standard for distinguishing "disease" from "no disease" [10]. Of note, the apneas with this early definition were not differentiated by type of apnea (obstructive, mixed, central). Lugaresi et al. suggested that more than 30 apneas during the night was the best discriminator of normal vs. abnormal [11,12]. Block et al., and later Gould et al., introduced the concept of hypopneas to identify episodes of reduced breathing that were felt to be physiologically important due to an associated drop in oxygen saturation or arousal [13][14][15]. Inconsistency in the definition of sleep-disordered breathing events, especially hypopneas, has been a widely recognized concern. Ultimately, the threshold to define disease vs. no disease may well evolve based on interventional studies, as has occurred with cholesterol, blood pressure and other cardiovascular risk factors. History of polysomnography Polysomnography (PSG) evolved from electroencephalography (EEG), first described by Berger in 1929 and applied to the study of sleep by Loomis in 1937 [16,17]. In 1957, Dement and Kleitman described sleep cycles and proposed a sleep classification schema [18,19]. Breathing sensors were first described in the 1960s, and Gastaut described obstructive breathing events in which intermittent episodes of upper airway obstruction were noted with continued respiratory effort in patients with the Pickwickian Syndrome [20]. Cardiac signals were later added, and the term "polysomnography" was used to describe the measurement of sleep utilizing a variety of body sensors. In the early years of PSG, thermal sensors (thermistors and thermocouples) were used to measure airflow in a semiquantitative way. These could be placed over both the nose and mouth to detect the temperature changes in breathing as a surrogate for airflow. In 1997, Norman et al. reported using a standard nasal cannula connected to a pressure transducer to detect airflow [21]. This resulted in improved sensitivity to changes in airflow and allowed the demonstration of "flow limitation" characterized by flattening of the nasal cannula-pressure transducer signal. Respiratory effort was initially measured using mercury strain gauges around the thorax and abdomen, but at present, the most commonly utilized method is respiratory inductance plethysmography (RIP), which provides a more quantitative measure of effort and is able to provide an alternative flow signal using the sum of the chest and abdominal belts. Some montages utilized carbon dioxide sensors and esophageal pressure transducers as measures of ventilation and respiratory effort, respectively. Pulse oximeters were subsequently used to measure oxygen saturation (initially from the ear), and oxygen saturation has now become a standard part of the polysomnographic montage. Combining thermal sensors and a nasal cannula-pressure transducer device is the current recommended standard for measurement of airflow.
While other effort measures are available, RIP has become the standard methodology for respiratory effort measurement. Defining respiratory events The term obstructive sleep apnea syndrome (OSAS) was introduced in 1976 with findings of daytime hypersomnolence and polysomnographically proven obstructive apneas [22]. That article referred to the following definition: "An apnea has been defined as a cessation of airflow at the nose and mouth lasting at least 10 sec," which derived from an earlier study in 1975 [23][24][25][26]. The 10-second rule for scoring respiratory events was based on the average amount of time that would elapse if two regular breaths were skipped with the subject breathing at the usual respiratory rate. This definition was felt to differentiate pathological apneas from breathing in normal adults, as the authors noted: "Isolated, brief apneic episodes and spirographic abnormalities may appear normally at the onset of sleep and during REM sleep." [21,26,27]. As noted above, hypopneas were first described by Block et al. as events of shallow breathing causing oxygen desaturation [13], in which "flows in the nose and mouth decreased, chest movement decreased, and desaturation occurred"; desaturations were thought to be clinically noteworthy when a fall of 4% or greater from the preceding baseline occurred (see Figure 1 for a timeline of hypopnea definitions). Block and his colleagues had used a similar oxygen definition in previous studies and observed that it "could easily be seen on review of long tracings." The first case of sleep hypopnea syndrome, with frequent hypopneas but no apneas and clinical symptoms similar to obstructive sleep apnea syndrome, was described in 1988 by Gould and colleagues [15]. Initially, OSA was defined by 30 apneas over the course of the night, which later evolved to the apnea index (the number of apneas divided by hours of sleep), where a cutoff was set at 5 apneas/hour to diagnose the disease [25]. However, as hypopneas became part of the spectrum of sleep-disordered breathing, substantial variability was noted in the definition of hypopnea. In one of the largest sleep research projects of the 1990s, the Wisconsin Sleep Cohort Study (WSC), hypopneas were defined as clear decreases in the amplitude of a calibrated RIP signal accompanied by a 4% oxygen desaturation [28]. The Sleep Heart Health Study (SHHS), a large multicenter investigation, recognized the variability in hypopnea definitions that were in use. The authors used the ability of computerized PSG systems to combine data from airflow, oximetry, and EEG, calculated multiple measures of hypopnea based on a decrease in airflow or thoracoabdominal excursion of at least 30% of baseline for 10 seconds or more accompanied by variable degrees of oxygen desaturation or the presence of EEG evidence of arousal, and demonstrated the dramatic impact of varying hypopnea definitions on the calculated AHI [29,30]. In an early attempt to standardize event definitions, an American Academy of Sleep Medicine (AASM) task force in 1999 published a recommendation that apnea and hypopnea be considered equivalent, and that the apnea/hypopnea event be defined as "a clear decrease (≥50%) from baseline in the amplitude of a valid measure of breathing during sleep" or "a clear amplitude reduction of a validated measure of breathing during sleep" that does not reach the 50% threshold "but is associated with either an oxygen desaturation of >3% or arousal." [31].
The 1999 AASM task force also defined a more subtle breathing abnormality, the respiratory event-related arousal (RERA), with the following: "These events must fulfill both of the following criteria: (1) Pattern of progressively more negative esophageal pressure, terminated by a sudden change in pressure to a less negative level and an arousal, and (2) The event lasts 10 seconds or longer." They also recommended severity criteria based on the degree of sleepiness and the frequency of respiratory events, with the following severity grades based on the number of obstructive breathing events per hour: mild (5-15 events/hour), moderate (15-30 events/hour), and severe (>30 events/hour). It should be noted that these recommendations were based on expert consensus, with an acknowledgment that the data on which to base these event definitions and severity measures were limited. Ultimately, these thresholds may well be revised as data from interventional studies help to characterize those patients most likely to benefit from therapy. Additional variables beyond simply counting events will likely be required, analogous to using high-sensitivity C-reactive protein (hsCRP) as a guide to cholesterol lowering. The definitions of respiratory events have continued to evolve. The AASM Manual for the Scoring of Sleep and Associated Events was published in 2007 [32] with redefined rules for respiratory events. An apnea was scored as a drop in the peak thermal sensor excursion by ≥90% of baseline, lasting at least 10 seconds, with at least 90% of the event's duration meeting the amplitude reduction criterion for apnea. Apneas were then classified by the presence or absence of inspiratory effort. The Recommended rule for hypopneas required a nasal pressure signal excursion drop of ≥30% of baseline for at least 10 seconds with a ≥4% desaturation from the pre-event baseline, with 90% of the event's duration meeting the amplitude reduction criterion. The Alternative rule for hypopneas included two primary differences: a ≥50% reduction in the nasal pressure signal, and an association of the event with either a ≥3% desaturation or an arousal. A RERA was defined as an event of at least 10 seconds that did not meet the definition of hypopnea and was characterized by increasing respiratory effort or flattening of the nasal pressure waveform leading to an arousal from sleep [33,34]. While ideally the RERA is defined via use of an esophageal manometer, nasal pressure and inductance plethysmography are alternate measurement tools [33][34][35]. Version 2 of the AASM Scoring Manual was released in 2012 with a major shift in the Recommended hypopnea definition to a ≥30% drop in the flow signal for at least 10 seconds associated with either a ≥3% oxygen desaturation or an arousal, with the option to additionally report hypopneas using a definition requiring an associated ≥4% desaturation [27]. This change reflected a recognition that some patients whose events were not associated with a 4% desaturation were symptomatic and would benefit from treatment of OSA.
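As an illustration of how the 2012 Recommended rules translate into scoring logic, the sketch below classifies a single candidate event from its flow reduction, duration, desaturation and arousal status. The thresholds follow the definitions above, but the function and its scalar inputs are hypothetical simplifications: real scoring operates on raw signals and applies the 90%-of-duration amplitude criterion, sensor-specific rules, and effort-based apnea classification.

```python
def classify_event(flow_drop_pct, duration_s, desat_pct, arousal):
    """Simplified AASM 2012 classification of one candidate respiratory event.

    flow_drop_pct: peak signal excursion drop relative to baseline (%)
    duration_s:    event duration in seconds
    desat_pct:     associated oxygen desaturation from pre-event baseline (%)
    arousal:       True if the event terminates in an EEG arousal
    """
    if duration_s < 10:
        return "not scored"        # events must last at least 10 seconds
    if flow_drop_pct >= 90:
        return "apnea"             # >=90% drop in the flow signal
    if flow_drop_pct >= 30 and (desat_pct >= 3 or arousal):
        return "hypopnea"          # Recommended (2012) hypopnea rule
    if arousal:
        return "RERA"              # simplified: effort/flattening ending in arousal
    return "not scored"

print(classify_event(95, 14, 4.0, False))  # apnea
print(classify_event(40, 12, 2.0, True))   # hypopnea (arousal criterion)
print(classify_event(20, 15, 0.0, True))   # RERA
```

Swapping in the optional ≥4% rule (desat_pct >= 4, with no arousal criterion) changes only two tokens of this logic, yet can reclassify a large fraction of events.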
It was acknowledged that these two alternative event definitions yielded very different measures of OSA severity, and that "thresholds for identification of the presence and severity of OSA, and for inferring health-related consequences of OSA, must be calibrated to the hypopnea definition employed" [27]. However, no such change in recommended severity grading based on event frequency has been made by the AASM. The 2020 definitions from the AASM Scoring Manual v2.6 for apneas, hypopneas and RERAs are generally unchanged from the 2012 definitions. Just as the evolution of and variation in respiratory monitoring technology and the inconsistency in event definitions have caused confusion and complicated comparisons across studies, the terminology for the frequency of respiratory events has also been inconsistent. Early studies reported the frequency of apneas per hour of sleep as the apnea index (AI), but with the addition of the hypopnea as a physiologically relevant event, the term apnea-hypopnea index (AHI) came into use. At the same time, the term respiratory disturbance index (RDI) was also being used synonymously [27]. The 2007 AASM Scoring Manual defined the RDI to include RERAs in addition to hypopneas and apneas, measured per hour of sleep, thus formally differentiating it from the AHI [27,36]. Used commonly in early PSG when assessment of airflow and respiratory effort was less common, and still often reported today, the oxygen desaturation index (ODI) is defined as the number of transient falls in oxygen saturation per hour of sleep; the percent desaturation used to identify a desaturation event (typically 3% or 4%) must be specified. Home sleep apnea testing Technological advancements in sleep apnea measurement in recent years have been aimed at reliably measuring OSA in the unmonitored home setting, because of the greater convenience for the patient and the reduced cost of having the patient self-apply and record sleep and cardiopulmonary signals in the home. Home sleep apnea tests (also referred to as portable/ambulatory monitoring) used for OSA screening and/or diagnosis range from devices with 1-2 sensors (e.g. pulse oximetry or airflow) to multi-channel devices with multiple sleep and cardiopulmonary signals [37][38][39][40]. The definitions used to score sleep-disordered breathing events by these devices vary widely depending on the technology used and the signals that are recorded, making it difficult to compare data across devices [41]. Although home sleep apnea tests (HSATs) do not record the same set of signals as a full PSG, they all attempt to provide an index of OSA severity that is comparable to the PSG-derived AHI. Implementation of the AASM hypopnea definition that includes EEG arousal is not possible with most HSAT devices due to the absence of EEG recording. However, some HSAT devices use surrogates for EEG arousal, such as a change in snoring, a pulse rate change, or movement, to identify hypopneas [42]. The majority of HSAT devices use recording time or valid signal time as the denominator instead of total sleep time (TST) in the calculation of the AHI, as they do not include EEG sensors to differentiate sleep from wake. Typically, the difference between recording time and TST is about 20%, resulting in lower AHI values in the HSAT from this difference alone [43]. In order to differentiate indices that use recording time (including both sleep and wake periods) from those that use total sleep time (EEG-defined sleep time only), the AASM recommended use of the term Respiratory Event Index (REI).
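The distinctions among AHI, REI and ODI come down to what is counted and which time base divides it; the sketch below makes the denominators explicit, with event counts and times invented for illustration.

```python
def per_hour(n_events, hours):
    """Event rate per hour for a given time base."""
    return n_events / hours

# Invented example night
apneas, hypopneas, desats_3pct = 60, 90, 120
total_sleep_time_h = 6.0   # EEG-defined sleep time (PSG denominator)
recording_time_h = 7.5     # total monitoring time (typical HSAT denominator)

ahi = per_hour(apneas + hypopneas, total_sleep_time_h)   # 25.0/hour
rei = per_hour(apneas + hypopneas, recording_time_h)     # 20.0/hour
odi3 = per_hour(desats_3pct, total_sleep_time_h)         # 20.0/hour

print(f"AHI={ahi:.1f}/h, REI={rei:.1f}/h, ODI-3%={odi3:.1f}/h")
# The ~20% longer recording-time denominator alone lowers the
# HSAT-derived index relative to the PSG-derived AHI.
```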
However, an increasing number of HSATs are able to distinguish sleep from wake using limited EEG or alternate technology (such as a combination of arterial tonometry and other physiological signals) and provide an estimated measure of TST [40,44-46]. Despite these differences in technology and scoring, validation studies comparing HSAT to in-lab PSG data demonstrate reasonable agreement between OSA severity indices and adequate diagnostic sensitivity/specificity for OSA [47-50]. Because HSATs are less invasive, record data in the patient's natural environment and have the ability to record multiple nights (integrating the physiological night-to-night variability of OSA), these data may theoretically provide a more reliable measure of sleep-disordered breathing severity than a single night in the laboratory. While most portable sleep testing devices include a sensor to measure airflow, some alternative technologies, such as peripheral arterial tonometry (PAT), have been used to identify sleep-disordered breathing events without the use of airflow. The indices of OSA severity obtained using typical PAT devices, referred to as pAHI and pRDI, have been reported to be equivalent to the PSG-derived AHI [45,51]. Severity of residual OSA while patients use PAP therapy is measured using airflow sensors built into PAP devices. Without the ability to measure sleep, PAP devices use the recording time as the denominator for an AHI that is based on airflow change alone. Differences between PAP device-derived AHI and PSG have been described, although the clinical significance of these differences is unclear [52]. Strengths and weaknesses of the AHI In this section, we address the ability of the AHI to predict clinically relevant correlates of OSA, including patient-reported outcomes of daytime sleepiness and quality of life, motor vehicle and industrial accidents, hypertension, diabetes mellitus, CHD, stroke, heart failure, and death. In addition to the limitations of the AHI resulting from the inconsistent methodology described above, concern has been raised that the AHI fails to capture accurately the physiological abnormalities that underlie its neurocognitive, metabolic, and cardiovascular effects. Among the criticisms of the AHI is that it explains little of the variance in these symptoms or disease outcomes. In considering the limitations of the AHI as a predictor of these outcomes, it is important to recognize three potential sources of limited predictive ability: 1. Precision with which the AHI reflects the true OSA-related exposure that is the cause of adverse outcomes, due either to failure to reflect the operative pathophysiologic mechanisms accurately or to measurement error from night-to-night variability or scoring inaccuracy; 2. Individual differences in response to OSA, which may reflect multiple factors, including genetics, age, medication use, and comorbid conditions such as obesity, among others. These differences in response to OSA will limit the ability to predict outcomes even when exposure to airway obstructive events is measured precisely; 3. Competing (non-OSA) causes of outcomes of interest, which will impose an upper limit on the amount of variance of an outcome that can be predicted even when exposure to OSA is measured precisely. Therefore, where available, the predictive ability of the AHI is compared to other established risk factors for the outcome of interest.
Daytime sleepiness Excessive daytime sleepiness (EDS) has long been recognized as a cardinal symptom of OSA, as described in early reports from the 1970s [26,53]. Suggested mechanisms underlying EDS in OSA include sleep fragmentation [54], increased somnogenic circulating cytokines, and intermittent nocturnal hypoxemia [55][56][57][58], the latter possibly leading to neural cell injury and apoptosis affecting wake-promoting regions of the brain [59,60]. However, EDS is not universally present in patients with OSA, as some alternatively report fatigue or lack of energy [61], and others no symptoms at all. Indeed, the majority of patients with OSA do not report EDS, as identified in several geographically diverse studies reflecting an EDS prevalence of approximately 40% in OSA [62,63]. EDS prevalence in OSA can vary across comorbidities, with a relatively low prevalence in heart failure [64] and atrial fibrillation [65] compared to a higher prevalence in patients with asthma [66]. Epidemiologic data [67][68][69] from the SHHS and WSC [70] support monotonic relationships between increasing severity of OSA as defined by the AHI and an increasing percentage of those with EDS, associations that were present even in milder degrees of OSA. These studies, however, differ in terms of sex-specific differences in sleepiness symptoms in those with OSA. While the SHHS did not support differences, the WSC showed a higher degree of EDS in women versus men; specifically, 22.6% of women and 15.5% of men with an AHI-4% >5/hour reported sleepiness [70]. In these epidemiologic studies, however, fewer than half of those with moderate-to-severe OSA (defined as an AHI-4% ≥15/hour) report excessive sleepiness (defined as an Epworth Sleepiness Scale score ≥11/24) [68,71]. Clinic-based studies appear to be overall consistent in showing significant associations of EDS, mainly ascertained by the ESS but in some cases by objective multiple sleep latency testing and maintenance of wakefulness testing, with increasing degree of OSA defined by the AHI [72][73][74][75][76][77][78]. Numerous treatment trials have demonstrated that treatment of OSA results in improvements in both self-reported and objective measures of sleepiness. While severity of OSA assessed by the AHI is associated with improvement in sleepiness in some studies, the baseline severity of sleepiness is a better predictor of improvement, indicating the importance of the individual response to OSA as a marker of disease severity [79]. Quality of life Few studies have examined the relationship between OSA metrics and quality of life; however, a relationship between AHI and QOL appears present. In a study of 737 individuals in the community-based WSC, higher AHI-4% was associated with significantly lower scores on 6 of 8 SF-36 health status scales (mental health, vitality, physical functioning, social functioning, physical role, general health perception) in a dose-response fashion [80]. For example, compared to subjects without OSA (AHI = 0) and after adjustment for confounders including BMI, general health perception was 3.6, 5.6, and 7.0 points lower in patients with mild, moderate, and severe OSA as defined by the AHI (the mean score in the cohort was 72.5). The decrements in general health perception associated with moderate and severe OSA are similar to those found with other common diseases such as arthritis (reduced by 7.3), hypertension (reduced by 3.5), and back problems (reduced by 4.4), though the values are less than those seen in diabetes or angina (12.8 and 13.2, respectively) [81].
These results are generally consistent with those from the SHHS cohort, in which participants with severe OSA (AHI-4% ≥30/hour) demonstrated significantly reduced quality of life across a variety of domains. However, in the SHHS, only the vitality score of the SF-36 demonstrated a consistent linear relationship with the AHI [82][83][84]. CPAP therapy may improve some domains of generic QOL, particularly with respect to physical functioning. However, the impact of CPAP appears more robust for sleep-specific quality of life instruments such as the SAQLI or FOSQ [85]. Motor vehicle crashes and occupational injuries There is a presumed link between OSA and the occurrence of motor vehicle crashes (MVC) [86], through the impact of sleep fragmentation on vigilance and reaction time. Although many other factors also contribute to the occurrence of MVC, e.g., the number of miles driven (a measure of exposure), driver experience, sleep duration, circadian factors, and age, numerous studies have documented increased rates of MVC in patients with OSA. In a meta-analysis of 10 studies [87], patients with OSA (as assessed by standard indices such as the AHI) had a significantly and substantially increased relative risk of MVC compared to those without OSA (RR = 2.43, 1.21-4.89, p = 0.01). Evidence of an association between disease severity (as measured by the AHI) and rates of crashes is inconclusive. Three studies had sufficient data to generate a pooled estimate of the relationship between AHI and MVC risk in patients with OSA. In these three studies, there was a trend towards a greater AHI, by approximately 10/hour, in OSA patients who had a crash vs. those who did not (standardized mean difference in AHI between groups = 0.27, p = 0.055). Of eight studies not included in the analysis, three found that severity of OSA was associated with crash risk, but five did not. However, in a more recent community-based study (SHHS), a significant increase in MVC risk was noted with increasing AHI-4% (OR 1.15, 95% CI 1.07-1.26 for every 10 event/hour increase in AHI-4%), after adjustment for age, sex, miles driven, usual sleep duration and excessive sleepiness (based on a score ≥11/24 on the Epworth Sleepiness Scale) [88]. The MVC risk associated with a 10-unit increase in AHI was slightly greater than the MVC risk associated with habitually sleeping one hour less per night. Fewer studies have examined the impact of OSA on occupational injuries, and rates of occupational injuries are lower than those of MVC and highly dependent on job type/responsibilities. Nevertheless, studies have consistently demonstrated an increased risk of occupational injuries in OSA patients, as recently reviewed [89]. In this analysis of 7 studies, patients with OSA had an increased risk of occupational injuries (OR = 2.18, 1.53-3.1). In four of the studies, OSA was defined according to PSG or polygraphy, with OSA diagnosed based on a threshold AHI (5-10/hour). When only these studies were considered, the OR was 1.78 (1.03-3.07), again consistent with a substantial risk associated with OSA as documented by the AHI. In a subsequent study of 1109 workers sent for a sleep study (PSG), sleep apnea severity (log (AHI+1)) was significantly associated with the occurrence of occupational injuries (OR = 1.31, 95% CI 1.02-1.73, p = 0.04) after controlling for confounders [90]. Patients with moderate-to-severe OSA had rates double those of patients without OSA (OR 1.99, 95% CI 0.96-4.44 and OR 2.00, 95% CI 0.96-4.49 for the moderate and severe OSA groups, respectively); this increase in OR was similar to the increased odds for workers in a physical/manual-related industry vs. those who were not (OR = 2.28).
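To make the log(AHI+1) modelling concrete, the sketch below fits a logistic regression of injury occurrence on log-transformed AHI using statsmodels; the cohort is simulated (the coefficients are chosen only so the example loosely resembles the reported effect size), and the resulting odds ratio is per one-unit increase in log(AHI+1), not per event/hour.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated cohort of 1109 workers: AHI values and injury outcomes
ahi = rng.gamma(shape=2.0, scale=8.0, size=1109)
log_ahi = np.log(ahi + 1)
p_injury = 1 / (1 + np.exp(-(-2.5 + 0.27 * log_ahi)))  # assumed true model
injury = rng.binomial(1, p_injury)

# Logistic regression: injury ~ log(AHI + 1)
X = sm.add_constant(log_ahi)
fit = sm.Logit(injury, X).fit(disp=0)

or_per_unit = np.exp(fit.params[1])  # OR per 1-unit increase in log(AHI+1)
ci = np.exp(fit.conf_int()[1])       # 95% CI on the same scale
print(f"OR = {or_per_unit:.2f}, 95% CI = {ci.round(2)}")
```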
Patients with OSA adherent to CPAP have rates of MVC similar to those of individuals without OSA [91]; the extent to which CPAP therapy might reduce occupational injury risk is not clear. However, there is no evidence that the AHI metric per se predicts a reduction in crash risk with therapy. Hypertension OSA is strongly associated with both prevalent and incident hypertension, although this effect appears weaker in older populations [92][93][94][95][96][97][98]. In the WSC and the SHHS, mean systolic and diastolic blood pressure (among those not using antihypertensive medications) and prevalent hypertension increased linearly with OSA severity as measured by the AHI-4%, after adjusting for age, sex, and body habitus. The magnitude of these associations was large: in the SHHS, despite a prevalence of hypertension of 43% in the referent category, AHI-4% ≥30 was associated with an adjusted OR of 1.47 compared to those with AHI-4% <1.5; in the WSC, for an individual with a BMI of 30, the estimated OR for hypertension was 1.21, 1.75, and 3.07 for AHIs of 5, 15, and 30, respectively, compared to an AHI of zero. In the WSC, there was also a dose-dependent association of OSA, as measured by the AHI, with incident hypertension. After adjustment for age, sex, and body habitus, compared to an AHI of zero, the OR for incident hypertension was 2.03 for AHI-4% 5-14.9 and 2.89 for AHI-4% ≥15/hour [95]. No significant association with incident hypertension was seen in the SHHS cohort, however [98]. A recent meta-analysis of observational studies found that both prevalent and incident hypertension increased with increasing severity of OSA as measured by the AHI [99]. Although comparisons of the AHI to other hypertension risk factors have rarely been reported, in a study of 372 adults aged 68 (SD 1) years, multivariate regression found that severe OSA (defined as AHI-3% >30/hour) was more strongly associated with incident hypertension than was male sex or BMI ≥30 [96]. Numerous studies have found that treatment of OSA with CPAP lowers blood pressure [100][101][102]. Few studies have looked in detail at PSG predictors of the blood pressure response to CPAP. One study found that a higher baseline AHI was associated with a greater fall in BP with treatment [103], while another found that time at saturation <90% was predictive [104]. In general, however, there is no consistent relationship between AHI and the degree of blood pressure reduction with OSA treatment [105][106][107][108]. Conversely, the severity of blood pressure elevation at baseline is strongly associated with blood pressure reduction following PAP therapy, with particularly large effects noted in patients with resistant hypertension, again emphasizing the importance of the individual response to OSA as a marker of disease [103,109]. Coronary artery disease A strong cross-sectional association of OSA with coronary heart disease (CHD) has been reported. In several case-control studies from Sweden comparing patients with CHD to controls free of known CHD, OSA was independently associated with CHD in both men and women [110][111][112][113]. Peker et al.
found that an AHI ≥10 had an adjusted odds ratio (aOR) of 3.1 (95% CI 1.2-8.3) for CHD, similar to the effect of diabetes mellitus in this sample (aOR 4.2, 95% CI 1.1-17.1), and greater than that of either hypertension or hyperlipidemia. Mooe et al. found that in men, an AHI ≥14/hour was associated with CHD with an aOR of 4.5, nearly identical to the effect of hypertension (aOR 4.2), diabetes mellitus (aOR 4.3), or a five-unit increase in BMI (aOR 4.8), and considerably stronger than a positive smoking history (aOR 1.6 for current or former smoking). In women, an AHI ≥5/hour was associated with CHD with an aOR of 4.1, greater than that of hypertension (aOR 3.4), smoking history (aOR 2.4), or BMI (not significant), although not as strong as diabetes (aOR 6.8). In a cross-sectional analysis of baseline data from the community-based SHHS, a considerably weaker association of AHI with CHD was noted, with AHI-4% >4.4/hour having an aOR of 1.2 after extensive covariate adjustment, with no increase in risk at higher levels of AHI [114]. Community-based prospective studies of the relation of OSA to incident CHD do not provide consistent results. Using the unconventional reference group of participants with an AHI of 0, the Wisconsin Sleep Cohort Study found that among participants not using CPAP, the age-, sex-, BMI-, and smoking-adjusted hazard ratio (aHR) for incident CHD was 2.4 (95% CI 1.0-6.0) for those with AHI-4% ≥30/hour [115]. The aHR ranged from 1.6 to 1.8 for the groups with AHI >0-<5, 5-<15, and 15-<30, and the overall trend was not statistically significant. In the SHHS, the age-, race-, BMI- and smoking-adjusted association of OSA with incident CHD was significant only in men, with excess risk limited to those with AHI-4% ≥30/hour [116]. After further adjustment for lipids, diabetes mellitus, hypertension, and blood pressure, the association of OSA with incident CHD was significant only in those under age 70. In the Busselton Health Study cohort, using a MESAM IV device (a type of HSAT) to measure respiratory events, moderate-to-severe OSA was not associated with incident CHD (for AHI ≥15/hour compared with AHI <5/hour, aHR 1.1, 95% CI 0.24-4.6) [117]. Clinic-based prospective studies have also yielded mixed results regarding the association of OSA with incident or recurrent CHD, and many are difficult to interpret due to analysis of CPAP-adherent versus CPAP-non-adherent patients, with a high risk of bias due to the healthy user effect. Several recent, high-profile randomized clinical trials in non-sleepy patients with elevated AHI have failed to demonstrate a reduction in CHD, stroke or death with PAP therapy [118][119][120]. Stroke A recent meta-analysis (which included both 3% and 4% desaturation criteria) of 86 studies comprising 7096 stroke patients found that 71% had an AHI >5/hour and 30% had an AHI >30/hour [121]. The prevalence was similar in studies performed within 1 month to >30 months following stroke. OSA is also associated with incident stroke in both community-based and clinical cohorts [117,122-129]. In cohorts in which both outcomes have been studied, the association of OSA with incident stroke is considerably stronger than its association with CHD. The WSC found that an AHI-4% ≥20/hour was associated with an increased risk of first-ever stroke over a 4-year follow-up period (OR 4.3, 95% CI 1.3-14.2); the magnitude of effect was somewhat lower and not statistically significant after adjusting for BMI (OR 3.1, 95% CI 0.74-12.8), with wide confidence intervals due to the overall low stroke event rate in this relatively young cohort [122].
with wide confidence intervals due to the overall low stroke event rate in this relatively young cohort [122]. The SHHS found that the risk of incident stroke was higher in men with moderate-to-severe OSA in adjusted analyses (aHR = 2.9, 95% CI 1.1-7.4) [123]. This effect was not observed in women. One of the first clinic-based studies designed to examine OSA and stroke found that at an AHI >5, incident stroke and mortality were increased by nearly 2-fold (HR 1.97, 95% CI 1.12-3.48) [125]. In a study of 392 patients with CHD who were screened for OSA, the presence of an AHI-3% ≥5 was associated with an adjusted HR for incident stroke of 2.9 (95% CI 1.4-6.1), stronger than the risk associated with type 2 DM, hypertension, current smoking, or atrial fibrillation [127]. A clear dose-response relationship between the AHI and risk of stroke has generally not been observed, however. While prospective observational studies suggested a decreased risk of stroke in patients who were treated with CPAP compared to untreated patients [128,130,131], these studies have a high risk of bias due to the comparison of CPAP-adherent versus non-adherent patients. Randomized clinical trials in patients with stroke have not demonstrated a reduction in stroke risk, although they have been small and of limited power. No reduction in stroke risk with CPAP was noted in large randomized trials of patients with cardiovascular disease and OSA, although these studies were also not powered to detect a change in stroke risk per se [119,120,128,132,133].

Mortality

Conventional OSA metrics of AHI and measures of hypoxemia are strongly associated with mortality risk in the general population. In the WSC, the Busselton Health Study, and the SHHS, after adjusting for age, sex, BMI, and prevalent medical conditions, mortality risk increased with the AHI [134][135][136]. Although there was not always a clear monotonic increase with AHI, the adjusted mortality hazard was generally higher with increasing OSA severity as measured by the AHI. In the WSC, for example, among participants not treated with positive airway pressure, the adjusted hazard ratios for all-cause mortality in those with mild (AHI-4% 5.0-14.9/hour), moderate (AHI-4% 15.0-29.9/hour), and severe (AHI-4% ≥30.0/hour) OSA were 1.4 (95% CI 0.7-2.6), 1.7 (95% CI 0.7-4.1), and 3.8 (95% CI 1.6-9.0), respectively, compared to those with AHI-4% <5. In the SHHS, percent time at SpO2 <90% was also associated with increased mortality, although less strongly than the AHI, while the arousal index was not a predictor of mortality. Increased mortality has long been reported in untreated OSA patients in sleep clinic-based cohorts [137,138]. Where it has been evaluated, mortality risk in these cohorts increases with increasing severity of OSA as measured by the AHI [130,139,140]. The mortality risk associated with OSA compared to other mortality risk factors has been reported in a small number of studies. In the Busselton Health Study, the unadjusted HR for AHI ≥15 was 5.0 (95% CI 2.0-12.2), similar to that of diabetes (HR 4.0), a decade older age (HR 3.6), and current smoking (HR 3.8), and larger than that of a 10 mmHg increase in mean arterial pressure (HR 1.7) [136]. Two clinic-based studies from Spain have reported adjusted HRs for OSA that are comparable to other important mortality risk factors, although these studies report the risks associated with OSA in patients who have declined treatment and must therefore be interpreted with caution.
Martinez-Garcia et al. reported that compared to those with AHI-4% <15, the aHR for untreated moderate OSA (AHI-4% 15-<30/hour) was 1.4 and for untreated severe OSA (AHI-4% ≥30/hour) was 2.3, while smoking ≥30 pack-years was associated with an aHR of 1.5, diabetes mellitus with an aHR of 2.3, and age with an aHR of 1.8 per decade [140]. Similarly, Campos-Rodriguez reported that compared to those with AHI-4% <10, those with an untreated AHI-4% of 10-29 had an aHR of 1.6 and those with an untreated AHI ≥30 an aHR of 3.5, while diabetes mellitus was associated with an aHR of 1.4, hypertension with an aHR of 2.4, and age with an aHR of 1.6 per decade [139]. Both community-based and clinical cohorts suggest that the association of OSA with mortality is stronger in those under age 70 than in older adults [135,141]. It is unclear whether this reflects an increase in competing causes of mortality or a different physiological response to OSA in the elderly. No adequately powered randomized clinical trials of OSA treatment to reduce mortality have been conducted, and no significant reduction in mortality has been reported in cardiovascular secondary prevention trials of CPAP therapy [119,120].

Alternative metrics

The AHI has been the most commonly used metric of sleep apnea severity for decades, but the methodologic problems described above are widely recognized. In addition, failure to adequately quantify the mechanisms that underlie the pathophysiologic consequences of OSA likely contributes to the limited ability of the AHI per se to predict clinical consequences of OSA or the response to OSA treatment. Thus, alternative metrics of disease severity have been proposed based on advanced signal processing and other sophisticated analyses [142]. We summarize here some of these alternative metrics, recognizing that many have been proposed and no ideal metric has yet emerged. Indeed, as OSA is now considered a heterogeneous disease both from the perspective of underlying mechanisms (endotypes) and in terms of clinical manifestations (phenotypes) [1,143,144], it is quite likely that no single metric will adequately characterize all aspects of OSA and its related risks (Figure 2).

1. Hypoxic burden. There is general agreement that hypoxia, particularly when severe, has deleterious effects on cardiometabolic function. The degree and frequency of desaturation are commonly quantified, but investigators have now captured the area under the oxyhemoglobin saturation curve as a metric of hypoxic burden. Azarbarzin et al. reported that this measure of hypoxic burden was easily derived from an overnight sleep study and was predictive of mortality from cardiovascular disease in two community-based cohorts [145,146]. In both the SHHS and the Study of Osteoporotic Fractures in Men [147], cardiovascular mortality increased progressively with increasing hypoxic burden, an effect that was not diminished by adjustment for the AHI or more conventional polysomnographic measures of hypoxia, including minimum saturation or percent time at saturation <90%. In the SHHS, the upper quintile of hypoxic burden had an adjusted HR of 1.96 (95% CI 1.11-3.43) for cardiovascular mortality. By contrast, the AHI was not significantly predictive of cardiovascular mortality in these cohorts. The findings suggest that not only the frequency but also the depth and duration of sleep-related upper airway obstructions are important disease-characterizing features.
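A minimal computational sketch of an event-based hypoxic burden follows, assuming a uniformly sampled SpO2 trace and pre-scored event termination times. The sampling rate, baseline window, and post-event integration window below are illustrative assumptions, simplified relative to the published method; a T90 helper is included for comparison with the simpler metric discussed next.

```python
# Minimal sketch of an event-based hypoxic burden, plus T90 for comparison.
# Assumptions (simplified vs. the published method): SpO2 sampled uniformly
# at 1 Hz; baseline = mean SpO2 in the 100 s before each scored event's
# termination; desaturation area integrated over a fixed 45 s window after
# termination. Units: (%SpO2 x min) per hour of sleep.
import numpy as np

FS = 1.0          # Hz, assumed SpO2 sampling rate
BASE_WIN = 100.0  # s, pre-event window used to estimate baseline
SEARCH = 45.0     # s, post-termination integration window

def hypoxic_burden(spo2, event_end_times, sleep_hours):
    spo2 = np.asarray(spo2, dtype=float)
    t = np.arange(len(spo2)) / FS
    area = 0.0  # accumulated desaturation area, in %*s
    for t_end in event_end_times:
        baseline = spo2[(t >= t_end - BASE_WIN) & (t < t_end)].mean()
        window = spo2[(t >= t_end) & (t < t_end + SEARCH)]
        deficit = np.clip(baseline - window, 0.0, None)  # desaturation only
        area += deficit.sum() / FS
    return (area / 60.0) / sleep_hours

def t90(spo2, sleep_hours):
    """Percent of sleep time spent with SpO2 below 90% (the T90 metric)."""
    seconds_below = (np.asarray(spo2) < 90.0).sum() / FS
    return 100.0 * seconds_below / (sleep_hours * 3600.0)
```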
Prior authors had quantified the T90 (the duration of saturation below 90%), a less OSA-specific measure of hypoxic burden, which was also predictive of important outcomes, including platelet aggregation as well as overall mortality [148]. In contrast, recent post-hoc analyses of the SAVE (Sleep Apnea Cardiovascular Endpoints) study have shown minimal predictive value of desaturation indices from the standpoint of an incident composite cardiovascular outcome [149]. Of note, pulse oximeters have evolved over the years but vary in sensitivity, time constants, etc., emphasizing the importance of methodological details in yielding robust conclusions.

2. Arousal intensity. Amatoury et al. have quantified arousal intensity as a potentially important physiological variable [150]. The authors hypothesized that arousal from sleep can vary in intensity, with some arousals being quite subtle (and not captured by traditional EEG criteria), whereas others are more robust, leading to complete awakening from sleep [151]. The average arousal intensity was not related to the magnitude of the preceding respiratory stimuli but was positively associated with arousal duration, time to arousal, and rate of change in epiglottic pressure, and negatively with body mass index (R2 > 0.10, p ≤ 0.006). The authors concluded that the average arousal intensity is independent of the preceding respiratory stimulus. This finding is consistent with arousal intensity being a distinct pathophysiological trait. Respiratory and pharyngeal muscle responses increase with arousal intensity; thus, patients with higher arousal intensities may be more prone to respiratory control instability. Prior work on 'subcortical arousals' had noted fragmentation of sleep with important effects on daytime function that was not captured by traditional EEG criteria. Azarbarzin and colleagues compared arousal intensity with the change in heart rate associated with obstructive events in a sample of 20 PSGs from patients attending a sleep laboratory. They found a strong correlation between these measures for a given individual (average r: 0.95 ± 0.04), consistent with the concept that arousal intensity is a marker of autonomic activation [152]. A heart rate response to obstructive events in the upper quartile of the population was associated with an increased risk of both fatal (HR 1.68, 95% CI 1.22-2.30) and non-fatal (HR 1.60, 95% CI 1.28-2.00) cardiovascular disease events in the Sleep Heart Health Study, and this risk was particularly great in those who also had a large hypoxic burden [153]. Of note, arousal scoring can also be quite variable, subject to inter-observer and intra-observer variability. The autonomic response to arousal is likely an important factor underlying the pathophysiology of OSA complications [154]. However, the predictive value of arousal intensity per se for hard cardiovascular events is untested and will require further study.
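A minimal sketch of the per-event heart rate response described above; the symmetric 10-second windows around event termination are an illustrative assumption, not necessarily the windows used in the cited analyses.

```python
# Minimal sketch of the per-event heart-rate response: mean heart rate in
# an assumed 10 s window after event termination minus the mean in the
# 10 s before it, averaged across all scored events.
import numpy as np

def heart_rate_response(hr_t, hr_bpm, event_ends, pre=10.0, post=10.0):
    """Average post-event minus pre-event heart rate across scored events.

    hr_t       : sample times (s) of the heart-rate trace
    hr_bpm     : instantaneous heart rate (beats/min)
    event_ends : termination times (s) of scored respiratory events
    """
    hr_t = np.asarray(hr_t)
    hr_bpm = np.asarray(hr_bpm)
    deltas = []
    for t_end in event_ends:
        pre_m = (hr_t >= t_end - pre) & (hr_t < t_end)
        post_m = (hr_t >= t_end) & (hr_t < t_end + post)
        if pre_m.any() and post_m.any():
            deltas.append(hr_bpm[post_m].mean() - hr_bpm[pre_m].mean())
    return float(np.mean(deltas)) if deltas else float("nan")
```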
3. Odds ratio product (ORP). The ORP is a more recent metric that quantifies sleep depth [155]. It is derived from quantitative analyses of the EEG using various power spectral measures. ORP values can range from 0 to 2.5, with values of 0 to 1.0 predicting sleep and 2.0 to 2.5 predicting wakefulness. Although ORP can vary significantly within any particular stage of sleep, ORP values overlap across different stages of sleep. As with other metrics of sleep quality, there is substantial night-to-night variability in ORP values [156]. ORP values have also been shown to improve with CPAP therapy in patients with OSA. ORP during sleep has also been related to excessive wake time and sleep depth in those with OSA and/or PLMs [157]. The correlation between the right- and left-hemisphere ORP measures (interhemispheric sleep depth coherence) may be a measure of susceptibility to adverse neurocognitive outcomes in sleep apnea. A recent study analyzing SHHS data found that interhemispheric sleep depth coherence in patients with sleep apnea predicted the reported occurrence of car accidents 2 years after the sleep study. Those in the highest quartile of sleep depth coherence had a 57% lower risk of accidents compared to the lowest quartile, independent of important confounders including AHI, reported sleepiness, and usual sleep duration [158]. Thus, measures of ORP may predict cognitive consequences of OSA. However, its relationship to cardiovascular outcomes is untested.

4. Cardiopulmonary coupling (CPC). Thomas et al. developed an automated technique to assess CPC during sleep using a single-lead EKG signal [159]. From a continuous, single-lead electrocardiogram, the authors extracted both the normal-to-normal sinus interbeat interval series and a corresponding electrocardiogram-derived respiration signal. Employing Fourier-based techniques, the product of the coherence and cross-power of these two simultaneous signals was used to generate a spectrographic representation of cardiopulmonary coupling dynamics during sleep. This technique shows that non-rapid eye movement sleep in adults demonstrates spontaneous abrupt transitions between high- and low-frequency cardiopulmonary coupling regimes, which have characteristic electroencephalogram, respiratory, and heart-rate variability signatures in both health and disease. Using the kappa statistic, agreement with standard sleep staging was poor (training set 62.7%, test set 43.9%) but higher with cyclic alternating pattern scoring (training set 74%, test set 77.3%). The authors concluded that a sleep spectrogram derived from information in a single-lead electrocardiogram can be used to track cardiopulmonary interactions dynamically. This technique may provide a complementary approach to the conventional characterization of graded non-rapid eye movement sleep stages. CPC-derived measures are correlated with conventional metrics derived from PSG, including the AHI [160]. A number of studies have linked CPC measures to outcomes, including predicting early response in depressed patients and improvements in sleep quality with CPAP therapy, upper airway surgery, and mandibular advancement devices (MAD) [161][162][163][164]. While these data suggest the potential usefulness of CPC, further studies are needed to determine whether CPC measures add value to the AHI and other available time/frequency analyses (e.g., cardiovascular entropy).
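A minimal sketch of the core CPC computation described above (coherence multiplied by cross-power of the NN-interval and ECG-derived respiration series). The resampling rate, window length, and band edges are illustrative assumptions rather than the published implementation.

```python
# Minimal sketch of a cardiopulmonary-coupling (CPC) style metric following
# the general recipe above: coherence x cross-power of the NN-interval
# series and an ECG-derived respiration (EDR) signal, summed over a band.
import numpy as np
from scipy.signal import coherence, csd

FS = 2.0  # Hz; resampling rate for both beat-indexed series (assumption)

def cpc_band_power(nn_times, nn_intervals, edr_values, band):
    """Coherence x cross-power of NN and EDR series, summed over a band.

    nn_times     : times (s) of normal beats
    nn_intervals : normal-to-normal interbeat intervals (s)
    edr_values   : ECG-derived respiration amplitude at the same beat times
    band         : (lo_hz, hi_hz); e.g. low- (<0.1 Hz) vs high-frequency
                   (0.1-0.4 Hz) coupling regimes, band edges assumed
    """
    nn_times = np.asarray(nn_times)
    # Resample both beat-indexed series onto a uniform time grid.
    t = np.arange(nn_times[0], nn_times[-1], 1.0 / FS)
    nn_u = np.interp(t, nn_times, np.asarray(nn_intervals))
    edr_u = np.interp(t, nn_times, np.asarray(edr_values))

    # Coherence and cross-spectral power over the same windows.
    f, coh = coherence(nn_u, edr_u, fs=FS, nperseg=1024)
    _, pxy = csd(nn_u, edr_u, fs=FS, nperseg=1024)

    coupling = coh * np.abs(pxy)          # coherence x cross-power
    m = (f >= band[0]) & (f < band[1])
    return float(coupling[m].sum())
```

Comparing this band-summed coupling between low- and high-frequency bands, window by window, is what produces the spectrographic representation of abrupt regime transitions described above.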
5. Apnea-hypopnea event duration. Event duration has been quantified by Butler et al. [165]. The authors analyzed data from the SHHS and observed a potentially important relationship between the duration of respiratory events and the overall mortality seen in the SHHS. The authors had previously shown event duration to be heritable, although the mechanisms driving the duration of respiratory events are unclear. In theory, short respiratory events could reflect a low arousal threshold (propensity to wake up), but in addition, an individual with unstable ventilatory control (high loop gain) may also terminate respiratory events more quickly than an individual with low loop gain. Regardless, the authors showed that short respiratory event duration was predictive of mortality in men and women. After adjusting for demographic factors (mean age, 63 years; 52% female), apnea-hypopnea index (mean, 13.8/hour; SD, 15.0), smoking, and prevalent cardiometabolic disease, individuals with the shortest-duration events had a significant hazard ratio for all-cause mortality of 1.31 (95% confidence interval, 1.11-1.54). The authors surmised that individuals with shorter respiratory events may be predisposed to increased ventilatory instability and/or have augmented autonomic nervous system responses that increase the likelihood of adverse health outcomes. Of note, however, short respiratory events are likely associated with less hypoxic burden compared to long respiratory events, leading to some inconsistency or complexity in the predictive value of the various metrics. Given the interest in personalized medicine in OSA, event duration may be one factor to consider when identifying patients at high risk of mortality from OSA.

Future directions

Although the AHI as determined from laboratory-based PSG has been considered the gold standard metric of OSA severity (see Figure 3; [135]), financial pressures and the scale of the problem have driven increased reliance on home sleep apnea testing. This change has been further accelerated by the COVID pandemic, during which many patients prefer the convenience and presumed safety of home-based testing [166]. While OSA as identified using the AHI is strongly associated with neurocognitive, metabolic, and vascular outcomes, it is clear that the AHI as a single metric is neither adequate nor sufficient to define the presence or characterize the severity of OSA. This notion is evidenced by the lack of reported symptoms in many patients with severely elevated AHI and by the failure of the AHI alone to identify patients who experience cardiovascular benefit from PAP therapy. We are therefore highly supportive of the development of novel techniques to capture OSA occurrence and to predict its complications. Given the varying biology underlying each organ system, we expect that the ideal metric of OSA severity will vary by the complication of interest. Such severity metrics will likely require some combination of measuring (1) the magnitude of the OSA stimulus (e.g., using measures of gas exchange abnormality such as hypoxic burden, considering the mechanism of obstructive vs. central apnea), (2) individual responses to the stimulus (e.g., assessment of the autonomic nervous system or EEG), and (3) individual response to therapy (e.g., improvement in sleepiness or reduction in blood pressure) [167]. Many approaches could be taken to quantify sleep apnea severity better. Notwithstanding the limitations of the AHI, it is important that, as novel metrics are developed, their ability to improve upon the AHI as markers of prognosis or predictors of response to therapy be formally tested and replicated across populations of interest. A number of strategies are proposed:

1. Evaluation of symptom subtypes. Severity of sleepiness was included in the 1999 AASM recommendations for classifying OSA severity, although in the absence of a reproducible standard for assessing sleepiness, this severity metric was not retained in subsequent recommendations. Consideration of the individual's symptomatic response to OSA therapy may be particularly important, however. Ye et al. showed via cluster analyses three distinct groups of OSA patients: those who are minimally symptomatic, those with disrupted sleep, and those with EDS [63].
In the SHHS, an increased risk of incident total cardiovascular disease, CHD, and heart failure was seen only in a cluster characterized by excessive sleepiness [143]. Similarly, in a different study, mortality risk was increased only in those OSA patients who reported excessive sleepiness [168]. It is speculated that the failure of recent clinical trials to demonstrate a reduction in cardiovascular risk with PAP therapy may reflect the exclusion of sleepy patients from these trials. Thus, sleepiness may be a marker of individual response to OSA that also reflects susceptibility to the cardiovascular effects of OSA. This idea warrants further investigation of symptoms as a measure of OSA severity or as a metric of susceptibility to OSA complications.

2. Genetics. Genetic factors are likely to play an important role in these individual differences, as suggested by the trait-like behavior of individual vulnerability to cognitive impairment from sleep deprivation [169][170][171]. While OSA has long been recognized to be a complex heritable trait, studies evaluating the genetic causes of OSA and its component endotypes have only recently begun. This situation reflects the lack of availability of sleep testing in many longitudinal cohort studies. Further investigation of the genetic architecture of OSA is strongly encouraged, as this approach is likely to help explain individual differences in susceptibility to OSA and its clinical consequences.

3. Blood biomarkers. Panels of biomarkers have the potential to identify causal pathways affected by OSA, and thus to provide important prognostic and predictive information [172]. For example, quantifying inflammation, autonomic function, and oxidative stress pathways should provide insights into the risk of OSA-associated cardiovascular disease. The assessment of microRNAs and exosomes has also led to important insights, both in terms of OSA biomarkers and potential therapeutic targets addressing OSA complications [173][174][175][176]. In addition to such hypothesis-driven biomarker panels, hypothesis-free methods of biomarker discovery are becoming increasingly accessible to the sleep field. Metabolomic, lipidomic, proteomic, and gene expression profiles made possible by advances in mass spectrometry, microarray, and other technologies are being investigated with the potential to identify new biomarkers of OSA [177][178][179][180][181]. In theory, these techniques could identify diagnostic tests for OSA, prognostic markers for sleepiness and other consequences of OSA, new therapeutic targets to prevent OSA complications, and markers that can be followed to monitor successful therapy.

4. Machine learning. Machine learning and other hypothesis-free deep learning methods that identify complex patterns in empirical data are gaining traction in various medical applications. In addition to application to biomarker discovery, these methods could be applied to sophisticated signal processing of polysomnographic and other data to identify previously unrecognized patterns, and thus complement the hypothesis-driven approaches described in the section on alternative metrics above. Such methods will require appropriate validation and training sets but are being greatly facilitated by the availability of Big Data [182][183][184].

5. Wearable technologies. Wearable technologies are becoming ubiquitous and provide new opportunities to gain insight into pathophysiological abnormalities related to sleep and sleep disorders.
For example, data from one device support its performance relative to PSG [185][186][187]. Ongoing studies are comparing various simplified technologies for OSA diagnosis, although none is yet able to replace PSG. Wearable technologies provide the opportunity to record data inexpensively over multiple nights and extended periods, and future studies should assess the value of these longitudinal data (with variable parameters from each device) in improving the prediction of OSA-associated morbidity, including cardiovascular risk. Novel measures that improve upon the AHI for the diagnosis and severity classification of OSA will be particularly transformative if they help to capture the variability in OSA endophenotypes. This concept may facilitate the design of robust adaptive randomized clinical trials that allow patients at risk of particular complications, and amenable to specific interventions, to be studied rigorously. Novel OSA metrics should be developed with an eye toward making complex measurements accessible to clinical practitioners, so that scientific advances can be readily disseminated into practice to improve patient care.
Influence of serum collected from rat perfused with compound Biejiaruangan drug on hepatic stellate cells.

AIM
To observe the effect of compound Biejiaruangan decoction (CBJRGC) (a composite prescription of Carapax trionycis for softening the liver) on the proliferation, activation, and secretion of collagen and cytokines by hepatic stellate cells (HSCs), and to explore the mechanism by which CBJRGC prevents and treats hepatic fibrosis.

METHODS
Using MTT, immunohistochemistry, and image analysis technology, indexes of proliferation, activation, and secretion of collagen and cytokines by hepatic stellate cells were measured at 24 h, 48 h, and 72 h after administration of different dosages of CBJRGC.

RESULTS
Statistical analysis showed that serum collected from rats perfused with CBJRGC could restrain the proliferation of HSC at 48 h and 72 h, especially in the high and medium dosage groups; markedly decrease the expression of desmin, synapsin, and platelet-derived growth factor (PDGF) in HSC at 24 h, 48 h, and 72 h, as well as the expression of alpha-SMA, collagen III, TIMP, and TGF-beta1 at 48 h and 72 h; and decrease the secretion of collagen I at 72 h. CBJRGC serum had no significant effect on collagens I, III, and TIMP at 24 h.

CONCLUSION
CBJRGC serum has a good curative effect on hepatic fibrosis. Its main mechanism may be related to the following factors. The drug serum can restrain the proliferation and activation of HSC; decrease the number of activated HSCs, the total number of HSCs, and the secretion of collagens I and III; enhance the degradation of collagen and restore the balance between synthesis and degradation of collagen; inhibit the expression of transforming growth factor beta1 (TGF-beta1) and platelet-derived growth factor (PDGF) in HSC; and block and delay the process of hepatic fibrosis. Synapsin is a new marker of HSC activation, which provides a theoretical and experimental basis for neural regulation in the development of hepatic fibrosis.

INTRODUCTION

Hepatic fibrosis is an inevitable pathological process in the progression of chronic liver disease to hepatic cirrhosis. Hepatic fibrosis is caused by excessive deposition of extracellular matrix (ECM), resulting from increased synthesis and decreased degradation of ECM. Clinical and experimental studies have found that liver cells, hepatic stellate cells (HSCs), Kupffer cells, and sinusoidal endothelial cells all take part in the formation of hepatic fibrosis, in which HSC plays a very important role [1]. Activation of HSC is commonly regarded as the major link in hepatic fibrosis and the main source of ECM synthesis [2]. The main characteristic of HSC activation is excessive proliferation of HSC [3]. In addition, desmin is regarded as a marker protein of HSC and α-SMA as a marker of HSC activation [4]. A foreign study has reported that in the process of HSC activation, the expression of synapsin can increase [5]. Activation of HSC can lead to excessive synthesis of collagen. MMPs and TIMPs also jointly take part in the synthesis and degradation of collagen [6]. Multiple cytokines, including TGF-β1 and PDGF, play a very important role in the proliferation and activation of HSC and the synthesis of ECM [7]. Therefore, it may be a good strategy to restrain the number of activated HSCs, decrease the synthesis and secretion of collagens I and III and of TIMPs, and promote the synthesis and secretion of MMPs.
Aiming at HSC, it has become a central issue in anti-hepatic-fibrosis research to restrain HSC proliferation, decrease the synthesis of ECM, accelerate the degradation of ECM, and even revert activated HSCs to quiescent HSCs. Traditional Chinese medicine has shown its own advantages in treating some difficult diseases. Approved by the government, compound Biejiaruangan decoction (CBJRGC) has been used as the first traditional Chinese medicine for treating hepatic fibrosis (approval document number: 1999 2-102). Clinical observations in Beijing, Shanghai, and Hubei Province showed an effective rate of 78.9% in 121 patients by the first liver biopsy and in 52 patients by the second liver biopsy. However, further study of its detailed anti-hepatic-fibrosis mechanism is still needed. On the basis of the abovementioned theory and research developments, our study, with cell culture as the technical platform, was designed to observe the influence of serum collected from rats perfused with CBJRGC on the activation and proliferation of HSC in vitro, and, using immunohistochemistry and image analysis technology, to observe its influence on the expression of desmin, α-SMA, synapsin, collagens I and III, TIMP, TGF-β1, and PDGF in HSC in vitro.

MATERIALS AND METHODS

RPMI1640 was produced by Gibco, fetal bovine serum by Hyclone, and 96-well plates by Costar. Dimethyl sulfoxide (DMSO), ethylenediaminetetraacetic acid (EDTA), 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid (HEPES), and trypsin were all products of Sigma. Rat desmin monoclonal antibody and rat α-SMA monoclonal antibody were bought from DAKO, and rat synapsin monoclonal antibody was bought from Santa Cruz. Rat collagen I and III monoclonal antibodies, rat TIMP, PDGF, and TGF-β1 monoclonal antibodies, and ABC and DAB test kits were all bought from Beijing Zhongshan Biotechnology Inc.

SD rat HSC line
The HSC line was established in our laboratory and maintained by long-term passage.

Preparation of SD rat serum [8]

Normal SD rat serum
A normal rat weighing 350 g, fasted for 12 h, was anesthetized with diethyl ether. Under sterile conditions, 10 mL of blood was obtained from the abdominal aorta and then held for 2 h at room temperature. Serum was prepared by centrifugation at 427 g for 10 min, inactivated at 56 °C for 60 min, and frozen at -60 °C.

Hepatic fibrosis model SD rat serum
An adapted Hernandez-Munoz method was used to establish the animal model of hepatic fibrosis [9]: 0.2 mL of CCl4 (diluted 1:6 in olive oil) was injected into the abdominal cavity three times each week for 7 wk. Serum preparation and preservation were the same as for the normal SD rat serum.

Drug serum
Using 3.5, 7, and 14 times the human dosage as the low, medium, and high dosage groups, respectively, CBJRGC was perfused into the rat stomach 3 times at 12-h intervals. Rats were fasted for 12 h before the third perfusion, and blood was sampled from the abdominal aorta 2 h after the third perfusion. Serum preparation and preservation were the same as for the normal SD rat serum.

Cell culture and grouping
Rat HSCs were inoculated in RPMI1640 with 100 g/L fetal bovine serum and cultivated at 37 °C in an incubator containing 50 mL/L CO2 until the logarithmic growth phase. After treatment with digestive fluid, HSCs were suspended by adding D-Hank's fluid, pelleted by centrifugation at 190 g for 5 min, and then counted.
Using RPMI1640 containing 100 g/L fetal bovine serum, HSCs were adjusted to a density of 5×10^4/mL and added to a 24-well plate containing coverslips.

RESULTS

Influence of each group serum on proliferation of HSC at different time points
HSCs just after the digestion phase showed a spherical shape under the phase-contrast microscope. After 12 h of culture, HSCs had attached to the wall and flattened into an oblate form, with obvious lipid droplets in the cytoplasm; a few cells started to show extension of the cytoplasm. After 24 h of culture, most cells showed extension of the cytoplasm, and some cells showed multipolar pseudopodia and a typical stellate form. The influence on HSC proliferation detected by the MTT method is presented in Table 1, and the influence of each group serum on markers of HSC activation at different time points is shown in Tables 2 and 3.

Influence of each group serum on collagens I, III and TIMP of HSC at different time points
The influence of each group serum, detected with the TN-8502 image analysis system, on collagens I, III and TIMP of HSC at different time points is shown in Tables 5, 6, 7 and Figures 3, 4.

Influence of each group serum on TGFβ1 and PDGF of HSC at different time points
The influence of each group serum, detected with the TN-8502 image analysis system, on TGFβ1 and PDGF of HSC at different time points is shown in Tables 8 and 9.

DISCUSSION

In past studies of traditional Chinese medicine, most in vitro experiments directly added crude extracts of the medicine to the cell environment. Because of the complicated components of traditional Chinese medicine, this approach could not effectively reflect its pharmacological role. Therefore, we adopted the serum pharmacology method in our experiment [8]. After giving the drug to the animal orally, we used the drug serum as the drug source added to the in vitro response system. This method not only reflects the biotransformation that occurs in vivo but also avoids interference from other substances, and it aids pharmacokinetic study by identifying the effective loci and the activity of components of traditional Chinese medicine. To avoid the influence of sera from different animal species on the cells, we used animals of the same species as the cultivated HSCs. Considering that the test object was cells, while the drug exerts biological reactivity in the whole organism, we gave the drug orally at an equivalent dosage to assure the maximal steady-state concentration of the drug after bioconversion in vivo. The results of our experiment indicated that this method was effective, stable, and reliable. A recent investigation has shown that proliferation and activation of HSCs are not only the central link in hepatic fibrosis but also its cytological background [10,11]. Therefore, inhibiting the proliferation and activation of HSC has important significance in the prevention and treatment of chronic liver disease and in anti-hepatic fibrosis. Cultivated in an uncoated plastic Petri dish, HSC can be spontaneously activated, thus possessing the biological characteristics of activation in vivo and becoming an ideal cell model for anti-hepatic-fibrosis studies [12]. Just as shown by the results of our experiment, with prolonged culture, HSCs showed multipolar pseudopodia and a typical stellate form. Cell proliferation and collagen synthesis were two important behaviors in the activation of HSC.
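Both behaviors were quantified here through two simple readouts: plate-reader absorbance for the MTT assay (Table 1) and image analysis of immunostaining (A value and positive-cell area). A minimal sketch of the underlying arithmetic follows, with illustrative values and an assumed intensity threshold rather than the study's data or the TN-8502 system's actual algorithm.

```python
# Minimal sketch of the two quantitative readouts, with made-up numbers:
# (1) an MTT inhibition rate from plate-reader absorbances, and
# (2) a positive-staining area fraction from a thresholded grayscale image
#     (the threshold of 120 on an 8-bit image is an assumption).
import numpy as np

def inhibition_rate(a_treated, a_control, a_blank=0.0):
    """Percent inhibition of proliferation from MTT absorbance (A) values."""
    return 100.0 * (1.0 - (np.mean(a_treated) - a_blank) /
                          (np.mean(a_control) - a_blank))

def positive_area_fraction(gray_image, threshold=120):
    """Fraction of the field occupied by positively stained pixels,
    i.e. pixels darker than the threshold on an 8-bit image."""
    return float((np.asarray(gray_image) < threshold).mean())

# Usage with illustrative values (not the study's data):
print(inhibition_rate([0.42, 0.45, 0.40], [0.81, 0.78, 0.84]))
img = np.random.randint(0, 256, size=(512, 512))
print(positive_area_fraction(img))
```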
MTT colorimetry can be employed to assess the proliferative capacity of cells. It is based on the principle that succinate dehydrogenase in the mitochondria of living cells reduces exogenous MTT to insoluble blue-purple formazan crystals that deposit in the cells. DMSO dissolves these crystals, and the absorbance, read on a microplate reader at the appropriate wavelength, indirectly reflects the number of viable cells. The results of our experiment showed no significant difference (P>0.05) among all the groups 24 h after serum was added. This indicates that 24 h of culture with serum exerted no measurable influence and that HSCs still proliferated at their original rate, suggesting that intervening in HSC proliferation takes time: the interval from receiving a cell signal to a change in HSC numbers is evidently longer than 24 h. Thus, restraining the activation of HSC at an earlier time, to hold back rapid HSC proliferation, undoubtedly has important significance in the prevention and cure of hepatic fibrosis. Our results showed that HSC proliferation in the model group at 48 h and 72 h differed significantly from the other groups, indicating that the model group serum might contain substances that promote rapid HSC proliferation. The reason may be that after 48 h and 72 h of culture, the PDGF produced by HSC in the model group increased to an effective concentration, leading to excessive HSC proliferation, whereas 24 h of culture was not enough for PDGF to reach an effective concentration. This could explain why there was no significant proliferation of HSC in the model group after 24 h of serum culture. Our experiment also showed that after 48 h and 72 h of culture, there were significant differences between the high, medium, and low dosage drug serum groups and the model group (P<0.05), and between the high and medium dosage drug serum groups and the control group (P<0.05), indicating that the different dosages of drug serum could inhibit HSC proliferation, with an especially significant effect in the high and medium dosage groups. CBJRGC serum could significantly decrease and inhibit the proliferation of HSC. Its mechanism may be related to inhibition of autocrine or paracrine secretion of PDGF and TGF-β1 [13], thus inhibiting HSC proliferation by decreasing the conversion of HSCs to α-SMA-positive cells. Changes in form and function after HSC activation can lead to increased production and decreased degradation of ECM, eventually resulting in deposition of hepatic collagens and hepatic fibrosis. Desmin can be regarded as a marker of HSC [14], and α-SMA as a marker of HSC activation [15]. α-SMA is a filamentous protein about 7 nm in diameter that exists mainly in smooth muscle cells as part of the cytoskeleton and as a functional unit of cell contraction. Under normal conditions, this protein exists mainly in smooth muscle cells and myofibroblasts, with only a very small amount in rat HSCs. In animal models of liver disease, HSC lost desmin expression and switched to the expression of α-SMA during its activation process. Thus, α-SMA can be regarded as a marker of HSC activation [16].
In the process of inducing rat hepatic fibrosis by CCl4 [17], the dynamic change in desmin-positive cells followed a single-peaked curve, increasing in number in the earlier period, reaching the peak at 12 wk, and then gradually decreasing. A foreign study has found that increased expression of synapsin could be regarded as another marker of HSC activation [18]. Our results showed that the dynamic changes in the A values of positive staining for α-SMA, desmin, and synapsin and in the area occupied by α-SMA- and synapsin-positive cells were almost synchronous. The area occupied by desmin-positive cells showed a tendency to decrease with prolonged culture. This result was consistent with the report by Li et al. that HSC lost the expression of desmin during its activation process [19], suggesting that in the process of hepatic fibrosis, α-SMA, desmin, and synapsin can all be regarded as markers of HSC activation, each with its own characteristics. The expression of desmin was significant in the earlier period of activation but weakened with prolonged activation, while the expression of α-SMA and synapsin developed with prolonged activation. It is estimated that activation and proliferation of HSC can occur synchronously. Our results also showed that after 24 h of culture, the expression of α-SMA showed no significant difference among all groups, and after 48 h of culture, the α-SMA-positive cell area was smaller than that of desmin, indicating that during the 24 h and 48 h culture periods, proliferation was the main form of HSC growth; but with prolonged culture, HSC began to be partly activated and switched to expression of α-SMA. Seventy-two hours after culture, the α-SMA-positive cell area was larger than the desmin-positive cell area, the number of activated HSCs exceeded that of quiescent HSCs, and many HSCs had altered phenotype, further indicating that the activation of HSC can be expressed synchronously as proliferation and transformation of HSC. The results of our experiment showed that CBJRGC serum could inhibit the expression of α-SMA, desmin, and synapsin in HSC and inhibit HSC activation, an effect closely related to the concentration of the drug serum. Synapsin can be regarded as a new marker of HSC activation. Its significance is to provide an important theoretical and experimental basis for neuroregulation in the process of hepatic fibrosis. So far, no such study has been reported in China. Our experiment showed for the first time that synapsin was expressed in HSC after 24 h of culture and continued to be expressed at 48 h and 72 h. Expression of synapsin thus began before 24 h of culture and was maintained at a high level. CBJRGC serum could significantly inhibit the expression of synapsin in HSC. Nevertheless, further studies are still needed on the precise timing of synapsin expression, the essential significance of its sustained increased expression, whether the nervous system is involved in regulating the stress state of the liver, and whether CBJRGC serum takes part in nervous regulation in hepatic fibrosis. Under normal conditions, the liver contains collagen types I, III, IV, V, and VI. Collagens I and III constitute the largest proportion, accounting for about 60% of total liver collagen. When hepatic fibrosis occurred, the proportion of collagens I and III might reach 95% of total liver collagen [20]. Thus, in hepatic fibrosis, the deposited ECM consists mainly of collagens I and III.
Collagen synthesis reflects the fibrogenic capacity of individual cells [21]. Our experiment showed that after 24 h of culture, there was no significant difference in collagens I and III among all groups, indicating that within 24 h of receiving the activating signal, HSC had not yet had enough time to secrete collagen. The outcome of our experiment showed that activation for longer than 24 h was needed before collagen and cytokine transcription in activated HSC could change. In further investigations, we should observe and analyze changes in collagen mRNA over this period to test whether CBJRGC serum can influence collagen mRNA. The results also suggested that inhibiting collagen at the transcriptional stage might have important significance for clinical treatment. In our experiment, collagen III began to be markedly expressed at 48 h, whereas collagen I was markedly expressed only at 72 h, showing that collagen III expression precedes that of collagen I and that collagen III accounts for most collagen expression in the early stage of hepatic fibrosis. Collagen III can therefore serve as a testing index of early hepatic fibrosis. The outcome of our experiment also showed that the high and medium dosage groups could significantly inhibit the expression of collagen III at 48 h and 72 h and the expression of collagen I at 72 h. This means that CBJRGC serum could play an anti-hepatic-fibrosis role at the earlier time when collagen was significantly expressed. It further suggested that CBJRGC serum could help quickly restore the balance between synthesis and degradation of ECM in hepatic fibrosis, thus helping to cure hepatic fibrosis. Besides, the low dosage group could only significantly inhibit the expression of collagen III at 48 h, with no significant difference from the control group at 72 h. It is estimated that the effect of CBJRGC serum in inhibiting collagen III expression is related to the serum concentration of the drug. MMPs and TIMPs mainly regulate the balance between collagen synthesis and degradation [22]. Among numerous MMPs, MMP-1 is the chief MMP decomposing collagens I and III in the liver [23]. TIMP-1 is an inhibiting factor of MMP, which plays its role by irreversibly binding to activated MMP-1. Therefore, imbalance of the MMP-1/TIMP-1 ratio plays a very important role in hepatic fibrosis. MMPs can be inhibited by many specific or non-specific inhibitors, which at present mainly include TIMPs and α2-macroglobulin. TIMP-1 is the most important inhibitor of MMP and is negatively correlated with MMP activity. TIMPs are proteins encoded by a multigene family [24]. They can irreversibly bind to activated MMPs and inhibit the degradation of ECM [25]. So far, four kinds of TIMP have been isolated from tissues and cloned [26]. TIMP-1 is a 28.5 kDa glycoprotein that mainly inhibits the activity of MMP-1 and MMP-9. As the specific inhibitor of MMP, TIMP plays a very important role in hepatic fibrosis. TIMP can inhibit MMP, which is an important reason for the specific decline in ECM degradation [27]. HSC is the main source cell of TIMP and MMP. The results of our experiment showed that after 24 h of culture, there was no significant expression of TIMP in any group, indicating that HSC had not had enough time to secrete excess TIMP.
The activation time should be longer than 24 h, and within this period, transcription of TIMP in activated HSC might be altered. In further studies, we should observe and analyze the change in TIMP mRNA within this period, so as to find whether CBJRGC serum can influence TIMP mRNA. It is also suggested that inhibiting TIMP at the transcriptional stage might have important significance for clinical treatment. In this experiment, TIMP in the model group maintained high expression at 48 h and 72 h, indicating that the model group serum contained some substance promoting high expression of TIMP. Further study of the precise characteristics of this substance is suggested. Studies have found that HSC is the key cell in hepatic fibrosis. Under the stimulation of chronic injury and inflammation, HSC can be activated from its normally quiescent state into myofibroblasts (MFB), meanwhile synthesizing and secreting excessive ECM, forming the foundation of hepatic fibrosis. Previous studies have indicated that activation and phenotype conversion of HSC are closely related to TGF-β1. TGF-β is a polypeptide molecule with hormone-like activity, produced by Kupffer cells in autocrine and paracrine fashion, and it takes part in many pathological and physiological processes [28]. TGF-β has at least 5 subtypes, but only TGF-β1, TGF-β2, and TGF-β3 are present in human tissue cells. After binding to its receptor on the membrane, TGF-β phosphorylates and activates its downstream intracytoplasmic signal transduction molecules (SMAD proteins), which subsequently enter the nucleus and regulate the transcription of related target genes [29]. TGF-β1 exhibits the major biological activity of TGF-β and is the main cytokine inducing collagen production. Through paracrine and autocrine mechanisms, TGF-β1 can initiate and maintain the activation of HSC, regulating cell proliferation and accelerating the transcription of collagen and the proliferation of ECM [30]. The results of our experiment showed that after 24 h of culture, there was no significant expression of TGF-β1 among all groups, indicating that 24 h after receiving the activating signal was not long enough for HSC to secrete TGF-β1. It is suggested that in further studies we should observe and analyze the change in TGF-β1 mRNA within this period, so as to find whether CBJRGC serum can influence TGF-β1 mRNA. Our results also showed that after 48 h and 72 h of culture, TGF-β1 in the model group maintained high expression, suggesting that it could continuously stimulate activated HSC to produce collagen and accelerate hepatic fibrosis. There might be some substance in the model group serum promoting high expression of TGF-β1; study of the precise characteristics of this substance is still needed. The results also showed that the high and medium dosage groups could markedly inhibit the expression of TGF-β1 at 48 h and 72 h, whereas the low dosage group did not obviously inhibit it at these time points, suggesting that CBJRGC serum can inhibit the expression of TGF-β1 and that its effectiveness is related to the concentration of the drug serum. PDGF is a mitogen that can promote the activation and proliferation of HSC; PI3-K has been found to be an important pathway in its transmembrane signal transduction [31]. The outcome of our experiment showed that after 24 h of culture, the A value and the PDGF-positive cell area were not completely consistent.
At 24 h, the A value of PDGF in the model group was significantly higher than in the other groups, but the positive cell area showed no significant difference from the other groups. Although PDGF in the model group reached significant expression, it might not have come entirely from secretion by HSC and might include PDGF originally present in the model serum. While PDGF bound to the PDGF receptor on HSC and accelerated HSC proliferation, the proportion of HSCs secreting PDGF did not significantly increase. The outcome of our experiment showed that the high, medium, and low dosage groups could all markedly inhibit the secretion of PDGF by HSC. In summary, we suggest that further studies on the anti-hepatic-fibrosis mechanism of CBJRGC serum should focus on the mRNA expression of TIMP-1, collagens I and III, and TGF-β1, and on signal transduction within the cell.
Patient Perspectives on the Challenges and Responsibilities of Living With Chronic Inflammatory Diseases: Qualitative Study

Background
Collectively, chronic inflammatory diseases take a great toll on individuals and society in terms of participation restrictions, quality of life, and economic costs. Although prior qualitative studies have reported patients' experiences and challenges living with specific diseases, few have compared the consequences of disease management in daily life across different types of inflammatory diseases in studies led by patient partners.

Objective
The aim of this study was to identify the significant consequences of inflammatory arthritis, psoriasis, and inflammatory bowel diseases on daily life and explore commonalities across diseases.

Methods
A cross-sectional Web-based survey was designed by patient research partners and distributed by patient awareness organizations via their social media channels and by sharing a link in a newspaper story. One open-ended item asked about burdens and responsibilities experienced in daily life. Informed by narrative traditions in qualitative health research, we applied a thematic content analysis to participants' written accounts in response to this item. This is an example of a study conceived, conducted, and interpreted with patients as research partners.

Results
A total of 636 Canadians, with a median age band of 55-64 years, submitted surveys, and 80% of the respondents were women. Moreover, 540 participants provided written substantive responses to the open-ended item. Overall, 4 main narratives were generated: (1) daily life disrupted; (2) socioeconomic vulnerabilities; (3) stresses around visible, invisible, and hiding disabilities; and (4) actions aimed at staying positive. Ways in which participants experienced social stigma, pain and fatigue, balancing responsibilities, and worries about the future appeared throughout all 4 narratives.

Conclusions
People living with chronic inflammatory diseases affecting joints, skin, and the digestive tract report important gaps between health, social, and economic support systems that create barriers to finding the services they need to sustain their health. Regardless of diagnosis, they report similar experiences navigating the consequences of lifelong conditions, which have implications for policy makers. There is a need for outcome measures in research and service delivery to address patient priorities and for programs to fill gaps created by the artificial administrative separation of health services, social services, and income assistance.

Introduction

Background
Patient engagement in health research has been building over the last two decades, with examples of effective collaborations between patients and researchers being reported with increasing frequency. The benefits of patient engagement across the research process include identifying research questions of greater relevance to patients' concerns, improved participant enrollment and retention rates, and knowledge translation strategies that are more readily understood or adopted by community members [1]. Benefits to patients involved as investigators or research partners include a sense of empowerment, confidence, and contribution to the greater good that arises from meaningful engagement in the research process from inception to dissemination [2]. This paper describes findings from a project led by patient research partners.
It describes the consequences of inflammatory arthritis, psoriasis, and inflammatory bowel diseases on daily life and explores commonalities across diseases. In particular, this paper focuses on inflammatory types of arthritis (such as rheumatoid arthritis, ankylosing spondylitis, and psoriatic arthritis), psoriasis, Crohn disease, and ulcerative colitis. All of these are systemic, autoimmune conditions [3][4][5]. Their clinical presentation ranges from mild to severe, and they are characterized as episodic, meaning people live with the uncertainty of exacerbations and remissions either from the natural course of the disease or its medical management [3][4][5]. People who have 1 disease, for example, psoriasis, are at higher risk of concurrently having one of the other diseases, for example, arthritis or Crohn disease [4]. Studies on the impact of living with these inflammatory conditions show disruption to normal daily activities [6][7][8][9][10], reduced productivity [9][10][11], and high personal costs because of the loss of ability to work and medical and other costs associated with health maintenance, which threaten financial security [6,11]. Among women with early rheumatoid arthritis, McDonald et al found that the uncertainty of having an episodic illness with fluctuating symptoms was particularly problematic as women experienced good days (able to engage in typical routines and daily activities), bad days (experiencing limitations in typical routines and daily activities), and worse days (often halting usual activities because of pain, fatigue, or recovering from symptom flares) [12]. Adapting to activity disruption threatened self-identity and sense of self [13]. Similar experiences have been reported by adults living with established inflammatory bowel disease [14,15] and psoriasis [13,16,17]. For example, among men and women with inflammatory bowel disease, unpredictable symptoms restricted social activities, employment, travel, and shopping, presenting enormous challenges to leading a normal life or maintaining the appearance of normality to others [14]. A survey of Canadians with Crohn disease or ulcerative colitis reported participation restrictions in leisure activities and interpersonal relationships to be the most frequently reported consequences of the disease, at 64% and 52%, respectively [6].

Objectives
Given the frequency of activity disruptions reported in these (and other) qualitative studies, which by nature focus on relatively small numbers of participants, we recognized an opportunity to draw connections across disease groups with a larger number of participants. Such studies are valuable to patients because they corroborate their experiences; show they are not alone; and provide strategies for living well, interacting with health professionals, and advocating for resources. They are valuable to professionals for enhanced understanding of the impact of living with different diseases, placing patient experiences in context, and ultimately help improve patient-provider communication for more compassionate care [18]. By inviting a large number of people to respond to an open-ended question typical of qualitative research, this study potentially verifies and extends the transferability of findings from small studies. The study examines similarities and differences across respondents with a wide range of inflammatory diseases.
Its specific purpose is to describe the consequences of inflammatory arthritis, psoriasis, and inflammatory bowel diseases on daily life and explore commonalities across diseases.

Methods

Design
A cross-sectional descriptive design was used with a Web-based survey. This paper focuses on written text responses using qualitative content and narrative analysis. Ethical approval was obtained from the behavioral ethics review board of the researchers' university.

Study Context and Role of Patient Partners
It has been recommended that patient and public involvement in research be explicitly reported [19]. Each of the 4 patient research partners (CK, AS, GA, and MA) is affiliated with a national public awareness or charitable organization focused on education, information sharing, and encouraging research. Two are members of organizations focused on arthritis and joint diseases, 1 works with an organization for gastrointestinal and inflammatory bowel diseases, and 1 with an organization for psoriasis and inflammatory skin diseases. They volunteered as consumer and patient partners along with researchers to develop a grant application in response to a specific call for proposals to fund research teams with a focus on chronic inflammatory diseases. The bid was successful, creating PRECISION, a pan-Canadian team of over 30 researchers, including patients, working on a series of interconnected studies of the complications and consequences of these diseases and testing novel health services aimed at preventing or mediating those complications, priorities identified through patient-researcher collaboration [20]. This context is important because this study is a direct consequence of the way patients chose to inform PRECISION's objectives. The patient partners designed a survey to gather data to strengthen the patient perspective component of the grant application. When the volume and depth of data received was greater than anticipated, a systematic data analysis plan was developed in collaboration with 4 PRECISION researchers to give voice to the concerns raised by survey respondents. The role of the patient partners in this paper thus included survey design and implementation, participant recruitment, assistance throughout data analysis and interpretation, and review of manuscript drafts.

Participants
The patient partners, through the social media and e-newsletters of their 4 organizations, distributed the survey link to their subscribers nationwide. The patient partners also connected with a newspaper reporter who wrote a brief story that included the survey link in the print version of a metropolitan daily newspaper and the reporter's blog. There were no explicit inclusion criteria other than the survey notice that specifically invited people with inflammatory joint, skin, or bowel diseases to have a say in research and complete the survey anonymously. Consent was implied by submitting a completed survey. The survey was open for 3 weeks in the summer of 2013 and was hosted online on SurveyMonkey.

Survey Content
Patient research partners designed a Web-based survey to identify patient priorities for research to help justify the objectives of PRECISION. The patient research partners invited all team members to contribute items for inclusion in the survey and then vetted a large number of potential items to reduce the total number and ensure clarity of the retained items.
In addition to basic demographic information (eg, diagnosis, sex, age group in 10-year age bands, and urban vs rural place of residence), the survey contained closed-response and open-ended items to gather patient perspectives on medication use, knowledge about potential disease complications, treatments and interventions, lifestyle habits (eg, physical activity), and experiences living with inflammatory diseases. The responses to closed-response items helped justify the grant application with respect to needs around specific diseases, complications, medications, and physical activity [21]. In this paper, we focus on text responses to the following open-ended question: what are some of the burdens and responsibilities you face in managing or living well with your illness? There was no word limit imposed on stories written in response to this item, nor was it required that participants enter any text. Data Analysis Responses were downloaded verbatim into an Excel file for tabulation (keeping text responses linked to demographic descriptors such as age and diagnosis) and analysis. The burdens and responsibilities question generated numerous stories and commentary. Tallying was avoided because the spontaneous responses to the open-ended question meant that some respondents introduced new topics that, if tallied, would not represent the proportion of respondents who shared that view; counting was not found to yield specific or meaningful data [22]. Accordingly, we drew upon narrative traditions, which privilege personal accounts and experiences, to conduct a thematic content analysis of these text responses [23,24]. We sought to understand what people experienced rather than how they described it, making thematic content analysis more appropriate than other narrative models for this dataset [23]. Thematic content analysis is suitable for participatory types of research because it is generally understandable by all audiences, highlights similarities and differences within the dataset, and allows for socially relevant interpretations to inform policy development [25]. Trustworthiness depends in part on the description of the analytical process. We read and reread all responses to become familiar with the data and then identified common and repeated elements to broadly classify the issues and topics of concern to respondents. Our analysis began with open coding of the data in which we flagged phrases of interest from the responses. The initial codes were then clustered into categories based on recurring elements and common subjects. These categories were then analyzed for the character of the responses they contained and their narrative context. Categories were further clustered to derive tentative themes. Themes were then agreed upon by the team through discussion and review of written descriptions with supporting quotes. The final analysis was represented by 4 narrative themes. Validating the Analysis The preliminary content analysis was developed by 2 researchers (GGM and CLB) with qualitative research experience, who brought different lenses to the dataset (one is male, early career, and educated in the social sciences; the other is female, a health professional, and a senior researcher).
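Because the tabulation and multi-diagnosis arithmetic are described only in prose, a minimal sketch may make them concrete. It is illustrative only, not the authors' workflow: the file name, the column names, and the list-valued diagnosis column are hypothetical, and the share computation anticipates the demographics reported below, where proportions sum beyond 100% because respondents could report several conditions.

```python
# Minimal sketch (not the authors' code) of the tabulation step: keep each
# free-text answer linked to its demographic descriptors, then compute
# per-condition shares against the full sample.
import pandas as pd

df = pd.read_excel("survey.xlsx")  # hypothetical export of the responses

# Keep demographics attached to every narrative so quotes can later be
# labeled with sex, age band, province, and diagnosis.
narratives = df.dropna(subset=["burdens_text"])
print(f"{len(narratives)} of {len(df)} respondents answered the question")

# "diagnoses" is assumed to hold a list per respondent (multi-label), so
# each condition's share is computed independently against the full sample
# and the shares need not sum to 100%.
for condition in ("joint", "skin", "bowel"):
    share = df["diagnoses"].apply(lambda d: condition in d).mean()
    print(f"{condition}: {share:.1%}")
```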
The preliminary topics and supporting evidence (data extractions) were discussed by all coauthors at a team meeting; draft categories were then developed and circulated by email, and additional comments and interpretations were gathered through sequential iterations appraising data and interpretations. As the patient partners were representatives of organizations each dedicated to different disease groups, their feedback served as a form of member checking as to whether findings resonated with experiences and concerns of their respective groups. The 4 patient partners and 4 researchers thus co-constructed narratives reflecting the common experiences within the dataset and agreed upon quotes to represent each narrative. Collectively, the 8 collaborators bring perspectives from men and women, young adult to late middle-aged, and health care, research, or lived experience across inflammatory skin, joint, and bowel diseases, experiences that contribute to the trustworthiness of interpretations. As a final step to enhance transparency and trustworthiness, the analytical process and findings were reviewed with a peer experienced in qualitative methodology and health research. Demographics We received 636 unique surveys. Respondents' age varied from 18-24 years to 85-94 years (median age band 55-64 years), and 80.0% (509/636) of the respondents were women, which reflects the higher prevalence of women affected by most of the diseases in this study. The majority, 71.1% (452/636), were from British Columbia (the location of the newspaper with the survey link), with additional respondents from all other Canadian provinces and 2 territories. Most (91%) lived in a city with at least one hospital. Moreover, 42.9% (273/636) reported multiple health conditions, often 2 of the 3 inflammatory disease categories included in PRECISION, for example, Crohn disease and arthritis. Consequently, the following proportions sum beyond 100%: 86.0% (547/636) reported inflammatory joint diseases, 25.9% (165/636) reported psoriasis, and 18.1% (115/636) reported inflammatory bowel diseases. Of the 636 respondents, 540 (85.0%) responded to the burdens and responsibilities question. These varied in length from a single phrase (eg, "maintaining mobility and managing pain when I have flare-ups") to lengthy accounts of concerns for themselves and their families, descriptions of living with their disease or diseases in daily life, and efforts to take charge of their unique situation. Overall, responses outlined the ways in which the health care system and society in general are both helping and failing this population. In all, 4 key narratives were crafted to represent the substance of the large number of text entries: daily life disrupted; visible, invisible, and hidden disability; socioeconomic vulnerability; and staying positive. Verbatim data show considerable overlap among the themes; therefore, some quotes easily support more than 1 key narrative. Examples of social stigma, pain and fatigue, balancing work and family responsibilities, and worries about the future contributed to all 4 narratives. For example, experiencing symptoms such as pain and fatigue was a precursor to the first 3 narratives related to disruptions in daily life, disability perceptions, and social vulnerabilities, and coping with symptoms was apparent in staying positive, the fourth narrative. Each narrative is described below; alphanumeric labels link to quotations in Tables 1-3.
Each quote references the sex, age band, province or territory of residence (using postal abbreviation), and reported diagnosis. Daily Life Disrupted Respondents told stories characterized by disruptions to tasks, activities, and roles, ranging from inconveniences to major shifts in how they participated in life. The most frequently cited antecedent to disrupted activities was persistent and sometimes unrelenting pain and fatigue, reported by more than half of the sample. Managing symptoms necessitated setting priorities that tended to place obligatory work or household responsibilities ahead of equally important but more discretionary activities such as maintaining social connections or enjoyable leisure activities (Table 1: A1 and A2). Although employment was often stated as a high priority, many respondents struggled to sustain participation in work. Repeatedly, respondents outlined difficulties fulfilling the roles that others expected of them or shared serious concerns for the future if they were unable to continue work or take care of their own health (Table 1: A3 and A4). Descriptions of disrupted daily routines and the need for planning ahead were more often reported by those with joint or bowel diseases than those with skin conditions. Disruption was a prominent narrative in social situations, and some found it very stigmatizing to "say no to social activities" and "curtail my hobbies and be vigilant of travel plans" to manage symptoms. Respondents experienced adversity in their social environments, feeling forced to adapt to circumstances and relationships that did not give credence to their illness experience (Table 1: A2). They reported concerns about being inadequate as friends, partners, or family members, and some expressed feeling inferior to their peers at work. Respondents with inflammatory bowel diseases reported that constant stress over whether they would be able to access a bathroom facility at a moment's notice curtailed social interaction (Table 1: B1 and B2). Visible, Invisible, and Hiding Disability Collectively, descriptions debated the extent to which these conditions are or are not visible, how that affects interpersonal relationships, and whether or not there is a need to consciously hide disability. A clear cluster of responses related to appearing sick versus well, of how "looking well does not always mean feeling well" and how this could be burdensome when trying to "give your family a break from your disease. Relationships take a beating." Visible disease characteristics such as psoriatic plaques affected relationships (Table 2: C1 and C2). Although some respondents spoke about visible characteristics of their diseases, there were more descriptions of how invisible disability (appearing normal) led to individuals feeling marginalized (Table 2: D1) and wanting to explain, increase awareness, or find a way to foster understanding, assistance, or universal accessibility. Socioeconomic Vulnerabilities Respondents explained how they simply did not have the energy to concurrently maintain employment and family responsibilities and attend to their own health, which resulted in financial strain (Table 3: F1, F2, and G1). They spoke of "falling through the cracks" between health care and social systems because eligibility requirements for programs denied them access.
They described experiences where the health system or government priorities and budget constraints shifted definitions of disability in ways that excluded them from accessing the pensions or resources they needed or relied upon in the past (Table 3: G2 and G3). Some respondents reported difficulty in being taken seriously by their doctors (Table 3: K1 and K3) and consequently suffered setbacks in their treatment and health or expended time and effort coordinating and seeking out proper health care (Table 3: K2 and K4). For those living alone, their living arrangement was frequently cited as exacerbating the negative effects that their disease or diseases have on their quality of life (Table 3: F2 and K4). Repeatedly, respondents explained how their disease made them economically vulnerable because of employment insecurity or loss and the high cost of treatment and medication. It was difficult to buy items such as healthy food or services not funded by health plans to help them prevent complications (Table 3: H1, H2, and H3). The high cost of biologics as well as their unpredictable and potentially serious side effects or worries about long-term effectiveness were a burden common to many respondents regardless of diagnosis (Table 3: J1 and J2). The pressure and stress of dealing with health and social systems fostered a fear of the future and what it might hold (Table 3: J2). Staying Positive Although the above 3 themes speak about undesirable consequences of inflammatory disease, there is a contrasting narrative arising from these written entries that tells a more positive story of resilience and adaptation. Collaborative care, in which health professionals and patients worked together to ensure treatment both parties found appropriate, was 1 example. Some respondents shared strategies they found effective. Principal Findings Disruptions to daily life, systemic vulnerability, coping with (in)visible disability, and staying positive are interconnected aspects of living with chronic inflammatory diseases. Written passages from Canadians living with inflammatory joint, skin, or bowel diseases support 4 intertwined narratives, none of which exists in isolation, illustrating challenges encountered on a regular basis, regardless of diagnosis. The reasons for disruptions differed across diseases and individual experiences, but the overall consequences were quite similar. For example, the difficulty of maintaining steady employment and income threatens financial stability; consequently, one is less able to afford the goods and services that, alongside medical care, support a healthy lifestyle and that make the difference between inflammatory disease being a manageable condition and a miserable one. When daily life is disrupted, the relationships that hold people's lives together begin to unravel, whether it is a relationship with one's employer who sees inflammatory disease as a liability or one's coworkers, friends, or family who do not understand the burdens imposed by the disease. Many respondents who had encountered or anticipated a lack of understanding or compassion from those around them described hiding their disability as essential to supporting a positive self-identity. Managing diseases, relationships, and life roles was a balancing act, consistent with prior smaller but more in-depth studies [12,14,15]. Thus, this survey of a large number of patients confirms the experiences described in prior research.
Some respondents regarded the responsibility to maintain a positive attitude while coping with chronic pain and disability as an ongoing mental and emotional challenge. However, this was not a universal experience because other respondents appeared to have mastered a positive perspective. They dismissed disease-related challenges as part of life and focused on things that mattered to them, such as spending time with family and friends and enjoying activities, regardless of their health conditions. As the survey item used the phrase "burdens and responsibilities," it solicited responses regarding difficult experiences; however, the small number of respondents who spontaneously presented a positive narrative instead was nonetheless noteworthy. What is unknown, given the limitations of a single, written submission from each participant, is the extent to which a positive perspective can be sustained by the individual's resources such as access to health care, economic security, the presence of strong social networks, or responsibilities like caring for others, all of which contribute to health disparities. It is also possible that these descriptions of resilience, like the inflammatory diseases themselves, are episodic or reflect a stage of adaptation to living with a long-term condition [26]. On the basis of the findings of this study, those with highly positive descriptions credited respectful, collaborative relationships with health care providers and understanding family, friends, and employers with supporting their outlook on life. The findings suggest that many respondents' needs are not well served by a system that isolates each individual problem to the exclusion of seeing the bigger picture. This bolsters evidence for a biopsychosocial approach that integrates the social experiences of patients with the psychological and physical impacts of their disease or diseases. Finding solutions to the consequences of long-term illness requires a patient-led research agenda because, as Rose argues, public and patient engagements are forms of civic participation and citizenship that work toward the democratization of science [27]. Patient engagement in research is an avenue for patients' concerns and priorities to be represented, and by extension, better addressed in health and social sectors. This confirmatory study with 540 participants shows that many health needs are unmet from the patient perspective, explained in part by lack of attention to social determinants of health. That patients seek symptom relief, strategies to support daily life, a functioning social safety net, and empathic social support and health services is not new, but the repetition across multiple patient experiences indicates these important and long-standing issues have yet to be resolved. This suggests one role for these findings is to inform the system, policy, and service delivery change needed to resolve these issues. Examples for engaging patients in research are widely available [28][29][30], and our experience had both strengths and room for improvement. Researchers are generally motivated to try public engagement because they feel it will increase the relevance of their findings, whereas patients may be motivated by the desire for more user-oriented services [18,28].
A moral rationale for patient-partnered research is that it honors and respects the patients' voice, supports participation, minimizes occupational disruption, and advances a role for patient organizations in public education of the need for societal supports, large and small [29]. Moving forward, a measure of patient engagement in research that can serve as a guide for assessing the quality and depth of patient engagement in a given project may be useful and lead to more user-oriented research [30][31][32]. Strengths and Limitations The large sample in this study was a major strength as it ensured that all topics relevant to the study populations were uncovered. There are 2 key limitations. First, the survey was originally designed to inform research priorities and questions and not as an original research study; thus, items were neither standardized nor pilot tested. Second, the single open-ended question is a minimalist form of data elicitation, and although this paper presents a qualitative analysis, it was not a prospectively designed qualitative study. Although it was not possible to probe further (as in other forms of qualitative inquiry) with this mode of collecting written narratives, we had narrative texts from over 500 Canadians. Typical qualitative research involves theoretically informed designs with in-depth descriptions from a small number of participants. What was lost in depth is counterbalanced by breadth, enhancing transferability to Canadians with similar diagnoses. We believe that this study is a valuable contribution to inflammatory disease research, despite the methodological limitations of qualitative analysis of open-ended survey questions. Through its rigorous self-awareness of the limitations of its data, relevance in identifying cross-cutting issues from other studies, and engagement with patient partners, this study meets the criteria set out by LaDonna et al that mark it as an exception to the general weakness of such methodological designs [33]. Prior studies of living with chronic inflammatory diseases have eloquently illustrated the burden and responsibility within a disease group such as arthritis, inflammatory bowel disease, or psoriasis [7,14,17]; our survey extends those findings across a large number of people and inflammatory diseases. The survey format allowed respondents anonymity and freedom to speak their thoughts, in contrast to the more personal interaction of a research interview. An advantage of this approach may be a lower likelihood of social desirability shaping responses, that is, of respondents telling the researcher what they believe the researcher wants or expects to hear. The limitation, however, of having to take responses at face value without more probing means that some clarity of meaning may be lost. As a survey administered "by patients for patients," a platform was provided for critical input from respondents that may otherwise be elusive in more structured quantitative and qualitative studies alike. Public engagement in research happens most often at the stage when researchers need patient input to help identify a relevant research question [34]. Although this was the case with our study, patient partners remained engaged throughout the research process, beyond the initial phase when it is advantageous for securing funding.
We consider it a strength of this analysis that it was undertaken with respect to the values of patient and public engagement outlined by Gradinger et al, namely, a concern for the ethical, political, and normative values as well as for the process-based values such as respect, partnership, and equality [35]. When initiated, our survey was intended to demonstrate to the funding agency that patients were actively engaged in the proposal from inception. However, the insight gained from the survey not only helped develop a proposal to better understand the medical complications of inflammatory diseases but also generated substantial data on the social and emotional consequences that are integrally tied to the provision of health care services and the patient-provider relationship. Conclusions Analysis of written responses to a survey created by patients for patients living with chronic inflammatory diseases shows many common experiences regardless of diagnosis, including disruptions to daily life and socioeconomic vulnerabilities that create and contribute to worries about the future. The issues raised by this paper concern the interrelatedness of health, social, and economic support systems and the gaps between them that create barriers to finding and accessing the services people with inflammatory diseases need to maintain their health. However, respondents also describe examples of patient-provider partnerships and social systems that contribute to personal resilience and capacity to participate in life. This paper brings together the narratives of a large sample of patients to emphasize commonalities in the experiences of inflammatory disease patients, who are often analyzed in the isolation of their specific diseases rather than as a broad category. It illustrates a meaningful collaboration between patients and researchers that suggests a patient-led research agenda in chronic inflammatory diseases would foreground the role of the social determinants of health in shaping disease outcomes. Such findings should inform policy and service delivery through system change.
No Evidence of XMRV or Related Retroviruses in a London HIV-1-Positive Patient Cohort Background Several studies have implicated a recently discovered gammaretrovirus, XMRV (Xenotropic murine leukaemia virus-related virus), in chronic fatigue syndrome and prostate cancer, though whether as causative agent or opportunistic infection is unclear. It has also been suggested that the virus can be found circulating amongst the general population. The discovery has been controversial, with conflicting results from attempts to reproduce the original studies. Methodology/Principal Findings We extracted peripheral blood DNA from a cohort of 540 HIV-1-positive patients (approximately 20% of whom have never been on anti-retroviral treatment) and determined the presence of XMRV and related viruses using TaqMan PCR. While we were able to amplify as few as 5 copies of positive control DNA, we did not find any positive samples in the patient cohort. Conclusions/Significance In view of these negative findings in this highly susceptible group, we conclude that it is unlikely that XMRV or related viruses are circulating at a significant level, if at all, in HIV-1-positive patients in London or in the general population. Here we set out to assess the prevalence of X-MLVs, including XMRV, in an HIV-1-positive patient cohort in London. HIV-1-positive patients were investigated because those who have been infected by a sexual route, by intravenous drug use, by perinatal infection or iatrogenically are likely to have been at greater risk than the general population of other viral infections spread by similar routes (e.g. HBV, HCV, HTLV) [18][19][20][21][22]. Although no definitive route of infection has so far been found for XMRV, all four known human retroviruses, HIV-1, HIV-2, HTLV-1 and HTLV-2, share the same routes of transmission, namely the transfer of blood or other body fluids. We therefore hypothesised that if XMRV or related viruses were circulating in London they would be likely to be detected in HIV patients. We found no evidence for X-MLV or XMRV infection in 540 DNA samples purified from peripheral blood leukocytes of HIV-1-infected individuals. Ethics Statement The University College London Research Ethics Committee specifically exempted this study from review because it was an assay development study, and waived the need for consent because the patient material used was fully anonymised. Collection and screening of HIV-1-positive samples Samples were collected from consecutive patients attending the Mortimer Market Centre HIV service over a period of two months and were anonymised before processing. Patients were 4.5:1 male:female, age range 15-85 yrs, median age 42 yrs, <1% intravenous drug users, and 15% were born outside the UK. 80% of patients were on highly active anti-retroviral therapy (HAART); 94% of those treated had a viral load <50 copies/ml. Approximately 8 ml blood was collected into a BD Vacutainer containing EDTA and stored at 4°C until processing. Genomic DNA from the buffy coat fraction was extracted using the QIAamp DNA kit (Qiagen) and eluted in 60 µl. TaqMan Polymerase Chain Reaction (TaqMan PCR) TaqMan PCR of genomic DNA using primer sets 1 and 2 was performed as described (Figure 1, Table 1) [3,12]. Sample levels were 5 µl per PCR reaction for controls, and ~1000 ng DNA (usually 5 µl) per PCR reaction for patient samples.
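To make the screening arithmetic concrete, the sketch below (our own back-of-envelope calculation, not taken from the paper's methods) estimates how much of each patient's extracted DNA enters a single reaction, given the stated 60 µl eluate and the ~5 µl input per reaction.

```python
# Back-of-envelope sketch: fraction of each patient's QIAamp eluate
# screened in a single TaqMan reaction (volumes stated in the methods above).
ELUTION_UL = 60.0  # elution volume of the buffy-coat DNA extract
INPUT_UL = 5.0     # typical sample volume per PCR reaction

fraction = INPUT_UL / ELUTION_UL
print(f"~{fraction:.1%} of the DNA from one ~8 ml blood draw per reaction")
# -> ~8.3%; each reaction samples only part of the extract, which is why
# the input mass in genome equivalents matters for sensitivity.
```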
Positive controls for the TaqMan PCR were either a synthetic plasmid encoding the target XMRV-int sequence for primer set 1 or Balb/c mouse DNA (Sigma D4416) for primer set 2 (X-MLV-gag). Cycling conditions were 95°C for 15 secs and annealing/extension at 60°C for 1 min after an initial denaturation of 10 min. The HIV-1-positive samples were checked for PCR inhibitors by amplification of GAPDH as previously described [23]. No evidence for XMRV or X-MLV sequences in leukocyte DNA from a cohort of HIV-1-positive individuals In order to assess the claim that XMRV infection is common in the human population we screened leukocyte DNA purified from anonymised blood samples of 540 HIV-1-positive patients visiting the Mortimer Market HIV service. Approximately 20% of this cohort (~108 patients) had not been treated with any anti-retroviral therapy. We performed TaqMan PCR using previously described primer sets 1 (XMRV-int) and 2 (X-MLV-gag) (Fig. 1, Table 1) [3,12]. Primer set 1 detects XMRV but was designed to discriminate against related X-MLVs, whereas primer set 2 readily amplifies diverse X-MLV sequences present in the mouse genome [3,12]. The positive control for primer set 1 was a plasmid containing the XMRV integrase target sequence and the positive control for primer set 2 was Balb/c mouse genomic DNA. We were able to detect (at the 50% probability level) as few as 5 copies of the XMRV-int plasmid and 0.2 pg (1/20th of a genome) of Balb/c DNA, respectively (Figs. 2A and 2B). To assess the sensitivity of the PCRs when detecting XMRV integrated into genomic DNA, we extracted DNA from 22Rv1 cells. This cell line contains around 10 copies of the XMRV provirus [24]. Using primer sets 1 and 2, it was possible to reliably amplify 1 cell equivalent of DNA. To investigate the sensitivity of the assay through the entire extraction and amplification process, 0, 10, 50, 250, and 1000 22Rv1 cells were mixed with leukocytes taken from HIV-1-positive patients, blinded to the operator, then extracted using an identical protocol to the cohort samples and amplified using primer set 2 (Table 2). All cycle threshold (Ct) values were in the linear range of the assay (<40). Absolute values were not calculated, as we cannot be sure that XMRV/X-MLV sequences in the 22Rv1 cells exactly match the primer sequences. However, it was possible to detect ten 22Rv1 cells added to the buffy coat fraction extracted from 8 ml blood at Ct 35 (equivalent to approximately one 22Rv1 cell per 3 million white blood cells) (Table 2). No signal was obtained from the samples that did not have any 22Rv1 cells added. In the HIV patient screen, the quantity of leukocyte DNA per patient per PCR reaction averaged 362 ng (IQR 188-468 ng, 54-135 × 10³ genomes). We did not detect positive PCR signals from any of the patient DNA samples using either primer set 1 or 2, indicating that neither XMRV nor any other X-MLV amplifiable with these primers was detectable in these samples. All samples were positive for GAPDH by TaqMan PCR at expected levels, showing that no PCR inhibitors were present. Discussion In order to establish whether XMRV infects the human population and whether it is associated with human disease it is extremely important to be able to detect and quantitate XMRV specifically and sensitively. The majority of screens carried out so far have used highly sensitive nested-PCR protocols [1,2,4,7,8,10,11,[13][14][15]17].
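As a rough check on the sensitivity figures above, the sketch below works through what a 50% detection probability at 5 template copies implies, and reproduces the nanogram-to-genome-equivalent conversion. The independence assumption and the ~3.5 pg-per-genome constant are ours (the latter is simply the value implied by the paper's own IQR-to-genome numbers), so this is an illustration, not the authors' calculation.

```python
import math

# (1) "5 copies detected at the 50% probability level": if each template
# copy independently yields a detectable amplification with probability p,
# then 1 - (1 - p)^5 = 0.5.
p = 1.0 - 0.5 ** (1.0 / 5.0)
print(f"implied per-copy detection probability ~ {p:.2f}")  # ~0.13

# Poisson sampling alone (mean 5 copies per reaction) would put at least
# one template in ~99% of reactions, so the 50% figure mostly reflects
# amplification efficiency at low input, not template sampling.
print(f"P(>=1 copy present | mean 5): {1.0 - math.exp(-5):.3f}")  # 0.993

# (2) DNA mass to genome equivalents at ~3.5 pg per genome equivalent.
PG_PER_GENOME = 3.5
for ng in (188, 362, 468):  # IQR bounds and mean DNA input per reaction
    print(f"{ng} ng ~ {ng * 1000 / PG_PER_GENOME / 1000:.0f} x 10^3 genomes")
# -> 54, 103, 134; consistent with the reported 54-135 x 10^3 range.
```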
However, these sensitive nested protocols are prone to false-positives from contamination [14,25], which can come from reagents, from X-MLVs growing as contaminants in human tumour cell lines, from amplicon contamination or from positive controls [26][27][28]. In order to minimise the risk of contamination in this study, real-time PCR was used instead of nested PCR so that amplicons were not routinely exposed to the laboratory environment during the procedure. No 22Rv1 cells were grown in the laboratory until all screens were completed. We could not detect XMRV or any other X-MLV sequences in HIV-1-positive patients using TaqMan PCR, suggesting that XMRV and related viruses are either entirely absent or at least extremely uncommon in this population cohort. There are several possible explanations for the discrepancy between this and some previous studies, including that XMRV does not in fact establish infection in human peripheral blood at detectable levels, or that the geographic differences between the cohort studied here and elsewhere are critical. Alternatively, the possibility that the positive findings reported by others were due to occult laboratory contamination should be seriously considered [29]. It has been shown that some anti-retroviral therapies can suppress XMRV infection [30,31], but as approximately 20% of our cohort were untreated, this would mean that ~108 patients were studied whose samples should have contained viral loads unsuppressed by any drugs that potentially interfere with XMRV (or X-MLV) replication. Our negative findings are entirely consistent with several other studies that have failed to detect any trace of XMRV infection in HIV-positive patients [14][15][16][17]. Henrich and colleagues [14] tested 43 HIV-infected patients (50% untreated) from Boston, Massachusetts, using nested XMRV PCR with two different sets of primers and reported no positives. Barnes et al [15] failed to detect any XMRV in 230 HIV-1 patients from Switzerland and the United Kingdom using PCRs targeting XMRV gag or env sequences; 101 of the patients tested were not receiving antiretroviral drugs. Similarly, Kunstman et al [16], using real-time PCR, detected no XMRV sequences in the blood cells of 562 HIV-infected men enrolled in the Chicago component of the Multicenter AIDS Cohort Study. Finally, Cornelissen and colleagues [17] failed to detect any XMRV by nested PCR in 93 seminal plasma samples from 54 HIV-1-infected men living in the Netherlands. Other reported studies have focussed on detection of XMRV in patient groups that are different from those described here, for example, patients with chronic fatigue syndrome [2,[7][8][9] or prostate cancer [1,6,10,11], or to assess whether XMRV could be responsible for conditions of unknown aetiology [12][13][14][32]. At the current time there is no consensus on definitive strategies for testing for XMRV or X-MLVs, and it remains unclear why the results of published studies differ so widely in their reported prevalence of XMRV in patient populations and healthy controls, although several independent studies have demonstrated how PCR contamination from mouse DNA or DNA from human cell lines infected with xenotropic MLVs might explain XMRV detection [26][27][28][29]. Other possibilities include differences in the patient groups studied, for example the cohort selection criteria as well as geographic factors. There are also differences in the techniques used to detect the virus.
Here, we used qPCR in order to rapidly screen a large patient group with a high degree of sensitivity. The use of primer set 2, which is not specific to XMRV and targets diverse X-MLV gag sequences, allowed us to detect viruses closely related to XMRV in addition to XMRV itself. By assaying DNA rather than RNA we sought proviral DNA rather than evidence for active replication. Nucleic acid testing remains the gold standard for monitoring blood-borne viral diseases such as HIV and HCV, and is more specific than serological tests. Sequencing positive PCR products provides confirmatory evidence, and can illuminate when contamination has occurred [29]. In conclusion, this study failed to find any evidence of XMRV or X-MLV infection in a cohort of HIV-1-positive patients. In view of these negative findings in this highly susceptible group, we conclude that it is unlikely that XMRV or related viruses are circulating at a significant level, if at all, in HIV-1-positive patients in London or the healthy population. (Table 2 caption: 22Rv1 cells were added to leukocyte samples taken from approximately 8 ml blood, extracted, and amplified using primer set 2; the expected number of copies of XMRV, assuming 10 copies per 22Rv1 genome [24], and the results obtained from the TaqMan PCR are shown.)
Cyto-, myelo- and chemoarchitecture of the prefrontal cortex of the Cebus monkey Background According to several lines of evidence, the great expansion observed in the primate prefrontal cortex (PfC) was accompanied by the emergence of new cortical areas during phylogenetic development. As a consequence, the structural heterogeneity noted in this region of the primate frontal lobe has been associated with diverse behavioral and cognitive functions described in human and non-human primates. A substantial part of this evidence was obtained using Old World monkeys as an experimental model, while the PfC of New World monkeys has been poorly studied. In this study, the architecture of the PfC in five capuchin monkeys (Cebus apella) was analyzed based on four different architectonic tools, Nissl and myelin staining, histochemistry using the lectin Wisteria floribunda agglutinin and immunohistochemistry using SMI-32 antibody. Results Twenty-two architectonic areas in the Cebus PfC were distinguished: areas 8v, 8d, 9d, 12l, 45, 46v, 46d, 46vr and 46dr in the lateral PfC; areas 11l, 11m, 12o, 13l, 13m, 13i, 14r and 14c in the orbitofrontal cortex, with areas 14r and 14c occupying the ventromedial corner; areas 32r, 32c, 25 and 9m in the medial PfC, and area 10 in the frontal pole. This number is significantly higher than the four cytoarchitectonic areas previously recognized in the same species. However, the number and distribution of these areas in Cebus were to a large extent similar to those described in the Old World monkey PfC in more recent studies. Conclusions The present parcellation of the Cebus PfC considerably modifies the scheme initially proposed for this species but is in line with previous studies on Old World monkeys. Thus, it was observed that the remarkable anatomical similarity between the brains of genera Macaca and Cebus may extend to architectonic aspects. Since monkeys of both genera evolved independently over a long period of time facing different environmental pressures, the similarities in the architectonic maps of PfC in both genera are issues of interest. However, additional data about the connectivity and function of the Cebus PfC are necessary to evaluate the possibility of potential homologies or parallelisms. Background Several studies carried out in different contexts and based on different theoretical premises indicate that the great expansion observed in the primate prefrontal cortex (PfC) was accompanied by the emergence of new cortical areas during phylogenetic development [1][2][3][4][5]. As a consequence of this process, this region of the primate frontal lobe was converted into a structurally and functionally heterogeneous area.
The primate PfC can be initially divided into lateral, medial and orbital surfaces and further subdivided into areas with distinct architectonic and connectional characteristics. This heterogeneity may explain the variety of behavioral alterations and the diversity and specificity of cognitive deficits observed in human and non-human primates after lesions or reversible suppression of restricted areas of the PfC [6][7][8][9][10][11][12][13][14][15][16][17][18]. Architectonic studies of primate PfC confirm this heterogeneity. In Old World monkeys, Brodmann [1] divided the PfC into six different areas. Subsequently, Vogt and Vogt [19] differentiated nine areas in the Cercopithecus dorsolateral PfC (DlPfC). In 1940, Walker [20] carried out a specific study on the rhesus PfC (Macaca mulatta), in an attempt to adapt his observations to the patterns noted by Brodmann [21] in the human brain. Walker [20] defined nine cytoarchitectonic areas in the rhesus PfC (Figure 1A) which would be comparable to areas of similar nomenclature in the human brain. This cytoarchitectonic division proposed by Walker is the most widely accepted. However, subsequent studies carried out in different contexts and using connectional, cyto-, myelo- and chemoarchitectonic techniques (Figure 1B,C) have modified this initial parcellation of the monkey PfC either by the subdivision of pre-existing areas or by the modification of their limits [5,[22][23][24][25][26][27][28][29]. (Figure 1: A, from Walker (1940); B and C, maps from more recent studies of Macaca PfC by Carmichael and Price (1994) and Preuss and Goldman-Rakic (1991), respectively; D, from von Bonin (1938), in which the Cebus PfC was subdivided into three areas, FGP, frontalis granularis posterior, FGA, frontalis granularis anterior, and FO, frontal orbital area, plus the limbic anterior area, LA, on the medial surface.) All of these studies were carried out in Old World monkeys, whereas the PfC of New World monkeys has been poorly studied. The evolutionary history of this group of primates is still unclear and subject to disagreement [30], but it is accepted that they have evolved independently from Old World monkeys over a period of 35 million years. The effect of this parallel evolution on the organization of phylogenetically recent cortical areas such as those of the PfC still needs to be elucidated. The capuchin monkey (Cebus apella) was chosen for this study due to its similarity with the most intensively studied Macaca monkey. Cebus exhibits brain and body sizes comparable with those of several species of macaque monkeys, reducing possible allometric differences. In addition, the pattern of cortical fissuration is virtually identical in Cebus and Macaca, facilitating anatomical comparison. Among New World monkeys commonly used in brain research, such as squirrel monkeys and marmosets, Cebus is the only one whose PfC consistently exhibits a well-defined arcuate sulcus in the frontal lobe separated from and arching around the caudal end of the principal sulcus (prs; Figure 2), an anatomical configuration that some authors consider as one criterion that distinguishes cercopithecoids from ceboids [31]. Although this anatomical similarity raises the possibility of potential homologies or parallelisms, the remarkable lack of more consistent data about the architecture, connectivity and function of the Cebus PfC prevents any progress on this issue.
The only study on the architecture of the Cebus PfC, carried out in the context of an overall analysis of the entire cerebral cortex, divided it into four different areas (Figure 1D). The Cebus PfC parcellation proposed by von Bonin [32] differs considerably from the macaque parcellation proposed by Walker [20] (Figure 1A), a fact that may indicate great architectonic differences in the PfC of these two species. In view of the limitations of von Bonin's study, such as the use of a single animal and only Nissl staining, a more comprehensive architectonic study of the Cebus PfC is necessary to evaluate possible architectonic similarities and differences between Cebus and Macaca. In the present study, we used the traditional Nissl and myelin staining methods together with histochemistry using the lectin Wisteria floribunda agglutinin and immunohistochemistry using the SMI-32 antibody, two architectonic tools widely employed in the demarcation of cortical and subcortical morphofunctional areas of several species. Results In this study, twenty-two areas were differentiated in the Cebus PfC (Figures 3; 4). Considering the cortical similarity observed between Macaca and Cebus, each area was designated by the same numeric terminology adopted in previous studies carried out in Old World monkeys, which follow the architectonic scheme used by Walker [20] (Figure 1B). This terminology was adopted not to establish homologies but rather to permit a rapid topographic comparison due to the widespread acceptance of the division proposed by Walker for the primate PfC. External morphology of the PfC in Cebus monkeys The pattern of cortical fissuration of the Cebus brain has been addressed by several authors emphasizing its great similarity with the macaque brain [33,34]. The external anatomical aspect of the Cebus PfC is illustrated in Figure 2. (Figure 2 caption: Surface view of the lateral, medial and orbital prefrontal cortex of Cebus apella, showing the anatomical division adopted in this study; dotted lines define approximate borders between gyri and solid lines indicate the sulci.) Following the criteria adopted in previous studies carried out in monkeys, the Cebus PfC was divided into three regions: lateral, medial (MPfC) and orbital (orbitofrontal cortex, OfC). The lateral region extends from the frontal pole to the arcuate sulcus, including the dorsolateral PfC and part of the ventrolateral convexity. Although in the initial description of von Bonin [32] the caudal limit of "area frontalis granularis" of Cebus extends caudally beyond the arcuate sulcus (Figure 1D), it was observed that this sulcus established a limit between the agranular-dysgranular cortex of the precentral gyrus (PrG) and the granular cortex of the prefrontal area. The MPfC occupies the medial surface of the PfC from the frontal pole to the anterior extremity of the cingulate sulcus (cgs). However, since architectonic studies of PfC in macaques include the precallosal extension of the anterior cingulate gyrus (ACgG), this area was also included in the present study. Finally, the OfC occupies the ventral surface of the PfC extending from the frontal pole rostrally to the anterior perforated substance. Overview of staining patterns Nissl The cytoarchitecture of the Cebus PfC (and the frontal lobe as a whole) revealed a granular-dysgranular-agranular rostrocaudal gradation. An example of this transition could be observed in the superior frontal gyrus (SFG), occupied by areas 10 and 9d.
Caudally, layer IV gradually narrowed, disappearing in the precentral gyrus (PrG). This type of cortex, bordering the agranular cortex and characterized by a rudimentary layer IV with no clear laminar demarcation, is designated dysgranular, and represents a transition between the granular and agranular isocortex. In the lateral surface of the PfC, areas 10, 12l, 46v, 46d, 46vr, 46dr, 8v, 8d, and 45 had granular characteristics, with well developed layers II and IV, clearly demarcated from adjacent laminae. Although a few subtle cytoarchitectonic differences had been observed in these areas, the border between them was not always noted using this staining method. A similar transition was observed in the medial and orbital surfaces of the Cebus PfC (Figure 5A,B). WFA The plant lectin Wisteria floribunda agglutinin (WFA) labels N-acetylgalactosamine residues of the extracellular matrix. Areas with intense WFA staining differed from faintly stained areas by the density and intensity of perineuronal nets (PNs) and by the different intensity of the neuropil. The cortical labeling was arranged in bands that could occupy one or more layers. Generally, infragranular layers showed the densest staining in each area, with the labeling occasionally reaching the white matter. In some areas, layers II and III were also labelled, although less intensely than infragranular layers. Nets were observed surrounding the soma and proximal segment of the axon and dendrites of non-pyramidal and some pyramidal neurons, mostly distributed in layers V and VI (Figure 5E,F). (Figure 3 caption: Surface view of the lateral, medial and orbital prefrontal cortex of Cebus apella, with the architectonic parcellation based on the results of the present study; dotted lines define approximate architectonic borders, solid lines indicate the fundus of sulci, and dashed lines define the lip or angulus of a sulcus; in the orbital view, the temporal pole has been cut off to expose the posterior orbital surface.) An overall rostral to caudal labeling gradient was observed, with the agranular and dysgranular regions of the caudal PfC showing the densest WFA labeling. SMI-32 SMI-32 exhibited a heterogeneous labeling pattern across the Cebus PfC. Two bands with varying levels of SMI-32 immunoreactivity were usually observed over layers III and V. These bands, designated the supra- and infragranular bands, showed immunoreactivity in small to large pyramidal neurons, including their proximal processes and fragments of apical dendrites (Figure 5C,D). (Figure 5 caption: High magnification photomicrographs showing cellular details of the techniques used in this study. In A and B, photomicrographs taken from layer V of Nissl-stained sections; small, medium, and large-sized pyramidal neurons can be observed. In C and D, cell bodies and dendrites of pyramidal cells showing SMI-32 immunoreactivity in cortical layers III (C) and V (D); note the intense staining in cell bodies and in apical (arrows) and basal (arrowheads) dendrites. In E, perineuronal nets (PNs) stained with WFA ensheath layer III non-pyramidal neurons in area 45, and in F, PNs surround layer V pyramidal neurons in area 32. In all cell types, staining intensity decreases from the perikaryon to the distal portions of dendrites. In G, myelin staining of area 9m; note the thick vertical fascicles and the outer band of Baillarger (asterisk). Calibration bar in F applies to all figures except G.)
In the brain sections examined in this study, the greatest density of SMI-32 positive neuronal soma was noted in supragranular layers, mainly in layer IIIc, with some also in layer IV. Comparatively, few immunoreactive neuronal soma were observed in infragranular layers. In addition, a variable level of neuropil immunoreactivity both in the supra- and infragranular bands was observed. Myelin The black-gold staining pattern distinguished densely myelinated areas in the lateral PfC from less stained areas in the medial and orbital surfaces. In addition to this basic characteristic, in some areas the visualization of vertical fascicles or of the inner and outer bands of Baillarger (ibB and obB) made it possible to establish areal boundaries (Figure 5G). Architectonic parcellation Lateral PfC (Table 1) Area 10 Nissl The frontopolar region had a well developed layer II. Layer III contained small-sized cells with weak stain, except in IIIc, where they were more stained and larger. Layer IV was well developed. Cells in Va were more densely packed than in IIIc, and Vb almost blended with layer VI, where small-sized neurons predominated (Figure 6E). WFA This area was not sharply demarcated in relation to the adjoining caudal areas using this technique (Figure 6G). It exhibited a weaker WFA staining pattern than that observed in area 9. Supragranular layers exhibited discrete pale nets, and the neuropil was weakly stained. The labeling was somewhat more intense in layers V-VI. SMI-32 The supragranular band consisted of weak neuropil labeling, profiles of apical dendrites and soma of sparsely distributed pyramidal neurons. The neuropil in the infragranular band was more densely labeled, exhibiting few immunoreactive neurons in layer Va (Figure 6H). Myelin The frontal pole revealed poor to moderate myelination, basically concentrated in infragranular layers, where thin vertical fibers extended from the white matter (Figure 6F). The SFG had moderate myelination, becoming more intense caudalwards. Area 9 This area occupied part of the lateral (area 9d) and medial (area 9m) surfaces of the superior frontal gyrus (SFG). On the DlPfC, 9d ventrally reached the border between the SFG and the medial frontal gyrus (MFG); and on the medial surface area 9m extended up to the cingulate sulcus (Figures 3; 4). It was limited caudally by the cortex of the PrG, but this transition could not be sharply demarcated. Nissl In this area, layer II was not well developed. Layer IIIa contained small-sized cells, sparsely scattered, with weak to moderate staining. Layers IIIb and IIIc had small and medium-sized cells, respectively. Cells of IIIc were slightly more stained and separated from layer Va by a poorly developed layer IV. Layer Va exhibited well pigmented medium-sized cells, and layers Vb and VI had small-sized cells and no clear limits (Figure 7A,I). Radial striations were observed in the infragranular layers reaching layer III. This architectonic pattern could also be observed in 9m (Figure 6A). WFA In 9d, the most intensely stained band coincided with layer VI, reaching the white matter (Figure 7C,K). This band exhibited numerous nets surrounding non-pyramidal and a few pyramidal neurons, and the neuropil was densely stained, decreasing in layer V. Layer V showed nets involving small- and medium-sized cells, and the staining could also be observed surrounding vertical fibers that occasionally reached layer III. In IIIc the neuropil was faintly stained, but some nets could still be observed.
In the medial extension of this area (9m), the staining intensity in layer VI increased although the labeling in supragranular layers was weaker. In addition, the labeling of vertical fibers was denser than that observed on the dorsal surface (Figure 6C). SMI-32 Caudally, area 9d exhibited denser immunoreactivity than area 10. The bilaminar pattern was less evident, and there was an intense labeling of neuropil and processes. Several small- to medium-sized, densely immunoreactive pyramidal neurons were observed in layers IIIc, IIIb and IV. In the infragranular layers, the number of immunoreactive neurons was small and the labeling was restricted mainly to neuropil and fragments of apical dendrites (Figure 7D,L). The labeling of 9m was similar to 9d, although less intense (Figures 6D; 8C). Myelin In 9d, infragranular layers were heavily myelinated, with prominent vertically oriented fiber bundles extending from the white matter to layer III. The obB was easily discernible, and supragranular layers exhibited a sparse plexus of fine myelinated fibers (Figure 7B,J). The medial extension of the SFG (area 9m) showed a similar staining pattern, but supragranular layers were more myelinated and the obB more evident than in 9d (Figures 6B; 8B). Periprincipalis areas (46d, 46dr, 46vr and 46v) Following the nomenclature adopted by Walker [20], the periprincipalis region was designated area 46. However, in the present study this region was subdivided into four different architectonic sectors: 46d and 46v in the dorsal and ventral walls of the prs respectively, and areas 46dr and 46vr in the dorsal and ventral crowns. Nissl In the banks of the prs, area 46d exhibited a well developed and densely packed layer II, showing clear limits with layer III. Layer IIIa had small-sized neurons, moderately stained. Neurons in layer IIIc were small- to medium-sized and intensely stained. Layer IV was well developed, and in Va neurons were intensely stained. The limit between layers Vb and VI was not clear, because both had medium-sized cells and moderate pigmentation (Figure 9A,G). In 46v, the architectonic characteristics were similar, but pyramidal neurons in layer III were less densely packed than in 46d. In these areas (46d and 46v) supragranular layers (II and III) were more developed than infragranular layers (Figures 9A; 10A). In the dorsal crown of the prs, area 46dr exhibited transitional characteristics between areas 46d and 9d. The most distinctive aspect was the density decrease in layers II and IV dorsalwards. Va exhibited medium-sized cells slightly more stained than in IIIc. Layers Vb and VI had pale stained small-sized cells, with no clear definition between the two layers (Figure 9A,D). In the ventral crown of the prs, area 46vr exhibited similar characteristics to 46v but with layer IV somewhat more developed, showing densely packed cells. Layer III presented clear lamination, and the supra- and infragranular compartments were equally prominent. Radial striations could be noted in the infragranular layers, mainly in layer V (Figures 9A; 10E). WFA The staining in 46dr was weaker than in area 9d. In supragranular layers, discrete nets surrounding small cells were observed, and the neuropil was weakly stained. Deep layers had a staining pattern similar to 9d, but somewhat less intense. In the caudal half of the prs, the labeling was more intense, but rostrally it was weak, with no clear demarcation from the adjacent area 10. The walls of the prs exhibited lower levels of WFA reactivity.
In the dorsal bank (46d), a small number of nets involved non-pyramidal neurons in layers III and IV, and a faintly stained band of neuropil with some darkly stained nets was present in deeper layers. In the ventral bank (46v) the WFA staining was still weaker (Figure 10C). WFA staining increased in the ventral crown of the prs (46vr; Figure 10G). The most intensely stained band coincided with layer V, with moderately labeled neuropil and a high concentration of nets mainly encircling non-pyramidal neurons. In layer VI the labeling was slightly weaker. Superficial layers had poorly stained neuropil, with nets surrounding small and medium-sized non-pyramidal cells. SMI-32 Dorsally, the immunoreactivity pattern in area 46dr was less intense and showed clear-cut limits with area 9d (Figure 9C,F). Small- to medium-sized, densely immunoreactive pyramidal neurons were observed in the supragranular band, especially in layer IIIc and sparsely in layer V. The lips of the principal sulcus were only slightly immunoreactive, clearly distinguishing this region (areas 46d and 46v) from neighboring areas 46dr and 46vr (Figure 9C). In the upper lip (area 46d), immunoreactive neuronal structures were sparser than in area 46dr, occupying only layer III and forming occasional clusters (Figure 9I). The infragranular band contained only neuropil and apical dendrite profiles. Immunoreactivity was less pronounced in area 46v than in area 46d, with only sparse soma immunoreactivity (Figures 9C,I; 10D). Area 46vr showed a significant increase in SMI-32 immunoreactivity, permitting a clear distinction from area 46v (Figure 9C). Layers IIIb-IIIc had many small- to medium-sized pyramidal neurons. The infragranular band consisted essentially of neuropil, neuronal processes and some pyramidal cells. Myelin Area 46dr exhibited lighter myelination than 9d. The obB was narrower and less stained, and vertical fibers were sparser and thinner (Figure 9B,E). Myelination increased in area 46d. The supragranular layers displayed a delicate obB, constituted by thin horizontal fibers. These characteristics were also observed in 46v, but here the supragranular layers showed lower levels of myelin staining (Figures 9B,H; 10B). In 46vr, infragranular layers were more heavily myelinated than in area 46v, with evident vertical fiber fascicles; however, the obB was not clearly discernible. The supragranular layers were poorly myelinated (Figures 9B; 10F). Area 12 This area occupied part of the ventrolateral convexity of the lateral PfC (area 12l), reaching the orbital surface of the fronto-orbital gyrus (FOG; area 12o). Nissl In the ventrolateral convexity, layer IV seemed narrower in 12l than in 46vr, and the lamination in layer III was less evident. There was no obvious predominance between supra- and infragranular layers. Caudally, some darkly stained cells could be distinguished in layers IIIc and Va, similar to the adjoining area 45 (Figure 11A). In 12o, layer IV was narrower than in 12l and cells in IIIc were somewhat larger and more stained. Supragranular layers were more prominent than the infragranular ones. WFA The cortex in area 12 was more intensely stained than the adjacent cortical areas 45 and 46vr. In 12l, the most intensely labeled band coincided with layer V, reaching layer VI. The neuropil was intensely labeled and a high concentration of nets could be observed (Figure 11C). Layers III and IV exhibited a band of WFA staining with nets surrounding medium-sized neurons and weakly stained neuropil.
On the orbital surface, the staining pattern of 12o was similar, but the labeling of layers III-IV was discrete. Caudally, the emergence of the precentral opercular cortex (PrCO) in the ventral PrG caused a variation in the WFA staining. WFA labeling was more intense than that observed in area 12, concentrating on layers V-VI and reaching the white matter.

SMI-32
Ventrally, the labeling pattern in area 12l was denser than in area 46vr, with an increased number of immunoreactive neurons in the supra- and infragranular bands (Figure 11D). Area 12o had immunoreactive characteristics similar to 12l but somewhat less intense. The bilamination was clear, with numerous pyramidal neurons in both the supragranular and infragranular bands, although the greatest number was in layer III (Figure 12A).

Myelin
Area 12l exhibited stronger myelination than area 46vr, with an evident obB and heavy staining in the infragranular layers (Figure 11B). Area 12o showed a similar staining pattern, but somewhat less intense than 12l.

Prearcuate areas (45, 8d and 8v)
Technical artifacts due to the presence of sulci and plane-of-section problems impaired a clear analysis of the prearcuate region, near the caudal end of the prs. Area 45 occupied the anterior bank of the inferior arm of the arcuate sulcus, extending anteriorly to the caudal third of the inferior frontal gyrus (IFG; Figures 3; 4). Dorsally, still in the anterior bank of the arcuate sulcus, areas 8d and 8v were distinguished.

Nissl
Area 45 exhibited very well developed granular layers II and IV and a clear limit between layers II and III. In IIIc, large-sized and darkly stained pyramidal cells gave this area a peculiar characteristic. These large, well-stained cells were also observed in Va. Vb and VI showed small-sized cells. In 8v, granular layers II and IV were well developed. In IIIa and IIIb, cells were small, sparsely packed and weakly stained. In contrast to area 45, IIIc and Va displayed medium-sized cells, somewhat more stained in Va. Layers Vb and VI had poorly stained small-sized cells. The cytoarchitectonic pattern in 8d was similar, but cells in layer IV were somewhat more sparsely distributed. Radial striations were observed in both 8d and 8v (Figure 7A,E).

WFA
Area 45 demonstrated a large number of strongly stained nets and moderately stained neuropil in layer IV, reaching layers III and Va, in addition to a less stained band in layer VI. Between these two bands there were few nets, and the neuropil was discretely stained. Dorsally, the labeling in 8d was weaker than that observed in area 45, with only one band of neuropil visible in layer V and a few moderately stained nets in layers V and III (Figure 7G). In 8v, the neuropil in layer IV was somewhat more intensely stained than in 8d.

SMI-32
Area 45 had a moderate level of labeling. The supragranular band contained soma profiles surrounded by intense neuropil. There was also an increase of neuropil labeling in layer IV, but the bilaminar aspect remained. The infragranular band exhibited moderate neuropil labeling and scarce dendritic profiles. Dorsally, area 8d had a moderate level of immunoreactivity, with medium- to small-sized pyramidal cells in layer IIIc and, to a lesser degree, in layers IV, IIIa and V (Figure 7H). Ventrally, in 8v the labeling was similar, but with somewhat denser neuropil.

Myelin
Myelination increased ventrally in the anterior bank of the as. Area 8 was not clearly subdivided and exhibited a well-myelinated obB with thin vertical fibers extending from the white matter (Figure 7F).
Superficial layers were poorly myelinated, with a fine fiber plexus. Area 45 revealed a heavy myelination pattern in deep cortical layers, although without a clear organization of vertical fibers. Superficial layers were moderately myelinated in this area, with a fine plexus of sparsely distributed fibers.

OfC and gyrus rectus (Table 2)

Area 11
Nissl
On the orbital surface, area 11m exhibited a thin and sparse layer II. Layer III was also sparse and contained small- to medium-sized pyramidal neurons, with some densely stained neurons in IIIc. The limit between layers IIIc and IV was well defined. Neurons in Va were somewhat more densely packed. Layers Vb and VI had small-sized neurons, and the limits between these layers were not visible. In 11l, layer IV was narrow, and Va showed well-stained medium-sized cells, a characteristic that differentiated this area from the adjoining 12o.

WFA
Area 11l (Figure 12B) exhibited a band in layer V, reaching layer VI and the white matter. In this band, darkly stained nets were observed surrounding neuronal somata and horizontal fibers, and the neuropil was moderately stained. In layer IV, small nets and weakly stained neuropil were observed, in addition to pale nets in layer III. In the mos, this arrangement gradually disappeared, and labeling was almost absent in the parafundic cortex. Medially, 11m showed a compact band in layer VI, with moderately stained neuropil and darkly stained nets involving medium-sized cells. In the remaining layers, the labeling was almost absent.

SMI-32
There was a clear decrease in the density of SMI-32 staining on the orbital surface. Area 11l had clear-cut limits with its neighboring area 12o (Figure 12A). This area exhibited faint labeling of neuropil in the supra- and infragranular bands. Small, densely stained pyramidal neurons could be observed in layer III, and rarely in infragranular layers. Medially, 11m still preserved bilaminar characteristics. Layer III contained immunoreactive neurons and moderately immunoreactive neuropil, with a broader infragranular band (Figure 12A).

Myelin
Area 11l had sparse myelination, with the obB and infragranular layers less stained than in area 12o. In 11m, the obB was faintly stained and the ibB was not discernible. Vertical fiber fascicles were observed in the infragranular layers of this area, resembling the aspect observed in 46vr (Figure 12C).

Area 13
Nissl
Following Walker's parcellation of the Macaca PfC [20], the present study designated the central orbital region of the Cebus monkey as area 13. However, due to its architectonic heterogeneity, it was further subdivided into lateral (13l), intermediary (13i) and medial (13m) areas. In 13m, layer II was thin and compact, without clear limits with IIIa. Layer IIIb was well developed, constituted by discrete clusters of small-sized neurons, while IIIc exhibited moderately stained small- to medium-sized neurons. In Va, neurons were medium-sized and more stained than in IIIc, and Vb blended with layer VI. Compared with 13m, 13i exhibited less developed layers II and IV. Layers Vb and VI had moderately stained small-sized neurons. These cytoarchitectonic characteristics could also be observed in 13l; however, this area contained larger cells in Va.

WFA
Staining became more intense caudally in the central orbital region. Area 13i was the most intensely stained, showing a bilaminar aspect. The superficial band was weakly stained, with sparse labeling of neuropil and pale nets involving small non-pyramidal neurons.
This superficial band was almost indiscernible in the adjoining areas 13l and 13m (Figure 13B). The deep band occupied the lower part of layer V, with well-stained neuropil and larger and more abundant nets than those observed in the superficial band. This deep band reached layer VI and the white matter, extending over the orbital extension of the claustrum. WFA staining decreased medially. In 13m, the WFA labeling pattern was narrow, being observed mainly in layer VI (Figure 13B,E).

SMI-32
Among the subdivisions of area 13, only 13i had a bilaminar pattern (Figure 13C,F). The staining intensity in the infragranular band was similar in the three subdivisions, with neuropil and fragments of cell bodies without a clear pyramidal shape. The supragranular band exhibited moderate immunoreactivity only in 13i, with discrete neurons and processes distributed in layer III.

Myelin
In the central orbital region, area 13i had light to moderate myelination, but stronger than the adjoining cortical areas 13l and 13m. The obB was clearly discernible in 13i, and a fine fiber plexus could be observed in supragranular layers. Deep layers had moderate staining. In 13l and 13m, the staining was lighter and more diffuse, with short vertical fiber bundles extending from the white matter in 13m (Figure 13A,D).

Area 14
Nissl
In the anterior part of the gyrus rectus (GRe), area 14r had a poorly developed layer II with no clear limits with layer III. Layers IIIa and IIIb had sparse, moderately stained small-sized cells. In IIIc, cells were slightly more stained and larger than in IIIb. Layer IV was not well developed, and small-sized neurons predominated in infragranular layers. The limit between layers Vb and VI was not clear (Figure 11E). Caudally, 14c exhibited a cytoarchitecture similar to that of area 14r, although with decreased cellular density in granular layers II and IV and radial striations in infragranular layers.

WFA
The WFA staining decreased considerably in the GRe. At the anterior level, area 14r was not sharply demarcated from the adjacent areas 11l and 10 (Figures 8A; 11G). The most important characteristics differentiating area 14 from the laterally located area 11 were the absence of labeling in supragranular layers and the less stained white matter. The labeling was concentrated in a narrow band over layer VI, with pale nets and moderately stained neuropil. Caudally (14c), the labeling was weaker, and WFA involved thin vertical fibers in deeper layers (Figure 13B).

SMI-32
In the GRe, the staining intensity was extremely weak. Rostrally (14r), the infragranular band had modest immunostaining, corresponding primarily to neuropil and a few perikarya, with very few immunoreactive somata in layer IIIc. Caudally, 14c exhibited very light immunostaining, limited to neuropil in the infragranular band (Figures 8C; 11H; 12A; 13C).

Myelin
The GRe (area 14r) had lower levels of myelin staining than neighboring areas. Only some short, thin vertical fibers emerging from the white matter were present, and these did not reach supragranular layers (Figure 12C). Caudally, 14c was even less myelinated (Figures 8B; 11F; 13A).

Medial PfC (Table 3)
As the dorsal, anterior and ventral borders of this surface represent medial extensions of areas already described, we will describe its central portion.

Areas 32 and 25
Nissl
In the medial surface, area 32 occupied most of the ACgG. Area 32c was situated anteriorly to the corpus callosum and was circumscribed dorsally and ventrally by the cgs and the rostral sulcus, respectively.
Infragranular layers were more developed relative to the supragranular compartment. Layer II was poorly developed and had no clear limits with the densely packed layer III. There was no obvious lamination in layer III, while layer IV was absent. Layer V had radial striations and small- to medium-sized cells, which were somewhat larger and more stained than those in layer III. There was no clear limit between layers V and VI (Figure 8D). Rostrally, area 32r showed an overall greater cortical thickness than 32c. A thin, cell-sparse layer IV could be visualized. Layer Va contained well-stained medium-sized cells, contrasting with the small-sized cells observed in layer VI (Figure 14A). Ventrally to the rostrum of the corpus callosum, area 25 showed supragranular layers more developed than in area 32c, and there was an evident subdivision between layers II and III. Layer III had sparse clusters of moderately stained small-sized neurons, no clear lamination and a scarcely discernible layer IV. In layer V, which almost blended with layer VI, cells were slightly larger and more densely packed than in layer III (Figure 14E).

WFA
Areas in the medial surface were sharply labeled with WFA (Figure 8A). Area 32c exhibited a narrow but very intense WFA band in layer VI, reaching layer V. Patches of deeply stained neuropil nearly prevented the visualization of nets, which, when visible, revealed a strong labeling pattern (Figures 8A,F; 13B). Rostrally, 32r had a similar labeling pattern; however, the band over infragranular layers was wider, exhibiting some very pale nets reaching layer III (Figure 14C). Area 25 could be differentiated from the adjacent dorsal area 32c and ventral area 14c by clear-cut boundaries (Figures 13B; 14G). The labeling was weak, and only a narrow band over layer VI reaching layer V could be observed.

SMI-32
Area 32c had discrete immunoreactivity. Only a sparse immunoreactive band of neuropil in infragranular layers, with very few pyramidal neurons and dendrites concentrated in layer V, was observed. Rostrally, the labeling in 32r was somewhat more intense, and some pyramidal neurons could also be seen in layer IIIc. The labeling in 25 was similar to that observed in area 32c, but even lighter (Figures 8G; 14D,H).

Myelin
On the medial surface, myelination decreased considerably caudalwards. Area 32c exhibited a moderate to poor staining pattern (Figure 8B,E). Thin, short vertical fibers extending from the white matter were observed, but they rarely reached superficial layers. The labeling in 32r was also weak, but the obB and ibB were still discernible (Figure 14B). Ventrally, area 25 was poorly myelinated, but somewhat more stained than area 32c (Figures 13A; 14F).

Discussion
The parcellation of the Cebus PfC adopted in this study considerably modified the scheme initially proposed by von Bonin [32] for this species (Figure 1D) but is in line with previous studies on Old World monkeys [5,20,24-29]. Thus, the remarkable anatomical similarity observed between the brains of the genera Macaca and Cebus may extend to architectonic aspects.

Comparison with previous architectonic maps of the monkey PfC
In this study, twenty-two different areas were differentiated in the Cebus PfC (Figures 3; 4). This number is greater than that previously recognized by von Bonin [32] in the same species using cytoarchitectonic criteria (Figure 1D).
Due to the different approaches used in the two studies, it is difficult to compare the current results with those obtained by von Bonin, or even by other authors who have used only cytoarchitectonic techniques. In fact, the inherent subjectivity of cytoarchitectonic observations has led to different interpretations concerning the limits between areas and the criteria defining them. To avoid this ambiguity, the combination of several architectonic tools allowed a more direct and reproducible definition of the extent and boundaries of areas in the PfC. Two other factors should be mentioned when analyzing the differences between the present findings and those reported by von Bonin. First, von Bonin's observations were based on a single animal. Second, the criteria used by von Bonin and his collaborators to divide cytoarchitectonic areas were more restricted than those used by others, justifying their criticism of the division proposed by Walker [20] for the Macaca PfC [22]. More recent studies, however, using different architectonic methods together with connectional and physiological data, have confirmed the existence of a larger number of areas in the primate PfC, corroborating Walker's initial observations [5,23-29,35-39]. In the lateral surface of the Cebus PfC, whereas von Bonin [32] (Figure 1D) differentiated only two areas, the posterior and anterior "area frontalis granularis", eleven areas were differentiated in the current study: 9d, 8 (d and v), 45, 46 (d, dr, v, vr), 12l, and 10, topographically comparable with the homonymous areas described for Macaca by Walker [20]. There are, however, some differences between the present observations and those of Walker. In the genus Macaca, there is disagreement on the description of the prearcuate region. Walker [20] recognized area 8A in the anterior bank of the superior ramus of the arcuate sulcus (sras), area 8B extending dorsally in the SFG, and area 45 occupying part of the inferior ramus of the arcuate sulcus (iras), extending rostrally. While more recent studies confirm these findings [5,28], others confined area 8 to the prearcuate region [25]. Recent analyses of area 45 also differed from the division initially proposed by Walker. Using architectonic and connectional criteria, Petrides and Pandya [29] designated the area lying in the ventral part of the rostral bank of the lower limb of the arcuate sulcus as 45B, and its rostral extension as 45A. This division was also used by Gerbella et al. [39], who provided a detailed description of the architectonic organization of the caudal ventrolateral PfC of the macaque monkey, including part of the prearcuate region, using a combination of cyto-, myelo- and chemoarchitectonic criteria. They identified two areas that are almost completely limited to the anterior bank of the ias, 8/FEF dorsally and 45B ventrally,
and two other areas occupying the ventral prearcuate convexity, area 8r, rostral to area 8/FEF, and area 45A, rostral to 45B.

Table 3 (fragment). Architectonic summary for areas 32r and 25:
- 32r - Nissl: a thin, cell-sparse layer IV can be visualized. WFA: similar to 32c, but the labeled band over IG layers is wider, with some pale nets reaching layer III. SMI-32: immunoreactivity somewhat more intense than in 32c, with some pyramidal neurons in layer IIIc. Myelin: moderate myelination; discernible obB and ibB; thin, short vertical fibers in IG layers.
- 25 - Nissl: layer IV scarcely discernible; SG layers somewhat more developed than in 32c; clear limits between II and III; small-sized cells sparsely packed, with no clear lamination in III; layer V almost blends with layer VI. WFA: poorly stained; clear-cut limits with the adjacent dorsal 32c and ventral 14c; a narrow band over layer VI reaching layer V, with a few nets and dense neuropil near the white matter. SMI-32: similar to 32c, but still lighter. Myelin: poorly myelinated, but somewhat more stained than 32c.

In this study, based on coronal sections, the prearcuate region was subdivided into three areas: 8d dorsally, 45 ventrally, and 8v between them. Dorsally, a transitional region between area 9d and the agranular cortex of the PcG was observed; however, this region was not consistently characterized as an architectonically independent area. Functional studies indicate that, in Cebus as well as in Macaca, the region designated here as area 8d coincides with the frontal eye field [40-46]. The map presented in this study also differed from Walker's descriptions [20] regarding the precise localization of area 46 and its borders with areas 9 and 12. This region has been thoroughly studied in primates because of its central role in complex cognitive processes related to working memory [6-10,47,48] and also because of its possible recent phylogenetic origin [5]. In his study, Walker [20] described area 46 as extending dorsally and ventrally in relation to the prs, occupying part of the MFG and IFG. More recent studies on Macaca diverged in defining this area. While the findings of Barbas and Pandya [25] are in general agreement with Walker's map, Preuss and Goldman-Rakic [5] (Figure 1D) recognized areas 46d and 46v in the walls of the prs and areas 46dr and 46vr in the dorsal and ventral rims of the prs, respectively, a division compatible with our observations in Cebus. Likewise, Petrides and Pandya [28,37] confined area 46 to the lips of the rostral extent of the prs, while designating the cortex on the lips of the caudal portion of the prs and the immediately adjacent cortex as area 9/46, indicating that this region had been included as part of area 9 in the classic maps of the human cortex. Cytoarchitectonically, the Cebus OfC presented a progressive differentiation from a homotypical granular cortex near the frontal pole to an agranular pattern in the caudal region, a characteristic also observed in Macaca [13,26]. Some of the architectonic tools used in this study showed a similar rostral-to-caudal transition: WFA staining, for example, is extremely weak near the frontal pole and becomes more intense caudally. In Cebus, von Bonin [32] recognized only two areas in the OfC, the orbital extension of the "frontal granular anterior area" and the "frontal orbital area", both sharing several architectonic characteristics. In the present parcellation, the OfC was divided into five different areas. Although the same designations used by Walker [20] were adopted, some areas of the Cebus OfC were subdivided and their limits changed due to their heterogeneity. These findings are in accordance with recent studies on Macaca. According to Carmichael and Price [27], in Macaca the medial orbital sulcus (mos) also divides area 13 into medial (areas 13b and 13a) and lateral (area 13m) sectors (Figure 1C). In Cebus, the medial sector of area 13 (13m), lateral to area 14, seems to partly correspond to area 13a described by Amaral and Price in Macaca [24] and to areas 13a and 13b described by Carmichael and Price [27] (Figure 1C), and was probably included in area 14 (14L, 14VL) by Preuss and Goldman-Rakic [5] (Figure 1D).
The present division of area 12 into two areas (12o and 12l) is in line with the cyto- and myeloarchitectonic divisions proposed by Barbas and Pandya [25] and Preuss and Goldman-Rakic [5] in Macaca. Petrides and Pandya [29] designated this area as 12/47 in order to standardize the human and monkey architectonic characteristics, thus indicating that the previously labeled area 47 in the human brain is similar in architecture to Walker's area 12. In Cebus, area 14 broadly coincided with the GRe and consisted of rostral (14r) and caudal (14c) areas. This basic parcellation is largely in agreement with previous studies on the Macaca PfC that described the GRe as consisting of at least one rostral and one caudal sector, such as areas 14 and 25 of Barbas and Pandya [25], 14r and 14c of Carmichael and Price [27], and areas 14a, 14l, 14vl, 14v and 14m of Preuss and Goldman-Rakic [5]. Caudally, in the immediate vicinity of the anterior olfactory nucleus and the prepiriform cortex, the OfC assumed a clearly agranular aspect. In macaque monkeys, these agranular-periallocortical areas (Oa-p and O-Ins, Figure 3), which correspond to the caudal continuation of areas 12 and 13, have received different designations. Many authors have associated this region of the primate cortex with the insula and the claustrum [49]. In the medial region of the PfC, the present parcellation was consistent with previous studies on Macaca, especially with the maps of Vogt et al. [36] and Carmichael and Price [27], which recognized the medial projections of areas 9, 10 and 14 and the limbic areas 32, 25 and 24 in the medial wall of the frontal lobe. In Cebus, the "limbic anterior area" described by von Bonin [32] was subdivided into areas 32r, 32c and 25. Area 32c, rostral to the corpus callosum, partly corresponds to the macaque area 32 of Barbas and Pandya [25], Vogt et al. [36] and Carmichael and Price [27] (Figure 1C), and to area PL of Preuss and Goldman-Rakic [5]. Dorsally and ventrally, this area was separated from the superior and inferior adjacent cortical regions by clear-cut boundaries in WFA and SMI-32 staining intensity. Rostrally, the Cebus area 32r seemed to correspond to area MF of Preuss and Goldman-Rakic [5]. Area 25, ventral to area 32c, resembled the area of the same designation described by Vogt et al. [36], Carmichael and Price [27] and Barbas and Pandya [25] in Macaca, as well as area IL of Preuss and Goldman-Rakic [5].

Validity of areas and functional implications
Due to the lack of more detailed information about the connectivity and function of the Cebus PfC, it is difficult to know whether the areas revealed in this study correspond to functional cortical areas. Several studies, however, indicate that some of the architectonic tools used in this investigation are in fact able to accurately identify brain morphofunctional areas and their boundaries. WFA staining, for example, has been successfully used to define functional cortical areas in marsupials [50], rats [51,52], mice [53] and Mongolian gerbils [54]. Furthermore, WFA staining also has an area-specific distribution pattern within the human visual, motor and somatosensory cortices [55-57] and thalamus [58]. The functional significance of the heterogeneous distribution of some of the probes used in this study throughout the Cebus PfC is not completely known. SMI-32 labels a subpopulation of pyramidal neurons in the primate cerebral cortex [71], and other neuronal types in the thalamus and cerebellum [70].
Based on these results, it is possible to conclude that SMI-32 can identify neurofilament components in neuronal populations with different morphological, functional and connectional characteristics [60]. In the cerebral cortex, SMI-32-positive neurons are mainly located in layers III and V, but the proportion of these cells in each layer may vary depending on the cortical area. It is known that layer III is the main source of ipsi- and contralateral corticocortical projections, while layers V and VI are preferentially associated with subcortical targets [72]. The larger number of SMI-32-positive neurons in layer III observed in the present study seems to indicate that this probe preferentially labels corticocortical projection cells in the Cebus PfC. Regarding WFA, the extracellular matrix (EM) and PNs have been associated with the stabilization and formation of synapses, the guidance of axons to their targets, maintenance of the composition of the extracellular compartment, formation of a link with the intracellular cytoskeleton, and the concentration of growth factors around certain neurons [73]. These functions attributed to the EM and PNs might potentially modify local neuronal activity and thus contribute to the functioning of neuronal networks. Additionally, the fact that PNs were initially observed surrounding GABAergic, fast-spiking non-pyramidal neurons has led some authors to suggest that PNs are mainly involved in local inhibitory circuits [74,75]. However, the present results and those obtained in other studies analyzing different species and cortical areas [76] indicate that a significant number of pyramidal neurons, mainly in infragranular layers, have a dense covering of PNs (Figure 5E,F). This might indicate a relationship between PNs and corticofugal excitatory circuits.

Conclusions
This study indicated the existence of structural similarities between the Cebus and Macaca PfC. Cortical areas considered evolutionary specializations of anthropoid primates, such as area 46 in the DlPfC [5,9], were identified in the Cebus PfC based on their topographical and architectonic characteristics. Additional information on the connectivity, chemical structure (in progress at our laboratory) and function of the Cebus PfC could clarify how these phylogenetically recent cortical areas have responded, from an evolutionary and adaptive perspective, to the different environmental pressures faced by New and Old World monkeys during their 35 million years of parallel evolution.

Methods
For this study, five young adult male Cebus apella monkeys obtained from the Primate Center at the School of Dentistry of Araçatuba (UNESP - Univ Estadual Paulista, São Paulo, Brazil) were used. Experimental procedures were conducted according to the Guidelines for the care and use of mammals in neuroscience and behavioral research [77] and were approved by the local laboratory animal care and use committee (Comissão de Ética na Experimentação Animal - CEEA-FOA/UNESP # 2007-002476). All efforts were made to reduce the number of animals and to minimize suffering. Animals were anesthetized with sodium pentobarbital (30 mg/kg, i.p.) and transcardially perfused with 0.9% saline (800 ml), followed by 1500 ml of 4% paraformaldehyde in 0.1 M acetate buffer, pH 6.5, and subsequently by 1500 ml of 4% paraformaldehyde in 0.1 M borate buffer, pH 9.0. Brains were exposed and blocked with the aid of a stereotaxic device.
Blocks were then removed from the skull and placed in a cryoprotective solution containing 10% glycerol and 2% dimethyl sulfoxide in 0.1 M borate buffer, pH 9.0, at 4°C. Three days later, blocks were transferred to a similar solution with an increased concentration of glycerol (20%) for four additional days, according to previously described methods [78]. To avoid the formation of crystals that may occur during the freezing of large brain pieces, the blocks were immersed in isopentane at -80°C for one hour to allow quick freezing, and then sectioned at 40 μm in the coronal plane using a freezing microtome (SMR 2000, Leica Instruments GmbH, Germany) and dry ice. Sections were collected in ten different series in a solution of 0.1 M phosphate buffer, pH 7.3.

WFA histochemistry
Free-floating sections were treated with 1% H2O2 in 0.1 M Tris-buffered saline (TBS) for 30 min, washed, and subsequently incubated with 2% bovine serum albumin (BSA) in TBS for 1 h. Following three rinses in TBS, sections were incubated with biotinylated WFA (Sigma, L1766) at a concentration of 3 µg/ml in TBS-BSA for 16 h, gently shaken at 4°C. The sections were then rinsed in TBS and incubated for 1 h in Extravidin-Peroxidase (Sigma, E2886). Lectin-binding sites were visualized with the chromogen VIP (Vector, SK4600), which yielded a red-purple reaction product. Sections were then mounted on gelatinized slides, dehydrated through a graded alcohol series (70-90-100-100%, 1 min each), cleared in xylene (three changes: 5, 10 and 30 min) and coverslipped with DPX. In control experiments, biotinylated WFA was omitted, and no specific staining was observed in these sections.

Black Gold II staining
Sections were previously mounted on 1% gelatin-coated slides and air dried, rehydrated in distilled water for 2 min and transferred to a 0.2% Black-Gold II solution at 60°C for 12-18 min. This solution was made by adding 100 mg of Black-Gold II (Histo-Chem Inc., # 1BGII) to 50 ml of 0.9% NaCl. Incubation was interrupted when the horizontal parallel fibers of layer I were visible. Sections were rinsed for 2 min in distilled water, fixed for 3 min in a sodium thiosulfate solution, rinsed in tap water for 10 min (two 5 min changes), dehydrated through a graded alcohol series (70-90-100-100%, 1 min each), cleared in xylene (5, 10 and 30 min) and coverslipped with DPX.

Nissl stain
The conventional thionin staining method was used to establish the general cytoarchitectonic characteristics and to aid in localizing the laminar distribution of the other stains.

Analysis
Sections were examined by brightfield microscopy. Selected images were digitized at high and low magnifications using a Leitz Aristoplan microscope or a Carl Zeiss stereomicroscope (STEMI 2000-c), respectively, both coupled to a Carl Zeiss Axiocam MRc5 digital camera. To eliminate background introduced by digitization, color balance, brightness, contrast and sharpness were corrected in each preparation. Because the distinction of limits between areas may vary among observers, the sections were independently examined by three of the researchers of this study and, when necessary, a consensual border was adopted. As defined in previous studies using similar multiarchitectonic approaches, areas were defined only if they showed differential staining patterns with at least two morphological methods and if they could be consistently found in all animals studied [35].
Two-dimensional schematic representations of the lateral, orbital and medial surfaces of the Cebus PfC were designed (Figure 4), showing the approximate location of areal boundaries, as presented in previous studies (Figure 1). The sulcus and gyrus nomenclature for Cebus monkeys used by von Bonin [32], the Template Atlas of the Primate Brain [79] and A stereotaxic atlas of the brain of the Cebus monkey (Cebus apella) [80] was adopted in this study.
Genetic and environmental effects on seed weight in subspecies of big sagebrush: Applications for restoration

The sagebrush steppe is a patchwork of species and subspecies occupying distinct environmental niches across the intermountain regions of western North America. These ecosystems face degradation from disturbances and exotic weeds. Using sagebrush seed that is matched to its appropriate niche is a critical component of successful restoration, improving habitat for the threatened greater sage-grouse and other species. The need for restoration is greatest in basin habitats composed of two subspecies: diploid basin big sagebrush (A. tridentata subsp. tridentata) and tetraploid Wyoming big sagebrush (subsp. wyomingensis). In this study we assess seed weights across five subspecies-cytotype groups of big sagebrush and examine their genetic and environmental components. Our goal is to determine whether seed weight can be used as a diagnostic test for subspecies and seed certification. Seed weight was measured from 55 wild collections and from progeny derived from these collections and grown in two common gardens. A linear mixed-effect model showed that 91% of the variation in seed weight is explained by genetic, genetic × environment and environmental effects (conditional R² = 0.91). Moreover, genetic effects alone, the subspecies-cytotype groups, explained 39% of the variation (marginal R² = 0.39). Of the five subspecies-cytotype groups, most had overlapping weights using conservative 99% confidence intervals. However, diploid tridentata and wyomingensis had non-overlapping 99% confidence intervals. To demonstrate the application of seed weighing to assess the subspecies purity of commercial seed lots, we compared confidence intervals of tridentata and wyomingensis developed from the experimental data to seed weights of commercial lots. The results showed that only 17% of the commercial seed lots certified as wyomingensis had mean seed weights that fell within the confidence intervals for this subspecies. The remaining, lighter seed lots (83%) matched the weights of tridentata. While restoring sagebrush ecosystems is a multifaceted problem, a fundamental component of restoration is ensuring that the appropriate seed is used. We found that seed weight is principally affected by genetic factors, with limited environmental effects. Seed weighing is an effective application to assess the subspecies purity of wyomingensis and tridentata seed and could be used as a certification step for evaluating commercial collections used in restoration.

INTRODUCTION
Even with due diligence in site preparation and planning, the goals set for ecological restoration can be forfeit if the seed and seedling traits (i.e., genetics) are not appropriate for the site. Restoration of arid lands is fraught with challenges, including weed control and infrequent episodes of the favorable weather necessary for seedling establishment (Hardegree et al. 2011). These challenges are common in efforts to restore sagebrush ecosystems (Davies et al. 2011). Sagebrushes (Artemisia subgenus Tridentatae) are the cornerstone of North American cold deserts, fostering a diverse assemblage of native grasses and forbs, insects, and obligate fauna including the greater sage-grouse (Centrocercus urophasianus). Sagebrush ecosystems have been degraded primarily by fire and human disturbances and the subsequent displacement by exotic weeds. Degradation of sagebrush ecosystems has been the principal cause of sage-grouse loss (Connelly et al.
2004), leading to its threatened listing status. Sagebrush is composed of several species, but big sagebrush (Artemisia tridentata) is by far the most ubiquitous in North American cold deserts. This species can be divided into three widespread subspecies (tridentata, vaseyana and wyomingensis) based on characters including growth, cytotype and chemical compounds (McArthur and Welch 1982, McArthur et al. 1988). In addition, these characteristics reflect genetic and physiological attributes (Kolb and Sperry 1999, Richardson et al. 2012) and, most importantly for restoration and resiliency, different adaptive strategies. Subspecies are associated with climatic and/or soil properties (Barker and McKell 1983, McArthur and Sanderson 1999, Still and Richardson 2015). Subspecies vaseyana occupies cooler, more mesic mountain environments, whereas tridentata and wyomingensis typically occupy warmer, drier basins (McArthur and Plummer 1978, Mahalovich and McArthur 2004). Within these basins, tridentata occupies drainages, washes and floodplains with deeper soils and lengthened summer retention of soil moisture (Barker and McKell 1983), supporting greater growth and fecundity (McArthur and Welch 1982). In contrast, wyomingensis exhibits slower growth and occupies arid uplands, which comprise much of the landscape found in basin habitats. The spatial separation of tridentata and wyomingensis habitat can be a matter of meters, depending on topographic and soil heterogeneity. Polyploidy, hereafter referred to as cytotype, is an important taxonomic and adaptive characteristic. McArthur and Sanderson (1999) have shown wyomingensis to be exclusively tetraploid, while vaseyana and tridentata include both diploid (2x) and tetraploid (4x) populations. It is important to note, especially in the context of this study, that diploids of tridentata and vaseyana are more frequent, comprising about 75% of the samples collected. Moreover, 4x-tridentata occurrences were generally found in Washington State and in the southern periphery (i.e., Arizona, New Mexico and southern California) of this species' distribution (McArthur and Sanderson 1999). These areas are generally outside the geographic region where much of the sagebrush restoration and seed collection is conducted (i.e., the Great Basin). From the research described above, a clear picture has emerged that A. tridentata subspecies occupy distinct environmental niches. Among these niches, there is a disproportionate need for restoration in wyomingensis habitat. The vast majority of the area degraded by disturbances and weed invasion occurs at the driest end of the spectrum of sagebrush habitats (Chambers et al. 2007). These areas are largely comprised of wyomingensis habitat (Rowland et al. 2010). Therefore, it is critical that seed be identified to subspecies prior to use in restoration (Shaw et al. 2005). Sagebrush seed is wild collected and purchased by commercial seed brokers, who pay seed collectors based on bulk weight. From there, seed is cleaned and subsequently purchased by private landowners (e.g., mining companies) and by U.S. state and federal land management agencies, principally the Bureau of Land Management (BLM). Subspecies purity is verified by site inspection, typically by the Utah Crop Improvement Association (http://www.utahcrop.org/). However, assessing the purity of wyomingensis seed collections based on site inspection is particularly challenging because of (1) the frequent co-occurrence of wyomingensis with 2x-tridentata and (2) the fact that 2x-tridentata produces ca.
four times more seed per plant than wyomingensis (Richardson, unpublished data from common gardens). This difference in seed yield is likely amplified in natural stands, given the resource disparities in the niches occupied by the two subspecies. Therefore, harvesting 2x-tridentata in an intended wyomingensis collection potentially has a major influence on subspecies purity. Intraspecific variation in seed size can be an important strategy for environmental adaptation. There is now broad support among studies that increased seed size can have a positive effect on emergence and survival (Bonfil 1998, Sõber and Ramula 2013) and could be especially important in arid ecosystems (Larson et al. 2014). Other studies have noted a positive relationship between seed size, increasing DNA content (Chung et al. 1998) and cytotype (Bretagnolle et al. 1995). Here, we investigate seed weight differences among subspecies and cytotypes of big sagebrush. Our aims are to: (1) determine whether significant seed weight differences exist between subspecies-cytotype groups, (2) assess the extent to which differences among subspecies-cytotype groups are attributable to genetic, genetic × environmental (G × E) or environmental effects based on a linear mixed-effects model, (3) use seed weight means and confidence intervals derived from tridentata and wyomingensis to evaluate commercially collected seed purchased by the BLM for restoration, and (4) discuss the potential applications for assessing wyomingensis seed purity.

METHODS

Seed collections
Experimental collections of big sagebrush seed were obtained from three collection sources: from the wild, hereafter referred to as original seed, and from two common gardens, Majors Flat, Utah and Orchard, Idaho, USA. Majors Flat has a relatively cool and mesic climate, and Orchard a relatively warm and dry climate. The original seed was collected from 55 natural populations located throughout the distribution of the species in fall 2009 (Fig. 1). The original seed was used to generate seedlings for the common gardens, and seedlings derived from a single wild-collected plant are considered maternal half-sibs, hereafter referred to as a family. Families from each population were arranged so that at least one family member was represented at both gardens. Common garden locations were chosen based on logistics and contrasting ranges in climate for big sagebrush (Appendix: Table A1). Seedlings were grown in a greenhouse for 3 months, hardened outside, and planted into the two common gardens in the spring of 2010. Seed was collected from a total of 44 populations at the Orchard garden and 55 at the Majors Flat garden in 2012 and 2013. The discrepancy in the number of collected populations was due to the poor seed yield of some populations at Orchard. Of the 55 populations of original seed, only 39 could be included in this study because the seed supply had been exhausted in generating plants for the common gardens (Table 1). Subspecies and cytotype were determined for the experimental collections by a combination of morphology, UV fluorescence, flow cytometry and genetic markers. As described in Richardson et al. (2012), tissue samples from the original seed collection were used to evaluate morphology and UV fluorescence (Stevens and McArthur 1974). Flow cytometry was performed on at least one plant per family in each population.
These data were used to create five subspecies-cytotype groups: subspecies tridentata and vaseyana included both diploid (2x) and tetraploid (4x) populations, while subspecies wyomingensis was exclusively tetraploid. Similar findings were reported by McArthur and Sanderson (1999). The seed weight data from the experimental collections were used to evaluate the weights of subspecies-certified commercial collections. Commercial seed lots, provided courtesy of the BLM, were harvested from geographically defined sites in the Great Basin in 2013 and 2014 (Table 1; Appendix: Table A2). These collections are bulked on a site basis (ca. 5 to 10 hectares) and used for restoration plantings. Seed was certified for location and subspecies based on site inspection and morphological examination by the Utah Crop Improvement Association. A total of 30 seed lots were examined (weight and UV test), comprising 5 tridentata, 18 wyomingensis and 7 vaseyana.

Seed cleaning and measurements
For the common garden and original collections, seed harvesting was conducted in November to ensure all seed had ripened. Flowering stalks were clipped, bagged and then dried at room temperature for a few days to aid in cleaning. Seed was cleaned of chaff using soil sieves of decreasing screen size: 1, 0.5 and 0.4 mm. Any remaining chaff was removed by hand. Cleaned seed was placed in plastic bags and stored in a -20°C freezer until weighing. An analytical scale (0.1 mg readability and repeatability) was used to weigh 10 randomly selected seeds per sample. For the original seed collections, typically five families per population were selected for weighing, and seed from individual plants was subsampled three times. In the common gardens, typically two plants from different families were collected from each population, and individual plants were subsampled six times. This process was replicated for two years, 2012 and 2013, for both common garden collections (Table 1). For the commercial collections, seed was cleaned as described above, and each seed lot was then subsampled 10 times. Increased subsampling was conducted because the collections were from bulked seed. To better confirm the relationship between seed weight and the composition of 2x-tridentata and wyomingensis in commercial collections, two high-weight and two low-weight wyomingensis-labeled seed lots were chosen for flow cytometry analysis. Seeds were germinated and grown for two weeks, and the leaves were then harvested for flow cytometry. Flow cytometry was conducted on 25 to 30 seedlings of each seed lot. Subspecies tridentata and wyomingensis were putatively assigned 2x and 4x cytotypes, respectively.

Statistical analysis
To assess the variance components affecting seed weight, the experimental collections (i.e., original and common gardens) were fitted to linear mixed-effect models (LMM) using restricted maximum likelihood. The LMM was conducted in R v3.1.2 (R Core Team 2014) using the packages lme4 v1.7 (Bates et al. 2012) and lmerTest v2.2 (Kuznetsova and Brockhoff 2015). The study design described above was used to develop the LMM response function. Subspecies-cytotype groupings were specified as fixed effects. Random effects were arranged in a nested hierarchy with the following structure: collection + year × collection + population × (year × collection) + family × (population × (year × collection)). The response function can also be segregated into genetic, environmental and G × E components: subspecies-cytotype groups are considered genetic components; collection and year, environmental; and population and family, G × E.
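As a rough illustration, a model of this form could be fitted in R with lme4 along the following lines. This is a minimal sketch, not the authors' script: the data frame seed and its column names (weight10 for ten-seed weight, group for subspecies-cytotype, plus collection, year, population and family) are assumed for illustration, and the R² computation follows the general variance-partitioning formulation cited via Johnson (2014).

library(lme4)      # lmer(); the study used lme4 v1.7 in R v3.1.2
library(lmerTest)  # adds Satterthwaite-approximated tests for fixed effects

# Ten-seed weight modeled with subspecies-cytotype as the fixed (genetic)
# effect and the nested design factors as random intercepts.
fit <- lmer(weight10 ~ group +
              (1 | collection) +                        # environmental
              (1 | collection:year) +                   # environmental
              (1 | collection:year:population) +        # G x E
              (1 | collection:year:population:family),  # G x E
            data = seed, REML = TRUE)

anova(fit)     # F test with Satterthwaite approximation (lmerTest)
VarCorr(fit)   # variance components of the random terms

# Likelihood-ratio chi-squared test for a random term, e.g. family
fit.nofam <- update(fit, . ~ . - (1 | collection:year:population:family))
anova(fit, fit.nofam)

# Marginal (fixed effects alone) and conditional (fixed + random) R2
vf <- var(as.vector(model.matrix(fit) %*% fixef(fit)))  # fixed-effect variance
vr <- sum(sapply(VarCorr(fit), function(v) v[1]))       # random-effect variances
ve <- sigma(fit)^2                                      # residual variance
c(marginal = vf / (vf + vr + ve), conditional = (vf + vr) / (vf + vr + ve))

# 99% profile confidence intervals, as used for the group comparisons
confint(fit, method = "profile", level = 0.99)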
For the LMM, P values were generated using F tests with Satterthwaite approximation for fixed effects and likelihood-ratio chi-squared tests for random effects. Variances and P values were used as a guide to evaluate the importance of model predictors. In addition, the variability explained by fixed and random effects together and by fixed effects alone was evaluated using conditional and marginal R², respectively (Johnson 2014). Seed weight confidence intervals were calculated for each subspecies-cytotype group. These calculations were completed with the 'profile' method at 99% confidence limits using the confint function in the lme4 package.

RESULTS
A box plot illustrates the distribution of observed data and notable patterns (Fig. 2). First, mean seed weights of subspecies-cytotype groups from the experimental data ranged from 1.8 mg (2x-tridentata from the original collection) to 3.5 mg (4x-vaseyana from Majors Flat). Second, subspecies-cytotype groups had similar seed weights among collection sources; however, the Majors Flat collection had slightly higher seed weights than the other collections. Third, 2x-tridentata consistently had the lowest weight, and 4x-vaseyana and wyomingensis had the highest seed weights across collections (Fig. 2). The statistical significance of these observations is explored below. Results from the LMM indicate that seed weight is largely determined by genetic factors. Fixed effects, the genetic groupings of subspecies and ploidy, explained 39% of the variation (marginal R² = 0.39), and combined fixed and random effects explained 91% of the variation (conditional R² = 0.91). Based on the experimental design, the random effects can be segregated into two factors: (1) environmental, collection and collection × year, and (2) G × E, population and family, nested within the environmental factors. Collection alone was not significant. However, collection × year had a significant variance component (0.025, p = 0.004). This environmental variance was relatively small compared to the G × E variances of population (0.096, p < 0.0001) and family (0.127, p < 0.0001; Table 2). Substantial differences in slopes were estimated between the intercept (2x-tridentata) and the other subspecies-cytotype groups. Tetraploid tridentata and vaseyana are predicted to weigh 41% (0.72 mg) and 20% (0.47 mg) more, respectively, than their 2x counterparts (Table 2). The largest differences among subspecies-cytotype groups were approximately 1 mg. These differences were found between 2x-tridentata and wyomingensis and between 2x-tridentata and 4x-vaseyana (Table 2). Confidence intervals at the 99% level support distinct weight differences between 2x-tridentata and wyomingensis and between 2x-tridentata and 4x-vaseyana. Most importantly, the geographically co-occurring 2x-tridentata and wyomingensis had the widest margin between upper and lower confidence intervals (Fig. 3; Appendix: Table A3). Environmental effects, collection × year, had varying degrees of positive and negative effects on the seed weight intercept. These effects ranged from -0.19 mg for the original collection to 0.27 mg at the Majors Flat garden in 2012 (Fig. 4). Collection × year effects estimated for the Orchard garden were consistent and did not appear substantially different from zero. However, the year effect was considerably different at Majors Flat.
Following the 0.27 mg effect in 2012, the 2013 effect was not different from zero (Fig. 4). Significant differences in LMM slopes among subspecies-cytotype groups (Table 2) translated into non-overlapping confidence intervals for two group comparisons: diploid tridentata (1.77 mg) and wyomingensis (2.76 mg), and 2x-tridentata and 4x-vaseyana (2.81 mg) were distinct. Other group comparisons had overlapping confidence intervals (Fig. 3; Appendix: Table A3).

Fig. 2. A box plot of seed weights for subspecies-cytotype groups for experimental collections: Majors Flat and Orchard common gardens and the original source. T = subsp. tridentata, V = subsp. vaseyana, and W = subsp. wyomingensis. Numbers 2 and 4 indicate cytotype. Sample sizes of subspecies-cytotype groups in each collection can be found in Table 1.

Fig. 3. Ten-seed weight parameter values (dots) and 99% confidence intervals (whiskers) for subspecies-cytotype groups of Artemisia tridentata. Data values can be found in Appendix: Table A3.

Confidence intervals from 2x-tridentata and wyomingensis were used to evaluate the seed weights of commercial seed lots used in restoration. This comparison focused on seed lots certified as either subspecies wyomingensis or 2x-tridentata for two reasons: (1) these subspecies are by far the most abundant in basin ecosystems, and (2) differentiation of vaseyana from the other subspecies can be accomplished with a UV fluorescence test (Stevens and McArthur 1974). Of the 18 seed lots labeled as subspecies wyomingensis, only three (17%) had mean weights that fell within the confidence intervals of wyomingensis. The 15 seed lots with lower weights fell within the confidence intervals of 2x-tridentata. Of the five seed lots labeled as subspecies tridentata, four (80%) fell within the confidence intervals of 2x-tridentata. To illustrate the comparisons between seed lot and experimental collection weights, smoothed histograms of subsampled weights from each seed lot are shown in conjunction with confidence intervals from the experimental collections (Fig. 5). Upon examination of UV fluorescence in the commercial seed lots, six out of seven (86%) of the vaseyana seed lots had elevated UV fluorescence at the expected level (rating of 3 or greater; Appendix: Table A2). Examination of cytotype by flow cytometry in wyomingensis-labeled seed lots further confirmed the correspondence between weights and the proportions of 2x-tridentata and wyomingensis seed. Flow cytometry was conducted on leaves of germinated seeds from two seed lots with high weights (ca. 2.3 mg), exceeding the lower confidence limit for wyomingensis (lots A and C), and two seed lots with low weights (ca. 1.7 mg), below the confidence limit for wyomingensis (lots B and D; Fig. 5). The higher-weight seed lots were largely comprised of wyomingensis (i.e., tetraploids), at 97% and 57% for lots A and C, respectively. In contrast, the lower-weight seed lots were largely 2x-tridentata: seed lots B and D consisted of 11% and 0% wyomingensis seedlings, respectively. This confirms that the wyomingensis seed present in seed lots A and C raised the seed weights compared to seed lots B and D, which were principally comprised of 2x-tridentata. Therefore, seed weight can be used as a surrogate to assess the relative proportions or purity of commercial seed lots of 2x-tridentata and wyomingensis.
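To make this screening step concrete, the comparison of a commercial lot against the group confidence intervals could be scripted as below. This is a hedged sketch: the numeric bounds are placeholders standing in for the published 99% limits (Appendix: Table A3), and the function name classify_lot is illustrative, not part of any certification protocol.

# Placeholder 99% confidence limits; substitute the values from
# Appendix: Table A3 for 2x-tridentata and wyomingensis.
ci <- list("2x-tridentata" = c(lower = 1.5, upper = 2.1),
           "wyomingensis"  = c(lower = 2.4, upper = 3.1))

# Classify a lot from its subsampled ten-seed weights (mg).
classify_lot <- function(weights, ci) {
  m <- mean(weights)  # mean of the 10 subsamples taken per bulked lot
  hits <- names(ci)[vapply(ci, function(b)
    m >= b["lower"] && m <= b["upper"], logical(1))]
  if (length(hits) == 0) hits <- "neither interval"
  sprintf("mean %.2f mg: consistent with %s", m, paste(hits, collapse = ", "))
}

# Example: a lot certified as wyomingensis that weighs like 2x-tridentata
lot <- c(1.70, 1.62, 1.81, 1.75, 1.66, 1.73, 1.69, 1.77, 1.64, 1.72)
classify_lot(lot, ci)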
DISCUSSION

Genetic and environmental effects
The feasibility of developing a seed-weight-based application for evaluating big sagebrush subspecies is contingent on the degree of environmental influence: large environmental effects on seed weight would negate the precision needed to determine subspecies. In big sagebrush, variation in seed weight can be largely assigned to genetic effects, mainly from polyploidization, while purely environmental effects were small. These results are supported by other studies reporting a positive relationship between increasing DNA content and seed size (Caceres et al. 1998, Chung et al. 1998). The causal mechanism for this increase is unknown, but Beaulieu et al. (2006) proposed that increasing DNA content and the concomitant enlargement of cell size could cause an increase in seed size. While environmental effects accounted for a small amount of the variation, the patterns appear to be largely predictable among gardens and years. At Majors Flat, a relatively cool and wet site typical of vaseyana habitat, a positive seed weight effect of 0.27 mg was observed in 2012. However, in the following year, 2013, essentially no effect was found (Fig. 4). This yearly effect appears inexplicable based on weather: precipitation and temperature were relatively similar between years at this garden (e.g., ca. 415 mm precipitation). One possible explanation could be competition. The rapid growth at Majors Flat led to interspace closure between most plants by 2013, resulting in a greater degree of competition. Concomitant with the reduction in seed weight in 2013, per-plant seed yield was dramatically reduced, by 82%, between 2012 and 2013 at Majors Flat (Richardson, unpublished data). Reduced allocation of resources to reproduction has been well documented in competition studies (Weiner 2004). In big sagebrush, competition may have a large effect on seed yield, but also a small effect on seed weight. Competition may also explain the decreased seed weight in the original seed collection. In most cases, the original seed collection was derived from mature stands where plants were in close intra- and interspecific competition. As expected given competition, the original collection had more negative effects on seed weight than the common gardens (Fig. 4). At Orchard, other factors may be at work affecting seed weight. At this warm and dry garden, growth rate was much slower than at Majors Flat, and therefore plant competition appears to be negligible. However, Orchard weather could be a factor: this garden received almost half the precipitation of Majors Flat and experienced longer, drier summers (Appendix: Table A1). In 2012 and 2013, ten populations at Orchard did not produce any seed. Despite these extreme conditions, the environmental effect on seed weight was only slightly negative at this garden (Fig. 4). Previous studies of intraspecific seed weights have largely shown a significant genetic effect. Our seed weight results for big sagebrush show significant genetic effects between subspecies-cytotype groups. In addition, within groups, populations and families had significant effects, although these factors interact with the environment (Table 2). Our results further support the large body of literature that seed characteristics are under genetic control (Biere 1991, Castro 1999, Halpern 2005,
2012) and have high heritability (Zas and Sampedro 2014).

Fig. 5. Comparison of Artemisia tridentata weights from 23 commercially collected seed lots to confidence intervals derived from experimental collections. Weights of seed lots labeled as subsp. wyomingensis (dark gray) and 2x-tridentata (white) are plotted as a histogram and smoothed with a kernel density function. Solid red and blue vertical lines represent the medians of the experimental collections for 2x-tridentata and wyomingensis, respectively. Dashed lines show the upper and lower 99% confidence intervals. Seed lots labeled with letters indicate high-weight (A, C) and low-weight (B, D) seed that was germinated to confirm cytotype proportions.

As described elsewhere (Zas and Sampedro 2014), partitioning maternal genetic versus maternal environment effects requires methodology beyond the scope of this study. However, the variation in seed weight exhibited in the original maternal environment and in seed from daughters grown in common gardens of contrasting environments showed that seed weight is consistent across environments compared to other traits. For example, the Orchard garden had a 42% reduction in growth and an 84% reduction in seed yield compared to Majors Flat (Richardson, unpublished data). Beyond being a diagnostic trait between 2x-tridentata and wyomingensis, it is not certain how seed weight could influence plant fitness. For instance, does seed size have a functional role in seedling establishment? No formal study in big sagebrush has examined seed size and its effect on seedling growth, but other studies support seed size having a positive effect on establishment, survival and growth (Bonfil 1998, Larson et al. 2014). However, it is possible that negative trade-offs (e.g., seed predation) may adversely affect seed dispersal and establishment (Meyer and Carlson 2001, Gómez 2004). Observations from a broader range of taxa in the Tridentatae may support the importance of seed weight. For example, the seed weights of low sagebrush, A. arbuscula, and black sagebrush, A. nova, which occur on more xeric sites (McArthur and Stevens 2004), are considerably heavier than those of big sagebrush (Jorgensen and Stevens 2004, Meyer 2008), suggesting that greater aridity may have a positive effect on seed size in this subgenus.

Restoration implications
The evaluation of commercial seed lots shows mixed success in the seed collection and certification of big sagebrush subspecies. Subspecies vaseyana certified seed lots were largely correctly collected and labeled: of the seven vaseyana seed lots, six had UV fluorescence ratings (>3) within the expected range for this subspecies (Appendix: Table A2). Similarly, four out of five tridentata seed lots had seed weights within the expected range. However, seed certification of wyomingensis seed largely failed over the two years evaluated in this study. We suspect that the results from the 2013 and 2014 commercial seed lots, in which wyomingensis seed lots were comprised largely of seeds from 2x-tridentata, likely reflect commercial seed collections from previous years as well. While onsite certification can determine whether a particular area is suitable for collecting a given subspecies, certification agencies do not have the personnel to monitor seed collectors. It is our intent that seed weighing of basin subspecies can be used not only to evaluate the composition of subspecies in commercial collections, but also to inform the seed industry and land management. Seed weight results could be useful to the seed industry in determining the best sites for collection.
Seed weight results could be useful to the seed industry in determining the best sites for collection. Land managers could utilize seed weights to infer subspecies composition (i.e., tridentata versus wyomingensis), choosing a seed lot that best suits the restoration site by matching the proportion of tridentata and wyomingensis habitat to the composition found in the seed lot. This research brings into question whether many of the previous restoration efforts in wyomingensis habitat were destined to fail due to planting the wrong subspecies, 2x-tridentata, regardless of other factors important to establishment. Recent monitoring and landscape analyses have shown that the Emergency Stabilization and Rehabilitation (ESR) program methods (BAER Guidebook version 4, 2006) failed to produce adequate establishment of sagebrush and sage-grouse cover, especially for sites in wyomingensis habitat. In light of our research, it is likely that planting the wrong subspecies of big sagebrush in the wrong habitat with exotic seeded competitors (e.g., crested wheatgrass, Agropyron cristatum) would certainly provide poor odds for the establishment of big sagebrush cover. The BLM and U.S. Forest Service, which oversee much of the land occupied by sagebrush, have issued policies that direct the use of genetically appropriate plants for restoration (USDA 2008, USDI 2008). Along with the development of seed transfer zones, our proposed application of assessing subspecies tridentata and wyomingensis based on seed weight will aid in fulfilling these policies.

ACKNOWLEDGMENTS

We thank numerous volunteers and BLM, USFS, and Utah DNR staffs for assistance in collection of seed and garden maintenance. Thanks to thoughtful reviews from Drs. L. Chaney, E. D. McArthur, and S. E. Meyer. Funding was provided by the USDI and USFS GBNPP, the USDI GBLCC, and the USFS National Fire Plan (NFP-13-15-GSD-35) and climate change funding. Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Mixed-Mode Solar Drying and its Effect on Physicochemical and Colorimetric Properties of Zompantle (Erythrina americana)

A mixed-mode solar drying system was developed to evaluate the physicochemical and colorimetric properties of Zompantle (Erythrina americana). A 2² factorial design was used; the operation mode (mesh shade and direct) and the airflow (natural convection and forced convection) were established as factors in this design. The initial moisture content of the Zompantle flower was reduced from 89.03% (w.b.) to values that ranged from 3.84% to 5.84%, depending on the operation mode of the dryer; the final water activity ranged from 0.25 to 0.33. The Zompantle's components, such as proteins (4.28%), antioxidant activity (18.8%), carbohydrates (4.83%), fat (0.92%), fiber (3.71%), ash (0.94%), and total soluble solids (3 °Brix), increased as water evaporated during drying. The increment in the Zompantle's components depends on the operation mode; in direct mode with natural convection, the proteins, antioxidant activity, carbohydrates, fat, fiber, ash, and total soluble solids were 6.99%, 61.69%, 79.05%, 1.20%, 3.84%, 8.70%, and 45 °Brix, respectively. The total drying efficiency was 14.84% with the direct mode and natural convection (DM-NC) and 17.10% with the mesh shade and natural convection (MS-NC). The Hue angle measures the property of the color; the indirect mode with natural convection kept the hue angle close to the initial value (29.2°). The initial chroma value of the Zompantle flower was 55.07; the indirect mode with natural convection kept a high saturation (37.58); these drying conditions ensured a red color in the dehydrated Zompantle. Dehydrated Zompantle flowers could have several practical applications, such as an additive in traditional Mexican cuisine.

Supplementary Information The online version contains supplementary material available at 10.1007/s11130-024-01147-0.
Introduction

Zompantle, Erythrina americana Mill, is an endemic tree; it was a sacred tree for the Aztecs, who took advantage of its medicinal properties, which are still used nowadays [1]. There are 113 species of Erythrina worldwide, mainly in tropical and subtropical regions, and it is commercialized in South America, Central America, and West Africa. Twenty-seven of these species are found in regions of Mexico such as Mexico City, Guanajuato, Hidalgo, Jalisco, Michoacán, Morelos, Oaxaca, Chiapas, Guerrero, Nuevo León, Puebla, Querétaro, Tabasco, Tamaulipas, Veracruz, and Yucatán [2]. Erythrina americana Mill, also known as Zompantle, colorin, cochizquilitl, gasparito, pemuche, machete, pichoco, and alcaparras, among other nicknames, belongs to the leguminous family [3]. The Zompantle's blossoms are fried or boiled in stews or sauces and are appreciated because of their high protein and lipid content, representing a great food alternative. However, due to its high moisture content, Zompantle is highly perishable and prone to decomposition reactions; in general, the studied species of edible flowers have a short commercial longevity, which varies from 4 to 10 days, while the maximum total longevity varies from 6 to 14 days [4].

Consumption of dry flowers has increased due to their nutritional and medicinal value; therefore, drying has been the most used among numerous preservation methods. In the open literature, some studies have been reported on the drying of edible flowers such as pumpkin flower [6], walnut male flower [7], daylilies [8], Magnolia liliiflora [9], and marigold flower, jasmine, and carnation [10]. Conventional methods such as freeze drying, combined far-infrared radiation with hot air convection, hot air drying, microwave drying, vacuum drying, hybrid drying, and dehumidified drying have been reported [11]. Nowadays, the application of solar technologies in food preservation has been developed. Solar drying is an effective, safe, and low-priced food preservation technique with minimal environmental impact. This process is more economical than storage because dried flowers occupy less space, weigh less, and do not require refrigeration [5]. A solar dryer uses solar radiation, forced convection, and natural ventilation to decrease the humidity content of a product [12]. This technology can be used to obtain quickly processed products, store them for long periods, and use them conveniently to manufacture formulated foods. Drying decreases the water-related activities of plants and consequently inhibits the growth of microorganisms while decreasing the rate of biochemical reactions, thus extending the shelf life of the products at room temperature [13].

Solar dryers can be classified into two kinds: active and passive. In active systems, fans are integrated into the cabin to enhance moisture removal. In contrast, in passive systems air flows naturally because of lift forces arising from density differences due to a temperature gradient. In direct solar dryers, part of the radiation that the product is exposed to is absorbed by the product itself, which increases the temperature and the product's water evaporation. The humidity exchange from the material's interior to the immediate environment depends on diffusion phenomena, and the mass diffusivity relies on different factors such as shape, structural components, and humidity content. On the other hand, in indirect dryers, the air that passes through the collector is heated and transported to the drying chamber to transfer the
thermal energy to the dehydrated product. For this research, an active mixed-type dryer was used; during the drying process, the solar irradiance was attenuated by using a mesh shade, not only to decrease the drying temperature but also to evaluate the effect of solar and ultraviolet irradiance on the physicochemical properties of Zompantle during the drying process. To the best of our knowledge, there are no reports in the literature about solar drying of the Zompantle flower. Zompantle's flowers could have several practical applications, for instance, as an additive not only in traditional Mexican cuisine but also for dishes such as pasta, creams, flours, and even formulated foods. The broader impact of this research is that its results can be applied to specific problems such as the use of fossil fuels with rising prices, which deteriorate the environment, food waste in rural areas of the country, and hunger and energy poverty; the development of community centers for solar dehydration of food in rural areas could reduce food waste, contribute to reducing environmental impact, increase regional community economic development, and raise the production and availability of nutritious products in the market and for rural families. Therefore, this work aimed to evaluate the effect of mixed-mode solar drying on the physicochemical and colorimetric properties of the Zompantle flower.

Material and Methods

A detailed description is provided in the supplementary information file.

Characterization of Zompantle Flower

The initial moisture content and water activity of the Zompantle flower were 89.03% (w.b.) and 0.970, respectively (Table 1). Zompantle is highly perishable and prone to decomposition reactions; in general, the studied species of edible flowers have a short commercial longevity, which varies from 4 to 10 days, while the maximum total longevity varies from 6 to 14 days [4].
According to Hamrouni [13], flowers have a high moisture content of up to 80% (w.b.). Isis [14], Pinedo et al [15], and Lara et al [16] reported 87.02%, 88.1%, and 86.6% (w.b.) moisture content in the Zompantle; the moisture content of Zompantle ranges from 85.25% to 91.77% depending on the post-harvest phase. The colorimetric analysis of the Zompantle flower showed positive values in the a (45.93) and b (30.35) parameters (Table 1); according to the Hunter system, redness and yellowness are represented by a and b values on the positive side; therefore, a dominant red color in the Zompantle flower was observed. On the other hand, the color property is measured by the Hue angle; in this case, the Zompantle flower showed a hue angle of 33.32°. According to the literature, as the hue angle goes from 0° to 90°, the color of the product passes from red to yellow [17]; in this case, the hue is in the red zone. The chroma value of the Zompantle flower was 55.07, indicating that the red color is pure and intense. The literature reports that pigment is related to purity or color saturation (chroma), which increases with increasing pigment concentration [6]. Pinedo et al [15] reported a chroma value of 15.85%, a hue angle of 45.31, and a lightness of 54.55; these results suggest that their Zompantle flower was less red because of the high lightness and low chroma. The proximal analysis of the raw Zompantle flower showed 4.287% proteins, 0.9455% ash, 0.9237% fat, 3.0 °Brix total soluble solids, an antioxidant activity of 18.8%, 4.83% carbohydrates, and 3.71% fiber; the results were close in some properties to the proximal analysis of Zompantle reported by Sotelo et al [18] (4.10 °Brix, 87.60% moisture, 8.97% ash content, 13.69% fiber, and 56.64% carbohydrates). Lara et al [16] reported the proximal composition of Zompantle (86.6 g/100 g moisture content, 26.2 g/100 g proteins, 2.8 g/100 g fat, 12.7 g/100 g fiber, 5.8 g/100 g ash, and 62.1 g/100 g nitrogen-free extract). Some properties of the Zompantle flower were also reported by Pinedo et al [15] and López et al [19].

Drying Kinetics

The solar drying process of the Zompantle flower was carried out on 31 January, 1-2 February, 7-8 February, and 9-10 February 2023, using an active mixed-type dryer (Supplementary information). The tests ran from 9:00 to 18:00 h; two days were necessary to dry the Zompantle flower, except for the experimental test on 31 January (Fig. 1). Figure 1 shows the drying kinetics of Zompantle; the Zompantle was dehydrated in 480 min (8 h) on 31 January, 660 min (11 h) on 7-8 February, 660 min (11 h) on 1-2 February, and 720 min (12 h) on 9-10 February (Supplementary information). The difference in drying time was caused by the mesh shade and the forced convection. The maximum temperatures registered inside the dryer were 74.36 °C in direct mode with natural convection (31 January) (Fig. 2A) and 67.73 °C with forced convection (1-2 February) (Fig. 2B). As seen from the figures, when the dryer operates in direct mode, the maximum irradiances are 1050 W/m² and 1112.01 W/m², respectively.
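Drying curves like those in Fig. 1 are commonly summarized with empirical thin-layer models; the authors' own kinetic modeling is in the supplementary file. As an illustration only, the Page model MR = exp(-k t^n), a common choice for such data, can be fitted with SciPy; the time and moisture-ratio arrays below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical moisture-ratio data, MR = (M - Me) / (M0 - Me), versus drying time.
t_min = np.array([0.0, 60, 120, 240, 360, 480])        # drying time (min)
mr    = np.array([1.0, 0.62, 0.38, 0.15, 0.06, 0.03])  # moisture ratio (-)

def page(t, k, n):
    """Page thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

(k, n), _ = curve_fit(page, t_min, mr, p0=(0.01, 1.0), bounds=(0.0, np.inf))
print(f"fitted Page parameters: k = {k:.4g} min^-n, n = {n:.3f}")
```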
The solar dryer receives the sun's radiation in the horizontal and vertical planes, favoring homogeneous drying through the temperature increment. When the dryer operated with the mesh shade and natural convection, the solar irradiance decreased to 504.22 and 490.29 W/m², and the maximum drying temperature was 62.28 °C (7-8 February) (Fig. 3A) and 44 °C with forced convection (9-10 February) (Fig. 3B). In all drying conditions, the maximum external ultraviolet radiation ranged from 31.99 to 40.40 W/m², depending on the ambient conditions; however, although the cover of the drying chamber was made of polycarbonate with ultraviolet protection, the ultraviolet radiation detected inside the dryer ranged from 0.1236 to 0.2873 W/m², depending on the operation mode of the dryer. Rodríguez et al [20] reported that the cover material of solar dryers influences fruit properties through ultraviolet radiation and temperature; their results demonstrated that strawberry samples dried under a polyethylene cover retained 40.7% of anthocyanins, whereas under cellular polycarbonate, where the ultraviolet radiation was zero, only 15.5% of total anthocyanins were retained. In this study, the ultraviolet transmittance of the cellular polycarbonate is very low, so the ultraviolet radiation inside the dryer was near zero; the effect of ultraviolet radiation was therefore not significant because the polycarbonate material limited the passage of ultraviolet radiation into the dryer. In the literature, some studies have been reported on the solar drying of edible flowers; García et al [6] reported the drying kinetics of pumpkin flowers using modified solar dryers; in their results, eight to fifteen hours were needed to dry the pumpkin flower, depending on the drying cover employed. Fernandes [21] reported three days of sun drying of Robinia pseudoacacia at 35 °C. The drying time, as well as the retention of the nutritional components of the edible flower, depends on the drying technology and the ambient conditions used to dry the sample [22].

Moisture Content and Water Activity of Dehydrated Zompantle

The initial moisture content of the Zompantle flower was reduced from 89.03% to values that ranged from 3.84% to 5.84%, depending on the operation mode of the dryer (Supplementary Table 1) (Table 2). As seen from Table 2, when the dryer was operated with forced convection, under both the mesh shade and direct mode, higher final moisture contents were obtained (5.38% and 5.84%). On the other hand, lower moisture values (3.84% and 4.06%) were observed with natural convection, both when the mesh shade was used to attenuate the solar irradiance and in direct operation mode. This behavior relates to the high temperatures reached inside the dryer with natural convection. The analysis of variance (Supplementary Table 2) showed that the factors did not affect the Zompantle's final moisture content. Usually, food with high moisture content is very prone to spoilage; in this case, the dried Zompantle flower can be considered safe for storage because the final moisture was reduced to less than 10% [23]. The initial water activity of the Zompantle flower was 0.970, and it was controlled by reducing the moisture content. As seen from Table 2, the final water activity ranged from 0.25 to 0.33; at this water activity, chemical reactions and biological processes will not take place. The analysis of variance revealed that the drying conditions did not affect the final water activity of the dried product.
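The analysis of variance for the 2² design (operation mode × airflow) can be reproduced in outline with a standard two-way ANOVA. A minimal sketch with statsmodels follows; the replicate moisture values are hypothetical placeholders, not the study's raw data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical replicate values of final moisture content (% w.b.)
# for the four factor combinations of the 2x2 design.
df = pd.DataFrame({
    "mode":     ["direct"] * 4 + ["mesh_shade"] * 4,
    "airflow":  ["natural", "natural", "forced", "forced"] * 2,
    "moisture": [4.0, 4.1, 5.8, 5.9, 3.8, 3.9, 5.3, 5.4],
})

model = ols("moisture ~ C(mode) * C(airflow)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the interaction
```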
Colorimetric Analysis in Dehydrated Zompantle Flower

The Zompantle flower showed an initial lightness value of 34.36 (Table 1); this colorimetric property decreased to 23.08 and 27.89 (Table 2) when the operation mode was DM-NC and MS-FC, respectively. The analysis of variance showed that only the operation mode significantly affected the lightness. The lightness decreases due to the product's loss of water content, because the moisture content affects the reflectance color; therefore, the Zompantle tends to be dark. On the other hand, the lightness increased to 36.08 in MS-NC. Changes in food color result from many factors, including the modification of cellular structure, changes in pH, degradation of carotenoids, and loss of water content. Ferouali [24] reported a decrease in color parameters (L, a, and b) using indirect solar drying of the Punica granatum flower at 40, 50, and 60 °C; the best preservation of color was at 40 °C. During heating, the samples can undergo pigment degradation; firstly, a slightly yellow color is observed; then the sample turns red, and if the sample contains high total soluble solids, a brown color can be observed [25].

The Hue angle measures the property of the color and is expressed in degrees. The initial hue angle was 33.32°; this value means that the Zompantle flower is near the red color. The analysis of variance revealed that the interaction of operation mode and airflow significantly affected the hue. The Hue angle starts at the +a axis; 0° is red, and an increase in Hue angle means that the color goes from red toward orange and yellow (90°, +b). According to Table 2, MS-NC keeps the hue angle close to the initial value (29.2°). Chroma is an intensity measurement, taking values from 0 to 60. The initial chroma value of the Zompantle flower was 55.07. According to the results, the sample becomes dark when the chroma values decrease; conversely, the color will be purer and more intense with high chroma. In this case, MS-NC keeps a high saturation (37.58) with a high hue angle (31.65). These drying conditions ensure a red color in the dehydrated Zompantle. The variation between the raw and dried samples is known as the total color difference (∆E). As seen from Table 2, the ∆E values ranged from 29.2 to 40.63. According to the descriptive levels of ∆E, values above 12 mean a noticeable difference with respect to the standard [26]. Thus, although the lowest color difference (29.2) was observed in MS-NC, the color change was appreciable in all cases.
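The colorimetric quantities used above follow directly from the Hunter L, a, b values: hue = arctan(b/a), chroma = sqrt(a² + b²), and ∆E = sqrt(∆L² + ∆a² + ∆b²). A short sketch, checked against the reported raw-flower values (a = 45.93, b = 30.35); small residual differences reflect rounding in the reported inputs.

```python
import math

def hue_deg(a, b):
    """Hue angle in degrees (0 deg = red axis, 90 deg = yellow axis)."""
    return math.degrees(math.atan2(b, a))

def chroma(a, b):
    """Color saturation (chroma)."""
    return math.hypot(a, b)

def delta_e(L1, a1, b1, L2, a2, b2):
    """Total color difference between two samples."""
    return math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)

# Raw Zompantle flower (Table 1): a = 45.93, b = 30.35
print(round(hue_deg(45.93, 30.35), 2))  # ~33.45 deg, close to the reported 33.32 deg
print(round(chroma(45.93, 30.35), 2))   # ~55.05, close to the reported 55.07
```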
Physicochemical Properties in Dehydrated Zompantle

The initial protein content of the raw Zompantle flower was 4.29% (Table 1); however, an increase in this property was observed at the end of the drying process in all treatments. Table 2 shows that the protein content ranged from 4.94% to 7.65% in the dehydrated Zompantle; the highest protein content (7.65%) was observed in MS-NC. The analysis of variance showed that the independent variables significantly affected this response variable, and no drying condition conserved the protein content better than MS-NC. The initial fat content of the Zompantle flower was 0.9237%, and this component ranged from 1.20% to 2.30%, depending on the drying conditions. In general, flowers provide low fat content and, as a result, they are considered low-calorie foods. Pinedo [15] reported the physicochemical properties of edible flowers of wild plants of Mexico such as A. salmiana, A. vera, E. americana, and M. geometrizans; in their investigations, 1.58, 2.95, 1.05, and 1.69 g/100 g of ether extract, respectively, were reported. Ahluwalía [27] used different drying methods (vacuum and cabinet drying) in the dehydration of marigold petals (Tagetes erecta); their results demonstrated that some constituents, such as proteins, ash, and fiber, increased, whereas properties such as antioxidant activity and total phenolic content decreased significantly. Ahluwalía [27] reported an increment in protein content from 2.0% to 4.28% in vacuum drying and to 3.40% in cabinet drying; the ash content increased from 0.45% to 3.20% in vacuum drying and to 2.02% in cabinet drying; finally, the fiber content increased from 1.67% to 10.9% and 12.50% in vacuum and cabinet drying, respectively. Some researchers have reported that a food's components increase as the water evaporates during drying [6,28]. Ferouali [24] reported that the best preservation of bioactive molecules in Punica granatum can be obtained at 40 °C; however, Fernandes [21] mentioned that a high carotene content in marigold can be obtained at 60 °C. Table 2 shows the increments observed in fiber, ash content, antioxidant activity, carbohydrates, and total soluble solids among the components of Zompantle. The fiber in the raw Zompantle flower was 3.71%, and this property ranged from 3.84% to 5.89% in the dehydrated flower; the antioxidant activity increased from 18.8% to values ranging between 35.68% and 61.69%, and the total soluble solids increased from 3.0 to 33-45 °Brix, depending on the drying conditions. The analysis of the results showed that MS-NC and DM-NC better preserved the components of Zompantle. The supplementary information file provides the mathematical modeling of the drying kinetics and the energy efficiency.
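The total drying efficiency quoted in the conclusions is conventionally defined as the energy used to evaporate water divided by the solar energy incident on the collection area; the exact definition used by the authors is in the supplementary file. A sketch under that conventional assumption, with hypothetical inputs:

```python
LATENT_HEAT = 2.26e6  # J/kg, approximate latent heat of vaporization of water

def drying_efficiency(m_water_kg, area_m2, mean_irradiance_w_m2, time_s):
    """Total drying efficiency: energy used to evaporate water divided by
    the solar energy incident on the collection area over the drying time."""
    e_evap = m_water_kg * LATENT_HEAT
    e_sun = area_m2 * mean_irradiance_w_m2 * time_s
    return e_evap / e_sun

# Hypothetical example: 2.0 kg of water removed over 8 h on a 1.5 m^2 aperture
eta = drying_efficiency(2.0, 1.5, 650.0, 8 * 3600)
print(f"total drying efficiency: {eta:.1%}")  # ~16% with these placeholder inputs
```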
Conclusions

In this study, the drying conditions that best conserved the physicochemical properties of the Zompantle flower were obtained by using the mesh shade to attenuate the solar irradiance together with natural convection. At these conditions, the total efficiency was 17.10%, the maximum drying temperature was 62.28 °C, the total protein content was 7.65%, fat 2.30%, fiber 4.93%, ash 8.08%, and total soluble solids 36 °Brix. However, the antioxidant activity can be increased using the direct mode with natural convection (61.69%) or the mesh shade with forced convection (51.14%). Using the dryer with the mesh shade and a polycarbonate cover with ultraviolet protection can attenuate the amount of ultraviolet radiation inside the dryer (0.1131-0.1249 W/m²) and decrease the drying temperature. The moisture content of the Zompantle flower was reduced from 89.03% to values that ranged from 3.84% to 5.84%. The final water activity ranged from 0.25 to 0.33; at this water activity, chemical reactions and biological processes will not take place. An increment in total soluble solids, protein content, fat, ash, and fiber reflected the better-preserved Zompantle components. The mesh shade with natural convection kept a high saturation (37.58) with a high hue angle (31.65); these drying conditions ensure a red color in the dehydrated Zompantle. This study suggests using the solar dryer in indirect mode; in this operation mode, the Zompantle flower is not exposed to direct radiation and is dehydrated only by the air that passes through the collector to the drying chamber. Dehydrated Zompantle flowers could have several practical applications, for instance, as an additive not only in traditional Mexican cuisine but also for dishes such as pasta, creams, flours, and even formulated foods.

Fig. 1 Drying kinetics of Zompantle (Erythrina americana) carried out in a mixed-type solar dryer on 31 January 2023

Fig. 2 Temperature, solar, and ultraviolet radiation during the drying process of the Zompantle flower carried out on: (A) 31 January and (B) 1-2 February

Fig. 3 Temperature, solar, and ultraviolet radiation during the drying process of the Zompantle flower carried out on: (A) 7-8 February and (B) 9-10 February

Table 2 Physicochemical analysis of dehydrated Zompantle. MS: mesh shade; NC: natural convection; DM: direct mode; FC: forced convection
ORTHOPAEDIC TELEMEDICINE SERVICES DURING THE CURRENT NOVEL CORONAVIRUS PANDEMIC

ABSTRACT

Introduction: To evaluate the use of telemedicine by physicians specializing in orthopaedics and traumatology at the authors' institution, and to assess the rates of satisfaction and resolution for this type of care. The current global coronavirus disease 2019 (COVID-19) pandemic has resulted in the expansion of telemedicine services. However, quality measures and barriers for physicians dealing with the rapid increase in patients have not been well described. Materials and Methods: This study included 255 patients with orthopaedic complaints. Between 24 and 48 hours after the appointment, independent physicians, who did not participate in the initial appointment, contacted the patients to assess the degree of satisfaction with the appointment and whether there was a solution to the orthopaedic complaint. Results: There was a need for referral for face-to-face consultation in only 13.8% of cases. When asked about the probability of recommending telemedicine to a friend/family member, 90.3% of patients responded affirmatively. The satisfaction rate with the service was 91.1%, and 93.69% of patients would return for a telemedicine consultation. Telemedicine consultations solved the problem in 82.74% of cases. Conclusions: Telemedicine care in orthopaedics proved to be a service modality with a high rate of satisfaction among the patients evaluated. Level of evidence III, Retrospective cohort study.

INTRODUCTION

Shortly after the outbreak and rapid spread of coronavirus disease 2019 (COVID-19), the World Health Organization declared it a pandemic on March 11, 2020 1 . Governments around the world are quickly realising the impact of COVID-19 on healthcare services and the economy. Amid reports regarding the spread of the causative agent, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), there is also recognition that online tools, such as telemedicine, can play a critical role in the global response to this crisis. Telemedicine is well suited to the management of communicable diseases. A key factor in delaying the transmission of a virus is "social distancing" 2-5 , which is aimed at decreasing interpersonal contact. For patients with COVID-19, or those concerned about the possibility of being infected with SARS-CoV-2, telemedicine can help with remote assessment (screening) and the provision of initial care. For individuals not infected with SARS-CoV-2, especially those most at risk of being affected (e.g., elderly individuals with co-morbid or pre-existing medical conditions), telemedicine can provide convenient and remote access to routine care without the risk of exposure to hospital environments or waiting rooms in physicians' offices and outpatient clinics [6][7][8][9][10] . However, for telemedicine to be effective during the current COVID-19 pandemic and similar future events, we must ensure that the tool is properly integrated into existing health services. The purpose of this study was to evaluate the use of telemedicine in orthopaedics and traumatology consultations at our institution, and to assess the rate of satisfaction and resolvability in this type of service.

METHODOLOGY

This prospective, observational study included 300 patients with orthopaedic complaints, who were treated by physicians specialising in orthopaedics and trauma from the authors' institution, using the institution's telemedicine platform.
After the initial consultation, the physicians completed a questionnaire addressing items such as the diagnostic hypothesis, examinations requested, proposed treatment, whether there was a need for referral for face-to-face consultation, and total consultation time. In the period from 24 to 48 h after the appointment, independent physicians, who did not participate in the initial appointment, contacted the patients to assess the degree of satisfaction with the appointment and completed a questionnaire with questions including: age; the probability of recommending telemedicine to a friend/family member; how satisfied the patient was with the service; whether the patient would have a telemedicine consultation again; and whether the consultation solved the patient's problem. When the physicians made contact to evaluate the care, the consent form was sent by e-mail or cell phone message, and patients who agreed to participate accepted by e-mail or cell phone.

Inclusion criteria

Patients undergoing orthopaedic consultation using telemedicine during the study period.

Exclusion criteria

Patients who did not wish to participate or who could not be contacted for evaluation after care.

RESULTS

Of the 300 patients initially recruited, 255 were contacted. The mean age of the patients was 64.75 years (range, 15-95 years). The average consultation time was 22.65 min (range, 4-45 min). Summaries of the diagnostic hypotheses, requested examinations, and proposed treatments are presented in Tables 1 to 3, respectively. There was a need for referral for face-to-face consultation in only 13.8% of cases. When asked about the probability of recommending telemedicine to a friend/family member, the response was "yes" in 90.3%, and the satisfaction rate with the service was 91.1%. A total of 93.69% of patients would return for a telemedicine consultation. Of the 15 patients who would not return, the reasons were as follows: they would not participate in telemedicine consultation in the orthopaedics specialty (n = 7); only if there was no face-to-face consultation available (n = 5); only in cases of return visits (n = 2); and because physicians requested too many examinations in this type of care (n = 1). The consultation performed by telemedicine solved the problem in 82.74% of cases.

DISCUSSION

Due to the challenges imposed by the current coronavirus pandemic (i.e., COVID-19), we observed an increase in the use of telemedicine in orthopaedics. Current studies have reported that satisfaction rates with the use of telemedicine are comparable to those of face-to-face consultations, and patients who experience virtual consultation are more likely to seek this type of care in the future [11][12][13][14][15][16][17] . In our study, we obtained similar results, with a high rate of satisfaction, as well as a high percentage of patients who would return to use this type of care. Buvik et al. 11 conducted a randomised clinical trial involving 389 patients, of whom 86% preferred consultation by telemedicine in orthopaedics over in-person consultation, and 99% indicated that they were satisfied or very satisfied with consultation by telephone. Sinha et al. 16 conducted a non-randomised study in which they compared paediatric follow-up after fracture performed by telemedicine and in person. Although the satisfaction levels of the two groups were similar, telemedicine reduced the costs and time associated with consultation. In addition, only 8 of the 101 patients who were treated with telemedicine preferred the next consultation to be in person.
Bertani et al. 17 performed a prospective evaluation of paediatric orthopaedic consultations between 2009 and 2011, and found that consultation by telemedicine resolved 90% of diagnostic doubts, although the clinical outcome was reported to be "good" or "very good" in only 81% of patients. Haukipuro et al. 18 conducted a randomised clinical trial of orthopaedic services and found that the level of patient satisfaction was similar in the telemedicine (n = 76) and face-to-face (n = 69) consultation groups. In a study by Hurley et al. 19 , 268 orthopaedists were interviewed about the use of telemedicine: 84.8% of the surgeons were currently using telemedicine, but only 20.5% had used it before the COVID-19 pandemic. The satisfaction rate with the use of telemedicine ranged from 20.9% to 70.3%. Among those who used telemedicine, 75% used it for new patients, 86.6% for routine monitoring, and 80.8% for postoperative patients. Orthopaedists were more easily able to perform the physical examination in patients who were already being followed or who were in the postoperative period than in those undergoing a first consultation. Thus, they reported that after the COVID-19 pandemic, they intended to maintain telemedicine for these patients.

A very important point is which patients experience the greatest benefit from using telemedicine. In our study, 15 patients reported that they would not return to consultations by telemedicine in orthopaedics. Of these, 46.7% reported that they would not participate in the orthopaedics specialty, 33.3% only if there was no face-to-face consultation available, 13.3% only for return visits, and 6.7% (one patient) stated that they would not have a telemedicine consultation again because physicians ordered too many exams. To our knowledge, this was the first study to investigate the use of telemedicine in orthopaedics in our country that enables the assessment of the perceptions of patients who are subjected to this type of care. A limitation of this study was that there was no comparison with face-to-face consultations, not only to compare satisfaction and resolution rates, but also costs. An important question is whether orthopaedists actually request a greater number of tests in telemedicine services, given the limitations to physical examination that may exist. Because our study was performed during the COVID-19 pandemic, we did not perform a comparison with face-to-face consultation; however, we aim to do so in the near future. In addition, due to the characteristics of the beneficiaries of the health plan in our study, the average age was high. Among elderly patients, it may be more difficult to use telemedicine. However, there was a high rate of satisfaction among the patients who participated in our study, suggesting that advanced age is not necessarily a limiting factor for the use of this type of care.

CONCLUSION

Telemedicine care in orthopaedics proved to be a service modality with a high rate of satisfaction among the patients evaluated, and a high proportion would return to this type of care. Telemedicine demonstrated a high rate of resolvability without the need for referral for face-to-face consultation.
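The reported rates are point estimates over n = 255 respondents. As a rough gauge of their precision (a sketch, not an analysis performed in the study), a 95% Wilson score interval for the 91.1% satisfaction rate can be computed as follows.

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(0.911, 255)
print(f"satisfaction 91.1% (n = 255): 95% CI {lo:.3f}-{hi:.3f}")  # ~0.870-0.940
```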
Weak self-interactions of globular proteins studied by small-angle X-ray scattering and structure-based modeling

We investigate protein-protein interactions in solution by small-angle X-ray scattering (SAXS) and theoretical modeling. The structure factor for solutions of bovine pancreatic trypsin inhibitor (BPTI), myoglobin (Mb), and intestinal fatty acid-binding protein (IFABP) is determined from SAXS measurements at multiple concentrations, from Monte Carlo simulations with a coarse-grained structure-based interaction model, and from analytic approximate solutions of two idealized colloidal interaction models without adjustable parameters. By combining these approaches, we find that the structure factor is essentially determined by hard-core and screened electrostatic interactions. Other soft short-ranged interactions (van der Waals and solvation-related) are either individually insignificant or tend to cancel out. The structure factor is also not significantly affected by charge fluctuations. For Mb and IFABP, with small net charge and relatively symmetric charge distribution, the structure factor is well described by a hard-sphere model. For BPTI, with larger net charge, screened electrostatic repulsion is also important, but the asymmetry of the charge distribution reduces the repulsion from that predicted by a charged hard-sphere model with the same net charge. Such charge asymmetry may also amplify the effect of shape asymmetry on the protein-protein potential of mean force.

Introduction

Protein-protein interactions govern the functional assembly of supramolecular structures 1,2 as well as the dysfunctional aggregation of misfolded proteins. 3 Weak protein-protein interactions also determine the thermodynamics and phase behavior of concentrated protein solutions, 4 of relevance for optimizing protein crystallization 5 and for understanding how proteins behave in the crowded cytoplasm. 6 Fundamental progress in these areas requires a quantitative understanding of how proteins interact with each other in solution. Specifically, we need to know the effective solvent-averaged protein-protein interaction energy, or potential of mean force, w(r). Much of the available information about protein-protein interactions in solution has come from scattering experiments via the osmotic second virial coefficient, B22, and the structure factor, S(q). [7][8][9][10][11][12][13][14][15][16] Whereas B22 is an integral measure of the pair interaction, S(q) is the Fourier transform of the isotropically averaged protein-protein pair correlation induced by the interactions. 17 Extraction of w(r) from S(q) is a nontrivial problem without a unique solution. 18 Typically, a parameterized interaction model, w(r; a, b, . . .), is postulated and S(q) is computed by molecular simulation 9,11,13 or by an approximate integral equation theory. 7,8,10,12,[14][15][16] The model parameters a, b, . . . are then optimized by comparing the computed S(q) with that determined by small-angle X-ray (SAXS) or neutron (SANS) scattering. The interaction models used in this context may be classified as colloidal or structure-based. Colloidal interaction models are typically 7,8,10,[13][14][15] based on the Derjaguin-Landau-Verwey-Overbeek (DLVO) potential, 19 often complemented with phenomenological short-range contributions. 20 In the DLVO model, the protein is described as a uniformly surface-charged sphere embedded in a dielectric continuum.
Such highly idealized models have the virtue of simplicity but cannot do full justice to protein-protein interactions. [21][22][23][24][25][26] At short and intermediate protein-protein separations, the irregular shape and the discrete and asymmetric charge distribution of real proteins cannot be ignored. Structure-based interaction models explicitly incorporate such structural features, either at atomic resolution or at a coarse-grained level. For computational expediency, the solvent is treated as a dielectric continuum; solvation-related interaction terms of a phenomenological nature are therefore sometimes included in the model. While this approach has been used extensively to compute B22, 27-33 relatively few studies have reported S(q) calculations with structure-based interaction models. 29,31

Here we report the structure factor S(q), determined by SAXS, for aqueous solutions of three globular proteins: bovine pancreatic trypsin inhibitor (BPTI), equine skeletal muscle myoglobin (Mb), and rat intestinal fatty acid-binding protein (IFABP). To extract information about the protein-protein interactions, we use Metropolis Monte Carlo (MC) simulations to compute S(q) for these solutions based on a coarse-grained structure-based (CGSB) interaction model with the individual amino acid residues as interaction sites. 28 This implicit-solvent model incorporates excluded volume, van der Waals (vdW) attraction, and screened Coulomb interactions, and the charges of the ionizable residues are allowed to fluctuate. To gain further insight, we compare the experimental and CGSB S(q) with the (analytic) structure factors for two colloidal interaction models: the hard-sphere fluid in the Percus-Yevick (PY) approximation 34,35 and the hard-sphere Yukawa (HSY) fluid in the modified penetrating-background corrected rescaled mean spherical approximation (MPB-RMSA). 36,37

With only excluded volume and screened Coulomb interactions (no vdW attraction or other soft short-range interactions) and without any adjustable parameters, the CGSB model reproduces the experimental S(q) nearly quantitatively for all three proteins within the q range 0.5-3.0 nm−1 accessed by the MC simulations. For Mb and IFABP, which were examined near isoelectric pH, the hard-sphere model predicts essentially the same S(q) as does the CGSB model in this q range. For the more highly charged BPTI, neither the hard-sphere model nor the charged hard-sphere model can reproduce the experimental S(q). The implications of these findings are discussed.

SAXS Experiments

Protein solutions for SAXS measurements were prepared by dissolving lyophilized BPTI, Mb, or IFABP, purified and desalted as described, 38 in MilliQ water. After adjusting pH by adding HCl or NaOH, the solutions were centrifuged at 13 000 rpm for 3 min to remove any insoluble protein. No buffers were used, and the only electrolyte present is the counterions and a small amount of added salt (from pH adjustment) in the case of Mb. Relevant characteristics of the investigated protein solutions are summarized in Table 1. SAXS measurements were performed at the MAX-lab synchrotron beamline I911-4, equipped with a PILATUS 1M detector (Dectris). 41 The scattering vector range (q = (4π/λ) sin θ, where λ = 0.91 Å is the X-ray wavelength and 2θ is the scattering angle) was calibrated with a silver behenate sample. All measurements were performed on samples in flow-through cells at 20 °C with an exposure time of 1 min.
The effect of radiation damage did not exceed the experimental noise. Reported scattering profiles I(q) were obtained as the difference of the azimuthally averaged 2D SAXS images from protein solution and solvent (MilliQ water).

SAXS Data Analysis

For a solution of NP protein molecules of volume VP contained in a volume V, the scattering intensity I(q) in the decoupling approximation, where the orientation of a protein molecule is taken to be independent of its position and of the configuration of other protein molecules, can be factorized as [42][43][44]

I(q) = nP (VPΔρ)² P(q) S(q), (1)

where nP = NP/V is the protein number density, Δρ is the protein-solvent electron density difference (the scattering contrast), P(q) is the form factor, and S(q) is the structure factor. Because of the non-spherical protein shape, Eq. (1) should involve an effective structure factor S̄(q), which, however, differs insignificantly from S(q) under the conditions of the present study. The form factor represents the scattering from an isolated protein molecule, whereas the structure factor reflects intermolecular pair correlations,

P(q) = (VPΔρ)⁻² ⟨|∫ dr Δρ(r) exp(iq·r)|²⟩, (2)

S(q) = 1 + (1/NP) ⟨Σ_{j≠k} exp[iq·(Rj − Rk)]⟩. (3)

In Eqs. (2) and (3), ⟨. . .⟩ signifies an equilibrium configurational average. According to Eq. (1), the structure factor, S(q; nP), at a protein concentration nP can be obtained by dividing the concentration-normalized intensity, I(q; nP)/nP, by the same quantity measured at a sufficiently low concentration, n0P, that S(q; n0P) ≡ 1. We shall refer to I(q; n0P)/n0P = (VPΔρ)² P(q) as the apparent form factor (AFF). As described in more detail elsewhere, 38 the AFF for each protein was constructed by merging concentration-normalized SAXS profiles from two different protein concentrations (the highest and the lowest in Table 1) and by smoothing the merged profile. The low-q part of the AFF, where the SAXS profile is sensitive to protein-protein correlations, originates from the dilute solution with S(q) ≈ 1, whereas the high-q part, which reflects intraprotein correlations, is derived from a concentrated solution with better signal-to-noise.

CGSB Interaction Model and MC Simulation

In the CGSB interaction model, each amino acid residue (plus the terminal amino and carboxyl groups) is represented by an isotropic interaction site, placed at the center-of-mass of the corresponding residue in the crystal structure of the real protein (Fig. 1). (For simplicity, we shall refer to these interaction sites as residues.) The effective energy of interaction between residues i and j, separated by a distance rij, is taken to be

u(rij) = kBT λB zizj exp(−κrij)/rij + 4ε [(σij/rij)^12 − (σij/rij)^6] + δij(rC). (4)

The first term describes the electrostatic interaction in the Debye-Hückel approximation. Here, λB = 0.71 nm is the Bjerrum length for water at 20 °C, κ = (4πλB|ZP|nP)^{1/2} is the inverse Debye screening length determined by the counterions (no added salt) of the protein with net charge valency ZP, and zi = 0 or ±1 is the valency of residue i. The second term in Eq. (4), a Lennard-Jones (LJ) potential with well depth ε and σij = (σi + σj)/2, describes exchange repulsion and vdW attraction. The vdW diameter σi was fixed by the residue molar mass, Mi, according to σi = [6Mi/(πρ)]^{1/3} with ρ = 1 g mol−1 Å−3. (Varying the density ρ by ±20% has a negligible effect on the structure factor.) Finally, in the third term of Eq. (4), δij(rC) shifts the pair potential to zero at a spherical cut-off distance rC in the range 0.1-5 κ−1 (4.8-27.2 nm). Relevant characteristics of the simulated protein solutions are collected in Table 2.
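For illustration, Eq. (4) as reconstructed above can be evaluated numerically, with κ computed from the counterion-only expression κ = (4πλB|ZP|nP)^{1/2}. This is a minimal sketch; the protein charge, concentration, and σij values below are placeholders, not the values in Tables 1 and 2.

```python
import numpy as np

def inverse_debye_length(lam_B, Z_P, n_P):
    """kappa = sqrt(4*pi*lam_B*|Z_P|*n_P); lengths in nm, n_P in nm^-3."""
    return np.sqrt(4 * np.pi * lam_B * abs(Z_P) * n_P)

def pair_potential_kT(r, zi, zj, sigma_ij, kappa, lam_B=0.71, eps=0.005, r_cut=27.2):
    """Eq. (4) in units of k_B T: screened Coulomb + Lennard-Jones,
    shifted so that u(r_cut) = 0 (the delta_ij(r_C) term)."""
    def u_raw(x):
        dh = lam_B * zi * zj * np.exp(-kappa * x) / x
        lj = 4 * eps * ((sigma_ij / x) ** 12 - (sigma_ij / x) ** 6)
        return dh + lj
    return np.where(r < r_cut, u_raw(r) - u_raw(r_cut), 0.0)

kappa = inverse_debye_length(0.71, 7, 1.2e-3)  # placeholder Z_P and n_P
r = np.linspace(0.4, 3.0, 6)                   # separations (nm)
print(pair_potential_kT(r, zi=1, zj=-1, sigma_ij=0.6, kappa=kappa))
```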
MC simulations were performed at 293 K in the NVT ensemble with fluctuating protein charges (constant pH) using the Faunus framework. 48 The cubic simulation box, with periodic boundary conditions, contained NP = 500 rigid, coarse-grained protein molecules, and the box volume was adjusted to match the experimental protein concentrations (Table 2 and Fig. 2). Configurational space, that is, the position and orientation of each protein molecule and the protonation state of each ionizable group, was sampled by the conventional Metropolis algorithm 49 using the energy function

U = Σ_{i<j} u(rij) + kBT ln 10 Σ_n αn (pH − pK°a,n). (5)

In the first term, u(rij) is the pair potential from Eq. (4) and the double sum runs over all pairs of residues (in the same or in different protein molecules). In the second term, which ensures that the fluctuating charges conform to a Boltzmann distribution, 50,51 the sum runs over all ionizable residues and αn = 1 or 0 for residues in protonated and deprotonated forms, respectively. The intrinsic (in the absence of electrostatic interactions) pK°a,n was taken to be 3.8 (C-terminus), 4.0 (Asp), 4.4 (Glu), 6.3 (His), 7.5 (N-terminus), 9.6 (Tyr), 10.4 (Lys), or 12.0 (Arg). Shifts in the apparent acid dissociation constant, pKa,n, due to intramolecular and intermolecular electrostatic interactions are explicitly accounted for by the first term in Eq. (4). Charge fluctuations give rise to a short-ranged attractive protein-protein interaction. 52,53 During the simulation, the rigid protein molecules were subjected to combined mass-center translations and rotations (25 000 moves per protein molecule), while the protonation state of all ionizable residues was alternated between protonated and deprotonated forms (20 000 moves per protein molecule). Each production MC run was preceded by a tenfold shorter equilibration run. From the MC-generated ensemble of equilibrium configurations, we computed the average net protein valency, ZP = Σ_n zn (Table 2), and the isotropically averaged static structure factor, S(q). The latter was computed from the Debye formula, 42,43

S(q) = 1 + (2/NP) Σ_{i<j} sin(qRij)/(qRij), (6)

where the double sum runs over all unique protein mass-center separations, Rij. The q range of the calculated S(q) is limited to q > 0.5 nm−1 by the finite size of the simulation box.

Colloidal Interaction Models

Two colloidal interaction models were examined, both of which describe the protein as a spherical particle. In both cases, we used analytic expressions for S(q) obtained from approximate but accurate solutions of the Ornstein-Zernike integral equation. 17 For the hard-sphere fluid, where excluded volume is the only interaction, we used the PY approximation, 34,35 which is virtually exact for a hard-sphere fluid at the volume fractions of interest here. The HSY fluid includes, in addition to hard-core repulsion, a screened Coulomb (Yukawa) interaction between two uniformly charged spheres. For this model, we used the MPB-RMSA, 36,37 which yields S(q) in excellent agreement with simulations (for this model) over the full parameter space. 36,37 For convenience, we reproduce the analytic S(q) expressions for these two models in the Supporting Information (Secs. S1 and S2). As in the case of the CGSB model, we did not fit any of the parameters in the colloidal interaction models. The hard-sphere diameter, σP, was set to 2.46, 3.46, and 3.30 nm for BPTI, Mb, and IFABP, respectively, which reproduces the actual protein volumes, VP, of 7.79, 21.7, and 18.8 nm³, respectively, obtained from the molar mass and partial specific volume of these proteins. 54,55
The protein volume fraction, φP, and net valency, ZP, were set to the values given in Tables 1 and 2, respectively.

Structure Factor from SAXS

Excess (protein solution minus water) scattering profiles, I(q), were obtained from SAXS measurements on solutions of BPTI, Mb, and IFABP at several concentrations. In Fig. 3 we have divided I(q) by the protein molar concentration, CP, to remove the trivial concentration dependence (see Eq. (1)). As expected, I(q)/CP is independent of CP at high q, where intramolecular scattering dominates. At lower q values, I(q)/CP decreases with increasing CP, indicating predominantly repulsive protein-protein interactions. The structure factor, S(q), in Fig. 4 was obtained, as described in Sec. 2.2, by dividing I(q)/CP by the AFF, also shown in Fig. 3. Under certain solution conditions (high pH, high salt concentration), BPTI exists in an equilibrium between monomeric and decameric forms. 56,57 Since the pronounced minima at q = 1.5 and 2.9 nm−1 in the decamer form factor 38,56 are not evident in our SAXS profiles (Fig. 3a), we conclude that decamers are not present in our BPTI solutions. The large intensity increase at q ≲ 0.2 nm−1 seen in all IFABP profiles (Fig. 3c) can be explained by a small fraction (∼ 10−5) of the protein in large aggregates (effective diameter ∼ 10 σP). Rather than treating this structural heterogeneity explicitly, we incorporate the aggregate contribution in the AFF. To the extent that aggregation is concentration-dependent, this procedure may introduce artifacts in S(q) at q ≲ 0.2 nm−1. Apart from this anomaly in the IFABP profiles, the AFFs for all three proteins agree well with the form factors computed with the crysol program 58 from the corresponding crystal structures (Fig. 1).

Figure 4 also shows the structure factor predicted by the CGSB interaction model. This structure factor was computed from MC simulations at the experimental temperature, pH, and protein concentrations and with the structural model parameters determined by the protein crystal structures (Fig. 1). The only parameter that is not fixed by the protein structure is the LJ well depth ε (see Eq. (4)). Nominally, this parameter measures the strength of the average residue-residue vdW attraction across the aqueous solvent, but, in practice, it may also subsume short-range solvation-related interactions that are not explicitly accounted for in the CGSB model. For the CGSB calculations shown in Fig. 4, we have set ε = 0.005 kBT, corresponding to a negligibly weak apparent vdW interaction. (We cannot set ε = 0 since this parameter also scales the steep repulsive term in Eq. (4), which is essentially determined by the vdW contact separations, σij.)

Structure Factor from CGSB Model

The qualitative, and in some cases semi-quantitative, agreement found, in the q range (> 0.5 nm−1) accessed by the MC simulations, between the structure factors predicted by the CGSB model with ε = 0.005 kBT and measured by SAXS (Fig. 4) indicates that the solution structure can be fairly well described by an interaction model that only incorporates excluded volume and screened inter-residue Coulomb interactions. In other words, the vdW attraction and other short-range soft interactions are either individually negligibly weak or tend to cancel out.
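For illustration, the Debye-formula average in Eq. (6), as reconstructed above, can be applied to a single set of mass-center coordinates. This is a sketch only: in the actual analysis the sum is averaged over many MC configurations, and minimum-image separations would be used in the periodic box (omitted here for brevity).

```python
import numpy as np

def structure_factor(q, R):
    """Isotropic S(q) from the Debye formula, Eq. (6), for one configuration.
    q : array of wavenumbers (nm^-1); R : (N, 3) mass-center coordinates (nm)."""
    N = len(R)
    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    rij = d[np.triu_indices(N, k=1)]  # unique pair separations (i < j)
    qr = np.outer(q, rij)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(qr/pi) = sin(qr)/(qr)
    return 1.0 + (2.0 / N) * np.sum(np.sinc(qr / np.pi), axis=1)

q = np.linspace(0.5, 3.0, 6)                               # nm^-1
R = np.random.default_rng(0).uniform(0.0, 30.0, (100, 3))  # toy coordinates
print(structure_factor(q, R))
```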
A tenfold increase of the vdW attraction to ε = 0.05 kBT, as used in previous applications of the CGSB model, 28,30,59,60 has little effect on S(q) at q > 0.5 nm−1 for the two proteins (BPTI and Mb) with significant net charge (Fig. 5). In contrast, a large effect is seen for IFABP (Fig. 5), likely because the electrostatic repulsion close to the isoelectric pH (Table 2) is so weak that the protein molecules come into vdW contact more frequently.

[Figure 5 caption (fragment): ... Table 2 and with (ε = 0.05 kBT, dashed curves) or without (ε = 0.005 kBT, solid curves) vdW attraction. Also shown is S(q) for BPTI from a simulation with fixed charges and no vdW attraction (dots).]

The MC simulations with the CGSB model were carried out at constant pH. The protonation state of ionizable residues therefore undergoes thermal fluctuations and responds to the local electrostatic potential produced by charged residues in the same protein molecule and in nearby protein molecules. However, even for BPTI, which was studied at a pH where charge fluctuations are large (close to the pKa of carboxyl groups), the attractive electrostatic interaction produced by charge fluctuations 52,53 has a negligible effect on the structure factor (Fig. 5). For Mb and IFABP, which were studied near neutral pH where charge fluctuations are less pronounced, the effect of charge fluctuations on S(q) should be even smaller.

In the fluctuating-charge CGSB model, the protonation state of ionizable residues is affected by intramolecular and intermolecular electrostatic interactions. For all three proteins, the net protein charge, ZP, computed from this model (Table 2) is within one unit of the ZP value obtained with experimental pKa values (Table 1). We find that ZP depends weakly on protein concentration (Table 2). It might be expected that |ZP| should decrease in response to the increasing intermolecular electrostatic repulsion at higher protein concentration. But the opposite observed trend is due to the more effective screening of intramolecular electrostatic repulsion at higher protein concentration (the Debye screening length, κ−1, is controlled by the counterions).

Structure Factor from Colloidal Models

The preceding analysis with the CGSB interaction model indicates that the structure factor is governed mainly by excluded volume and screened electrostatic interactions. To assess the importance of the irregular shape and the inhomogeneous charge distribution of the proteins, we consider two colloidal models where the protein is described as a sphere. These models are conceptually simple and computationally convenient, since S(q) can be expressed in analytic form (see Secs. S1 and S2 in the Supporting Information). The first model is the hard-sphere fluid, where the only interaction is the hard-core repulsion and the diameter, σP, of the spherical protein is fixed by the requirement that the sphere has the same volume as the real protein (see Sec. 2.4). For IFABP the structure factor predicted by the hard-sphere model is virtually identical to that obtained with the CGSB model in the q range accessed by the MC simulations (Fig. 4c). For Mb the agreement between the two models is also good, although the hard-sphere S(q) is slightly displaced to larger q (Fig. 4b). For BPTI, on the other hand, the predictions of the two models differ markedly (Fig. 4a). For Mb and IFABP, the agreement between the two models indicates that shape asymmetry and charge inhomogeneity are unimportant under the examined solution conditions.
All three proteins have similar (spheroid) aspect ratios of 1.5-1.6, but neither this asymmetry nor the (coarse-grained) surface roughness appears to influence S(q) significantly. In contrast to this finding, model calculations of the osmotic second virial coefficient, B22, for several proteins indicate that while coarse-graining at the amino acid level (as in our CGSB model) has little effect (compared to an all-atom description), a hard-sphere model (with the same volume as the real protein) underestimates B22 by ∼ 35%. 61 The excellent agreement between the two models for IFABP can be further rationalized by the nearly zero net charge at the examined pH (Table 2). Thus, at least for this protein, the inhomogeneous distribution of discrete charges appears to be unimportant. Mb has a larger, but still small, net charge (Table 2), which may account for the slight shift of S(q) to smaller q values (corresponding to longer distances) when the longer-ranged electrostatic repulsion is accounted for (in the CGSB model). For BPTI at pH 4, where ZP ≈ +7, electrostatic repulsion suppresses S(q) more than hard-core repulsion alone does, and also shifts the onset of this suppression to smaller q values, as expected from the longer range of the electrostatic repulsion (Fig. 4a).

In a recent SAXS study of BPTI and Mb solutions, Goldenberg and Argyle found that the experimental structure factor for Mb (at pH 7) can be well described by a hard-sphere model. 16 While this conforms with our findings, it should be noted that these authors fitted both the hard-sphere diameter, σP, and the protein volume fraction, φP, to the SAXS data. For Mb, the fit yielded σP = 3.74 nm, 16 slightly larger than the experimentally based value of 3.46 nm used here. It should also be noted that the solvent used by Goldenberg and Argyle contained 1 M urea and 50 mM phosphate buffer. 16 Also for BPTI (at pH 7 with ZP ≈ +6), the hard-sphere model gave reasonable fits to the SAXS data, presumably because the buffer screened out most of the electrostatic interactions. 16 But the fitted hard-sphere diameter, σP, was found to depend strongly on the buffer type, indicating that specific ion binding affects the protein-protein interaction. 16

While we cannot compare the two models below q = 0.5 nm−1, since the MC simulations do not access this range, we can compare the hard-sphere model with the experimental structure factor. For Mb the experimental S(q) is slightly smaller than for hard spheres (Fig. 4b), consistent with a modest contribution from electrostatic repulsion. The more pronounced discrepancy seen for IFABP (Fig. 4c) can hardly be attributed to electrostatic repulsion, since IFABP has a smaller net charge than Mb. Possibly, the drop of S(q) below q = 0.5 nm−1 is an artifact of incorporating the effect of IFABP aggregation in the AFF (vide supra). For the more highly charged protein BPTI, the S(q) predicted by the hard-sphere model differs substantially from the experimental and CGSB-based structure factors (Fig. 4a). We therefore investigated another colloidal interaction model, the HSY fluid, with a screened Coulomb repulsion in addition to the hard-core repulsion. The HSY model thus includes the two dominant interactions in the CGSB model, but the protein is now described as a sphere with a uniform surface charge density.
As for the other models, we do not optimize the model parameters: the net charge, ZP ≈ +7, and the Debye screening length, κ−1, are taken from Table 2, and the diameter, σP = 2.46 nm, is fixed by the protein volume (see Sec. 2.4), as in the hard-sphere model. The structure factor for the HSY model is computed from the analytic MPB-RMSA integral equation approximation, which should be quantitatively accurate under our conditions. 36,37 As seen from Fig. 6a, the HSY model produces a too highly structured S(q). In other words, the electrostatic repulsion is too strong. The agreement with the experimental S(q) can be improved by reducing the net charge (Fig. 6b), but this ad hoc modification is difficult to justify. Since the MPB-RMSA approximation should be accurate, we conclude that the HSY model is responsible for the discrepancy. Specifically, we infer that the inhomogeneous charge distribution of the real protein produces a weaker (orientationally averaged) electrostatic repulsion than the same net charge distributed uniformly on a spherical surface. Indeed, the crystal structure of BPTI reveals a pronounced charge asymmetry, with all the negatively charged carboxylate groups confined to one half of the molecule (Fig. 1a). For the real protein, the electrostatic interaction should therefore be attractive for certain relative orientations, so that the effective orientationally averaged potential of mean force, w(r), becomes less repulsive. 62 This anisotropy of the screened electrostatic interaction should also amplify the effect on S(q) of shape asymmetry by favoring close approach of two protein molecules for relative orientations with favorable electrostatic interaction. This coupling of excluded-volume and electrostatic interactions in the potential of mean force, w(r), may be responsible for the observed shift of S(q) to smaller q (larger separations) and the suppressed peak in S(q), relative to the HSY structure factor (Fig. 6). Such effects should be less pronounced for Mb and IFABP, not only because they have smaller net charge, but also because the discrete charge distribution is less asymmetric than for BPTI (Fig. 1). The HSY structure factors for Mb and IFABP indeed show good agreement with the experimental and CGSB S(q), to the same extent as the hard-sphere model (Fig. 4), at high q (≳ 0.5 nm−1) where the coupling effect is expected to play an important role (Fig. S1 in Supporting Information). Not surprisingly, the charge in the HSY model leads to highly repulsive interactions, as in the case of BPTI (Fig. 6a), and the model diverges from the experiment at lower q for moderately charged Mb (Fig. S1). To examine the effect of charge and shape asymmetry on the electrostatic contribution to the potential of mean force, we performed CGSB MC simulations with only two BPTI molecules at fixed mass-center separation and at constant pH. From the sampled orientational configurations, we calculated the orientation-averaged total (residue-based) electrostatic interaction energy between the two molecules and the intermolecular ion-ion interaction energy (Fig. 7). Note that the CGSB model incorporates both charge and shape asymmetry. As seen from Fig. 7, the total electrostatic repulsion is weaker than the ion-ion repulsion at short intermolecular separations, where charge and shape asymmetry are expected to be important (vide supra).
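A minimal sketch of the comparison behind Fig. 7 is given below: the residue-based intermolecular electrostatic energy versus the monopole (net-charge) estimate for two rigid charge sets. It assumes a screened Coulomb (Yukawa-type) residue-residue term with the 4.37 nm Debye length used in the two-body simulation (Fig. 7); the Bjerrum length value and the randomly generated coordinates and charges are illustrative assumptions, not the CGSB model itself.

```python
import numpy as np

LB_NM = 0.71          # Bjerrum length in water at 25 C, in nm (assumed value)
KAPPA = 1.0 / 4.37    # inverse Debye length (nm^-1), cf. the 4.37 nm quoted for Fig. 7

def energy_residue_based(z1, r1, z2, r2, lB=LB_NM, kappa=KAPPA):
    """U/kT = lB * sum_ij z_i z_j exp(-kappa*r_ij)/r_ij over intermolecular pairs
    (setting kappa = 0 recovers the bare Coulomb sum written in the Fig. 7 caption)."""
    d = np.linalg.norm(r1[:, None, :] - r2[None, :, :], axis=-1)  # (n1, n2) distances
    return lB * np.sum(z1[:, None] * z2[None, :] * np.exp(-kappa * d) / d)

def energy_monopole(z1, z2, R12, lB=LB_NM, kappa=KAPPA):
    """Monopole estimate: U/kT = lB * (sum_i z_i) * (sum_j z_j) * exp(-kappa*R12)/R12."""
    return lB * z1.sum() * z2.sum() * np.exp(-kappa * R12) / R12

# Toy input: two copies of a 58-residue charge set (BPTI has 58 residues),
# randomly placed, with mass centers 6 nm apart.
rng = np.random.default_rng(1)
z = rng.choice([-1.0, 0.0, 1.0], size=58)
r1 = rng.normal(scale=0.8, size=(58, 3))
r2 = rng.normal(scale=0.8, size=(58, 3)) + np.array([6.0, 0.0, 0.0])
print(energy_residue_based(z, r1, z, r2), energy_monopole(z, z, 6.0))
```

Averaging the residue-based energy over sampled relative orientations, as in the two-body MC runs, is what produces the orientation-averaged curves compared in Fig. 7.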
Conclusions

From SAXS experiments at multiple protein concentrations, we have determined the structure factor for the three globular proteins BPTI, Mb, and IFABP. Information about the protein-protein potential of mean force, averaged over relative protein orientations and solvent configurations, was derived from the experimental structure factors with the aid of several interaction models. For a structure-based interaction model coarse-grained to the amino-acid residue level, we computed the structure factor by MC simulation. For the hard-sphere and HSY models, the structure factor was obtained from accurate integral equation approximations. The parameters in these interaction models were fixed by the known properties of the protein solutions, rather than being optimized for agreement with the SAXS data. For these proteins and under the investigated solution conditions, we find that the structure factor can be accounted for by excluded-volume and screened electrostatic interactions, with no need to invoke other short-ranged, soft interactions, such as vdW attraction, hydrophobic and other solvent-related interactions. We cannot exclude the possibility that the effects on the structure factor of some of these apparently unimportant interactions tend to cancel out. For Mb and IFABP, with small net charge, the structure factor is well described by a hard-sphere model, even though these proteins are non-spherical (aspect ratio 1.5-1.6) and contain many charged residues. For BPTI, with larger net charge, screened electrostatic repulsion is important, but it is weaker than predicted by a HSY model. The reduction of the electrostatic repulsion may be a result of the pronounced asymmetry of the surface charge distribution for this protein, which tends to favor protein-protein encounters with less repulsive electrostatic interactions. The MC simulations were performed at constant pH and therefore allow for thermal fluctuations in the protonation state of ionizable residues. Such charge fluctuations do not, however, have a significant effect on the protein-protein potential of mean force under the conditions investigated here.

Figure 6. S(q) for BPTI from the HSY model with the net charge taken from Table 2 (a) or set to +2 (b). The experimental S(q) is only shown up to q = 2 nm−1; at higher q the noise amplitude exceeds any deviation from S(q) = 1.

Figure 7. Orientation-averaged electrostatic energy, UE, as a function of mass-center separation, R12, between two BPTI molecules, evaluated exactly as λB Σi Σj zi zj/rij (black), where residues i and j belong to different molecules, and by treating the two proteins as monopoles, λB (Σi zi)(Σj zj)/R12 (red). The averaging was based on configurations from a two-body MC simulation at pH 4.1 and a Debye length, κ−1, of 4.37 nm (cf. Table 2).

S1. Hard-Sphere Fluid

For a fluid of identical hard spheres of diameter σ, the pair interaction energy is
$$\beta u(x) = \begin{cases} \infty, & x < 1, \\ 0, & x \ge 1, \end{cases} \tag{S1}$$
where x = r/σ is the reduced inter-particle separation. For this model, the pair correlation function (PCF), g(x), obeys the exact condition
$$g(x) = 0, \qquad x < 1, \tag{S2}$$
which simply expresses the impenetrability of the hard spheres. According to the Percus-Yevick (PY) approximation, 63 the direct correlation function, c(x), is related to the PCF and the pair potential as
$$c(x) = \left[1 - \mathrm{e}^{\beta u(x)}\right] g(x). \tag{S3}$$
For the hard-sphere model in Eq. (S1), this implies that
$$c(x) = \begin{cases} -y(x), & x < 1, \\ 0, & x \ge 1, \end{cases} \tag{S4}$$
where the function y(x) ≡ e^{βu(x)} g(x) remains continuous across the hard-sphere boundary. For the hard-sphere fluid, the approximate PY closure in Eq. (S4) allows the formally exact Ornstein-Zernike (OZ) integral equation 17 to be solved analytically. 34
The resulting structure factor 35 is a function of the reduced wavevector Q ≡ qσ and the particle volume fraction φ = nPπσ³/6, as given by Eqs. (S5) and (S6); it involves the functions a0(φ) and b0(φ) defined in Eq. (S7), with a0(φ) = 1 + 2φ. This analytic result is highly accurate up to volume fractions φ ≈ 0.35.

S2. Hard-Sphere Yukawa Fluid

Solutions of charged colloidal particles or proteins are often modeled as a one-component macrofluid composed of charged hard spheres in a uniform neutralizing background medium. Apart from their excluded volume, the particles are taken to interact with a screened Coulomb (Yukawa) potential (Eq. (S9)), where x ≡ r/σ. Furthermore, γ is a dimensionless coupling constant and k is a dimensionless screening parameter; these are given by Eqs. (S10) and (S11), in which Z is the net protein charge (in units of e), nP is the protein number density, φ = nPπσ³/6 is the protein volume fraction, and nS is the number density of monovalent salt. The number density of counterions, also assumed monovalent, is nP|Z|. For the analysis of SAXS data, it is more convenient to use the completely analytic formulation presented by Cummings et al. 37,67-69 In this so-called Wiener-Hopf factorization approach, a complex-valued function F(Q) is defined, with Q ≡ qσ and F(−Q) = [F(Q)]* for real Q, such that the structure factor, S(Q), can be expressed on the form of Eq. (S5). The function F(Q) is related to the Fourier transform of another function F(x), as expressed by Eq. (S14). For the HSY model in the MSA approximation, the function F(x) is given by Eq. (S15), 69 where k is defined by Eq. (S11) (Cummings' earlier papers 67,68 give this function incorrectly). The quantities a, b, d, and β appearing in F(x) are functions of the system parameters γ, k, and φ. Combining Eqs. (S14)-(S16) and performing the integral, one obtains F(Q) in terms of the functions Ga(Q), Gb(Q), Ha(Q), and Hb(Q) given by Eqs. (S8) and (S18). The quantities a and b are given by 37,67,68 a = a0 + 12φβ and the corresponding expression for b, with a0 and b0 as defined in Eq. (S7). The dependence on the coupling constant γ enters via a quantity that involves δ = 12φ and the (non-negative) coupling strength parameter K. Finally, β is one of the four roots of a quartic equation; the desired root reduces to the PY solution in the limit K → 0 and, in the limit φ → 0, it yields the known result. 67,68 We obtain an analytic expression for the desired root. In the corresponding expressions of Heinen (which define Re F(q) and Im F(q)), b appears in place of k; Heinen also introduces a quantity f, which is unnecessary since f = (1 − d) exp(k). Furthermore, all other authors define K with the opposite sign to that in Eq. (S32).

The MSA solution of the HSY model is accurate (as compared to Monte Carlo simulations of the same model) for weakly charged macroions at relatively high volume fractions. But for highly charged macroions and/or at low volume fractions, the MSA produces unphysical results; specifically, the contact PCF, g(σ), becomes negative. Various schemes have been proposed to improve the MSA. The basic idea is that, under the conditions where the MSA fails, the macroions are almost always so far apart (because the volume fraction is low and/or because of strong electrostatic repulsion) that the actual hard-sphere diameter σ has no effect on S(q). It is therefore possible to increase σ to a larger value σ′ so that g(σ′) remains non-negative. Specifically, σ′ is chosen so that g(σ′; φ′) = 0, where φ′ = φ(σ′/σ)³ is the rescaled volume fraction. (The volume fraction increases because the particle size is increased at constant particle number density nP.)
This approach is called the rescaled MSA (RMSA). 70 Comparison with computer simulations shows that even the RMSA is not accurate for strongly repulsive macroions (high charge and/or low salt concentration). In particular, the RMSA tends to underestimate the local ordering, yielding a too small principal peak in S(q) (and in g(r)) and a too large osmotic compressibility, S(0). It was shown that the accuracy of the RMSA can be further improved by redefining the model parameters γ and k to correct for the fact that, in the one-component macrofluid model (of which the HSY model is a special case), the counterions are treated as a uniform background medium that penetrates the macroion and therefore reduces its effective charge. This scheme is called the penetrating-background corrected RMSA (PB-RMSA). 71 A further improvement, yielding a structure factor, S(q), in excellent agreement with Monte Carlo simulations in the full parameter space, was obtained with a modified PB-RMSA (MPB-RMSA) scheme. 36 This MPB-RMSA scheme involves the following steps: 36 (1) specify the true model parameters σ, φ, γ, and k, with γ given by Eq. (S10) and k by a modified version of Eq. (S11).

Figure S1. HSY structure factors for Mb and IFABP under the conditions in Table 2. The experimental S(q) is only shown up to q = 2 nm−1; at higher q the noise amplitude exceeds any deviation from S(q) = 1.
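As an aside for readers implementing Sec. S1, the analytic PY structure factor can equivalently be evaluated from the closed-form Fourier transform of the PY direct correlation function (the Wertheim-Thiele solution), rather than via the Wiener-Hopf factorization. The sketch below is a minimal, self-contained example of that standard form, not the authors' code; the diameter and volume fraction in the usage lines are illustrative.

```python
import numpy as np

def py_hardsphere_sq(q, sigma, phi):
    """Percus-Yevick S(q) for hard spheres (Wertheim-Thiele closed form).

    q: array of wavevectors (> 0, inverse units of sigma); sigma: diameter;
    phi: volume fraction (the PY result is accurate up to phi ~ 0.35)."""
    Q = np.asarray(q, dtype=float) * sigma
    alpha = (1.0 + 2.0 * phi) ** 2 / (1.0 - phi) ** 4
    beta = -6.0 * phi * (1.0 + 0.5 * phi) ** 2 / (1.0 - phi) ** 4
    gamma = 0.5 * phi * alpha
    s, c = np.sin(Q), np.cos(Q)
    # rho*c(Q): analytic transform of c(x) = -(alpha + beta*x + gamma*x^3), x = r/sigma < 1
    rho_cq = -24.0 * phi * (
        alpha * (s - Q * c) / Q**3
        + beta * (2.0 * Q * s + (2.0 - Q**2) * c - 2.0) / Q**4
        + gamma * (-(Q**4) * c + 4.0 * Q**3 * s + 12.0 * Q**2 * c
                   - 24.0 * Q * s - 24.0 * c + 24.0) / Q**6
    )
    return 1.0 / (1.0 - rho_cq)  # OZ relation: S(q) = 1/[1 - rho*c(q)]

# Illustration with the BPTI sphere diameter quoted above and an assumed phi:
q = np.linspace(0.1, 10.0, 500)        # nm^-1
S = py_hardsphere_sq(q, sigma=2.46, phi=0.10)
print(S[0], S.max())                   # S(q -> 0) < 1; weak structure at low phi
```

A quick consistency check of the coefficients: in the q → 0 limit the expression reduces to S(0) = (1 − φ)⁴/(1 + 2φ)², the known PY compressibility result.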
2014-08-28T10:20:53.000Z
2014-08-13T00:00:00.000
{ "year": 2014, "sha1": "37119bbf35f785b78c845f8fb871e7d3358727d8", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://doi.org/10.1021/jp505809v", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "e232dcb670c2f5c3d935e541e1b6eeffad1ba3bf", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Biology", "Medicine", "Chemistry" ] }
90614913
pes2o/s2orc
v3-fos-license
A novel fluorescent probe for imaging the process of HOCl oxidation and Cys/Hcy reduction in living cells

A new on-off-on fluorescent probe, CMOS, based on coumarin was developed to detect the process of hypochlorous acid (HOCl) oxidative stress and cysteine/homocysteine (Cys/Hcy) reduction. The probe exhibited a fast response, good sensitivity and selectivity. Moreover, it was applied for monitoring the redox process in living cells.

Reactive oxygen species (ROS) are indispensable products and are closely connected to various physiological processes and diseases. 1 For instance, endogenous hypochlorous acid (HOCl), one of the most important ROS, which is mainly produced from the reaction of hydrogen peroxide with chloride catalyzed by myeloperoxidase (MPO), is a potent weapon of the immune system against invading pathogens. 2,3 However, excess production of HOCl may also give rise to oxidative damage via oxidation or chlorination of biomolecules. 4 The resulting imbalance of cellular homeostasis can trigger serious pathogenic mechanisms in numerous diseases, including neurodegenerative disorders, 5 renal diseases, 6 cardiovascular disease, 7 and even cancer. 8 Fortunately, cells possess an elaborate antioxidant defense system to cope with oxidative stress. 9 Therefore, it is necessary and urgent to study the redox processes between ROS and antioxidant biosystems. Fluorescence imaging has been regarded as a powerful visual methodology for studying various biological components owing to its high sensitivity, good selectivity, low invasiveness and real-time detection. 10,11 To date, numerous small-molecule fluorescent probes have been reported for the detection and visualization of HOCl in vivo and in vitro. 12-22,29 The design strategies of HOCl-sensitive probes are based on various HOCl-reactive functional groups, such as p-methoxyphenol, 13 p-alkoxyaniline, 14 dibenzoyl-hydrazine, 15 selenide, 16 thioether, 17 oxime, 18 hydrazide, 19 and hydrazone. 20 However, many of these probes display a delayed response time and low sensitivity, and only a few fluorescent probes can be applied to investigate changes of intracellular redox status. 21 Besides, it is worth noting that most redox-responsive fluorescent probes rely on organoselenium compounds. 22 Even though such probes are well suited for the detection of cellular redox changes, excessive organic selenium is harmful to organisms, and the synthesis of organoselenium compounds is demanding and costly. Additionally, almost all reports have only investigated the reducing effect of glutathione (GSH) as the antioxidant in redox events, while there are two other important biothiols, cysteine (Cys) and homocysteine (Hcy), which not only act as vital antioxidants but are also tightly related to a wide variety of pathological effects in biosystems, such as slowed growth, liver damage, skin lesions, 23 cardiovascular disease, 24 and Alzheimer's disease. 25 However, fluorescent probes for specifically studying internal redox changes between HOCl and Cys/Hcy are rarely reported. In this respect, a novel redox-responsive fluorescent probe, CMOS, was designed and synthesized in this work, and we hope that it can be a potential tool for studying their biological relevance in living cells. Based on a survey of the literature, the aldehyde group has excellent selectivity in the identification of Cys/Hcy, and the sulfur atom in methionine can be easily oxidized to sulfoxide and sulfone by HOCl.
26,27 Considering these two points, we utilized 2-mercaptoethanol to protect the 3-aldehyde of 7-diethylamino-coumarin as the recognition part for HOCl, meaning that two kinds of potential recognition moieties are merged into one site. The fluorescent probe CMOS can be easily synthesized by an acetal reaction in one step (Scheme S1†). A control molecule, CMOS-2, was similarly prepared from 3-acetyl-7-diethylaminocoumarin (CMAC). The structures of all these compounds were confirmed by 1H NMR, 13C NMR, and HR-MS (see ESI†). As shown in Scheme 1a, we anticipated that both CMOS and CMOS-2 can be rapidly oxidized in the presence of HOCl. The oxidation product CMCHO of CMOS, which bears the aldehyde moiety, can further react with Cys/Hcy to give the final products CMCys and CMHcy, respectively. In contrast, the oxidation product CMAC of CMOS-2 cannot combine with Cys/Hcy or other biothiols (Scheme 1b). In order to confirm our design concept, the basic photophysical characteristics of CMOS, CMCHO, CMOS-2 and CMAC were tested (Table S1, Fig. S1†). Under excitation at 405 nm, CMOS and CMOS-2 exhibited strong fluorescence centred at 480 nm in PBS buffer solution, while the fluorescence of CMCHO and CMAC was weak around this band. The emission properties of CMOS and CMCHO were also investigated at an excitation wavelength of 448 nm under the same experimental conditions (Fig. S2†). After careful consideration, we chose 405 nm as the excitation wavelength in the follow-up experiments in vitro and in vivo. Next, the sensitivity of CMOS and CMOS-2 to HOCl and Cys/Hcy was investigated. As we expected, both CMOS and CMOS-2 exhibited a good response to HOCl. The fluorescence intensity of CMOS and CMOS-2 decreased gradually with the addition of NaOCl (Fig. 1a, S3a†), indicating that the fluorescence was switched off in the presence of HOCl. The variation of intensity displayed good linearity with the concentration of HOCl in the range of 0-20 μM (R² = 0.993, Fig. S4†), and the detection limit of CMOS for HOCl was calculated to be 21 nM (S/N = 3). Subsequently, when Cys/Hcy was added to the final solution in Fig. 1a, the fluorescence intensity increased gradually within 180 min (Fig. 1b, S5†). However, the fluorescence could not be recovered by the addition of thiols to the CMOS-2 solution containing excess HOCl (Fig. S3b†). These results indicate that the probe CMOS can respond to HOCl and Cys/Hcy in a fluorescence on-off-on manner and can be used for monitoring the redox process with high sensitivity. To further identify the recognition mechanism of probe CMOS, high performance liquid chromatography (HPLC) and mass spectral (MS) analysis were used to follow the redox process. Initially, probe CMOS displayed a single peak with a retention time of 3.7 min (Fig. 2a, S6†), while the reference compound CMCHO produced a single peak with a retention time of 2.5 min (Fig. 2b, S7†). Upon the addition of HOCl to the solution of CMOS, the peak at 3.7 min weakened while peaks at 2.5 min and 2.2 min appeared (Fig. 2c). According to the corresponding mass spectra, the new main peak at 2.5 min corresponds to compound CMCHO (Fig. S8†). The other new peak, at 2.2 min, corresponds to compound C3, which can be assigned as an intermediate in the oxidation process (Fig. S8†). 28 The addition of Cys to the solution of CMCHO also produced a new peak with a retention time of 2.1 min, which was confirmed to be the thioacetal product CMCys (Fig. S9†). The possible sensing mechanism is depicted in Fig. S10.†
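For readers reproducing the calibration above, the 21 nM detection limit follows the common 3σ/slope convention applied to the linear titration: fit intensity versus [HOCl], take the standard deviation of the blank signal, and divide. The sketch below illustrates the arithmetic only; the titration readings and blank standard deviation are placeholder values, not the reported data.

```python
import numpy as np

# Hypothetical calibration in the linear range (placeholder readings):
# fluorescence intensity of the probe versus [HOCl].
conc_uM = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])
intensity = np.array([1000.0, 905.0, 770.0, 540.0, 310.0, 80.0])

slope, intercept = np.polyfit(conc_uM, intensity, 1)  # linear calibration fit
sd_blank = 0.32                                       # placeholder: std. dev. of blank
lod_uM = 3.0 * sd_blank / abs(slope)                  # LOD = 3*sigma/|slope| (S/N = 3)
print(f"slope = {slope:.1f} a.u. per uM, LOD = {lod_uM * 1e3:.0f} nM")
```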
To study the selectivity of CMOS towards HOCl, we measured the fluorescence response to different reactive oxygen species (ROS), reactive nitrogen species (RNS) and reactive sulfur species (RSS). As shown in Fig. 3a, CMOS exhibited a significant change of fluorescence intensity only in the presence of HOCl, while other ROS and RNS, such as singlet oxygen (1O2), hydrogen peroxide (H2O2), hydroxyl radical (HO•), superoxide anion (O2•−), nitric oxide (NO), tert-butyl hydroperoxide (t-BuOOH) and tert-butoxy radical (t-BuOO•), caused no obvious changes in the fluorescence emission. Additionally, RSS, which are abundant in biological samples, showed no influence in this process under identical conditions. The detection of the reducing process was also investigated. As displayed in Fig. 3b, only cysteine and homocysteine induced excellent fluorescence recovery compared with other reducing materials, such as RSS and various amino acids. Furthermore, the selectivity of CMOS-2 was studied under the same conditions. As expected, CMOS-2 could selectively detect HOCl, and its fluorescence intensity was not altered by various kinds of biothiols (Fig. S11†). Therefore, our design strategy for the on-off-on probe is confirmed by the results obtained above, with which CMOS can be utilized for detecting the redox process between HOCl and Cys/Hcy with high selectivity.

Scheme 1. Proposed reaction mechanism of CMOS and CMOS-2 with HOCl and Cys/Hcy.

Subsequently, the influence of pH on probe CMOS was measured. The fluorescence intensities of CMOS and CMCHO show no significant variation over a wide pH range (pH = 4-11, Fig. S12a†). Fluorescence intensity changes could be observed immediately when HOCl was added to the solution of probe CMOS, especially under alkaline conditions (Fig. 4a). Considering that the pKa of HOCl is 7.6, 29 CMOS is responsive to both HOCl and OCl−. Alkaline conditions also benefited the fluorescence recovery of CMOS by Cys/Hcy (Fig. S12b†). It is reasonable to consider that the thiol sulfur displays higher nucleophilicity under alkaline conditions. In the stopped-flow test, the UV-visible absorbance of probe CMOS sharply decreased at a wavelength of 400 nm (Fig. 4b). The response time was within 10 s, and the kinetics of the reaction were fitted to a single exponential function (kobs = 0.67 s−1). This ability to respond instantaneously is essential for intracellular HOCl detection. With these data in hand, we next applied CMOS to fluorescence imaging of the redox changes with HOCl and Cys/Hcy in living cells. After incubation with 5 μM CMOS at 37 °C for 30 min, intense fluorescence of the SKVO-3 cells was observed in the optical window 425-525 nm (Fig. 5a and d), indicating that the probe can easily penetrate into cells. Treating the cells with 100 μM NaOCl led to remarkable fluorescence quenching as the probe sensed the HOCl-induced oxidative stress (Fig. 5b and e). After 3 min, the cells were washed with PBS buffer three times, and 5 μM Cys/Hcy was added for 1 h, respectively. The fluorescence then recovered markedly (Fig. 5c and f). These experimental results clearly show that the probe CMOS can be used to detect the process of HOCl oxidative stress and Cys/Hcy reducing repair in living cells.

Conclusions

In this work, a novel on-off-on fluorescent probe was reported for the highly selective detection of HOCl oxidative stress and Cys/Hcy reducing repair in vivo and in vitro. The probe CMOS can be easily synthesized and displayed high sensitivity, fast response, and high selectivity.
Cell images indicated that CMOS is capable of sensing the redox changes between HOCl and Cys/Hcy. The results show that the probe CMOS would be a potential tool to study oxidative damage and biothiol repair in biological and medical research.

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

The authors thank the research team from the School of Basic Medical Sciences, Zhengzhou University, for providing SKVO-3 cells used in this work.
2019-04-02T13:13:28.549Z
2018-02-28T00:00:00.000
{ "year": 2018, "sha1": "58dff89321bb6953ea7a0be46a4dfb1de959bd53", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/ra/c7ra13419c", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0e7725b150a69355c86444409b271b65d9b946c", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
52836185
pes2o/s2orc
v3-fos-license
Editorial: preclinical data reproducibility for R&D - the challenge for neuroscience

The inability to reproduce published findings has been identified as a major issue in science. Reports of only a low percentage of landmark studies being reproduced at pharmaceutical companies like Bayer (Prinz et al. 2011) gained much interest in the scientific community and raised high levels of concern. A more recent analysis from Amgen (Begley and Ellis 2012) suggested that those non-reproducible studies may have an even stronger impact on the field than those that can be reproduced, possibly because the more remarkable and exciting findings are reported in higher impact journals. Evidently, this is not just a problem of the pharmaceutical industry. About half of respondents from faculty and trainees at the academic MD Anderson Cancer Center, Houston, Texas, had experienced at least one episode of inability to reproduce published data in a survey by Mobley et al. (2013), and comparable figures may be expected in neuroscience.

Thomas Steckler

Why worry?

Insufficient data reproducibility and integrity is a major concern, not only from a pure scientific perspective, but also because of potentially serious financial, legal and ethical consequences. It is currently estimated that up to 85% of resources are wasted in science (Chalmers and Glasziou 2009; Macleod et al. 2014). Investigational costs for a single case of misconduct may be in the range of US$ 525,000, amounting to annual costs exceeding US$ 100 MM for the US alone (Michalek et al. 2010). Such figures clearly contribute to a genuine dissatisfaction about the situation, also in the public domain, where questions are raised on whether government spending on biomedical research is still justified (The Economist 2013). In response, bodies like the Wellcome Trust or the Science Foundation Ireland implemented formal audit processes to combat misconduct and misuse of taxpayers' money (Van Noorden 2014; Wellcome Trust 2013), and some research institutions where employees were directly involved in misconduct took drastic steps, including major re-organizations that affected large proportions of their staff (Normile 2014). Consequently, more transparency in the reporting of preclinical data has been requested and best practices in experimental design and reporting proposed (Ioannidis 2014; Landis et al. 2012), and in fact urgently required!
The magnitude of the problem is further illustrated by a steep rise in retracted publications over recent years, with a high percentage suggested to be due to misconduct (fabrication and falsification, plagiarism or self-plagiarism) and more than 10% due to irreproducible data (Van Noorden 2011). The issue is not limited to published studies, although here the impact on the wider scientific community is possibly most severe. Problems were also observed in contract labs working for the pharmaceutical industry (Nature Medicine Opinions 2013; Selyukh and Yukhananov 2011), and industry itself is not without fault (e.g., Cyranoski 2013). The potential consequences for the pharmaceutical industry are major and range from delays in drug development to potential retraction of drugs from the market, let alone the potential risks to human volunteers and patients. This issue of reproducibility is highlighted against a background of increasing globalization of science and outsourcing activities by the pharmaceutical industry, with estimates that more than 30% of the annual business expenditure of pharma R&D in the US is spent on external research (Moris and Shackelford 2014) and projections that the global preclinical outsourcing market is still expanding, possibly more than doubling in growth from 2009 to 2016 (Mehta 2011). Whilst there are many advantages to externalizing research, it also means people have to rely more on data generated by third parties, which may themselves feel obliged to deliver what they think is expected by their customers. Furthermore, dealing with data from an external source adds an additional level of complexity to the already complex issue of data quality assurance. Conversely, in academia there is increasing pressure to deliver publications in order to be successful in the next grant acquisition (and as such future employment) or, one may argue, to be an interesting partner for industry.

What are the issues at hand?

Partly driven by the situation of dwindling funding, many investigators are attracted to work in emerging and 'hot', but also very complex and competitive, fields of science and like to use the most recent technology and innovative experimental designs. By taking this interesting approach, which may yield a lot of novel insights, there is a greater likelihood of receiving more favourable reviews of grant applications as well, especially as many grant schemes emphasize innovation rather than other aspects, such as reproducibility. Moreover, studies may get published more rapidly, often in so-called high impact journals, even if rather small and underpowered, and, in this context, it may be more acceptable that reported effect sizes are small. However, all these factors diminish the positive predictive value of a study, i.e., the likelihood that results are true positives (Button et al. 2013; Ioannidis 2005). This issue is by no means limited to preclinical work or in vivo behavioural studies. It is also a concern for biomarker studies that play pivotal roles in drug discovery (Anderson and Kodukula 2014) and the many small explorative, clinical proof-of-concept studies often used to come to go/no-go decisions on drug development programs. Often there is also an uncritical belief in p-values; over-reliance on highly significant, but also variable, p-values has been considered to be another important factor contributing to the high incidence of non-replication (Lazzeroni et al. 2014; Motulsky 2014; Nuzzo 2014).
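The link between statistical power, the significance threshold, the prior odds of a true effect, and the positive predictive value (Ioannidis 2005; Button et al. 2013) can be made explicit in a few lines of arithmetic; the sketch below is purely illustrative, and the prior probability of 10% is an assumed value.

```python
def ppv(power, alpha, prior):
    """P(effect is real | significant result) = power*prior / (power*prior + alpha*(1 - prior))."""
    return power * prior / (power * prior + alpha * (1.0 - prior))

# Assumed prior: 1 in 10 tested hypotheses is true; alpha = 0.05.
print(ppv(0.80, 0.05, 0.10))  # ~0.64 for an adequately powered study
print(ppv(0.20, 0.05, 0.10))  # ~0.31 for an underpowered study
```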
In general it is believed that expert statistical input is currently under-utilized and can help address issues of robustness and quality in preclinical research (Peers et al. 2012). This 'publish or perish' pressure may also lead investigators to neglect findings that do not conform to their hypothesis and instead to go for the desired outcome, may bias authors to publish positive, statistically significant results (Tsilidis et al. 2013) and to abandon negative results that they believe journals are unlikely to publish (the file-drawer phenomenon; Franco et al. 2014). This pressure to publish may even entice investigators to make post hoc alterations to hypotheses, data, or statistics (Motulsky 2014; O'Boyle et al. 2014), so that there is a more compelling story to tell, essentially transforming uninteresting results into top-notch science (the chrysalis effect; O'Boyle et al. 2014). Reviewers of these manuscripts are also not free of bias, being possibly more willing to accept data that conform to their own scientific concepts; editors have an appetite for positive and novel findings rather than negative or 'incremental' results, and journals compete to publish breakthrough findings to boost their impact factor, which is calculated within the first two years of publication, whereas the n-year impact factor and the citation half-life receive considerably less attention. All of this, paired with the ease of publication in a world of electronic submissions and re-submissions with short turnaround times, generates a self-fulfilling, vicious circle. Unfortunately, there is no widely accepted forum where replication studies or negative studies can be published, although those data inevitably exist and are of equal importance to the field, let alone the ethical principles concerning repeated use of animals to show something does not work because publication of negative findings is discouraged. Attempts to reproduce published findings are further hampered because many publications simply lack the detailed information required to reproduce experiments (Kilkenny et al. 2009). Indeed, a recent analysis concluded that less than half of the neuroscience publications included in that analysis reported sufficient methodological detail to unambiguously identify all materials/resources (Vasilevsky et al. 2013). Detailed information, however, is essential, especially in areas where tests and assays are not standardized and where there is high variability in experimental design and methodological detail across studies. This is frequently evident across many in vivo pharmacological reports (e.g., using different strains of rats or mice, sources of animals, housing conditions, size and make of test apparatus, habituation and training procedures, or vehicles for drugs; e.g., Wahlsten 2001; Wahlsten et al. 2003), but in vitro studies may not fare much better either. Consequently, journals publishing original work must adhere to a minimum set of standards to even allow replication studies to be conducted, and many journals and editors have taken action to improve the information content provided in publications (McNutt 2014; Nature Editorial 2014), for example, by providing checklists that prompt authors to disclose important methodological details (Nature Editorial 2013). The inability to reproduce due to lack of detailed information would possibly be less of an issue if data were robust.
A robust finding should be detectable under a variety of experimental conditions, making obsolete the requirement for exact, point-by-point reproduction. It could even be argued that most replication studies are in fact studies testing the robustness of reported findings, since it may be difficult to recapitulate exactly all details and conditions under which the original data were produced. Moreover, robust data could be considered more important, as they can be seen under varying conditions and may be biologically more relevant. On the other hand, claims of non-reproducibility which do not utilise information that is provided in the original publication should also be carefully scrutinized to test the validity of the 'replication'; such scrutiny is often lacking. This in turn implies that we should not only encourage publication of reproduction attempts but also allow publications investigating the robustness of a reported effect and the validity of attempted replications. While replication studies are usually performed by independent labs, replication attempts can of course also take place within the same laboratory, assessing the degree to which a test or assay produces stable and consistent results across experiments (intra-lab reliability). If intra-lab reliability is already low, it comes as no surprise that reproducibility across labs (inter-lab reliability) is low as well, if not worse. Therefore, not only inter-lab replication studies, but also reports of attempts to systematically evaluate the intra-lab reliability of a particular test provide important information, and publication of such data should be encouraged. Cases of fraud have a particularly strong impact in the media, especially the social media. Fraud or suspected fraud has been suggested to account for more than 40% of retracted papers in the biomedical sciences and life sciences (Fang et al. 2012), which is extremely alarming, although it is important to remember that the number of retracted articles is low compared to the huge number of articles that get published each year. However, a meta-analysis and systematic review of survey data concluded that close to 2% of scientists admitted to having fabricated, falsified or modified data or results at least once (Fanelli 2009). But contrary to fraudulent articles, which are retracted upon detection of the misconduct, non-reproducible results hardly ever get retracted and yet may influence the field for years.

What are the implications for neuroscience?

Because scientific advance is iterative, non-reproducibility, low reliability, lack of robustness and false discoveries have major implications, which go well beyond the waste of taxpayers' money. Researchers may waste their time and efforts, being misled by wrong assumptions, and in that way may even jeopardize their future careers; even more important, however, is the loss of time for patients waiting for new therapies. Misguided research may lead to misdiagnosis, mistreatment and ill-advised development of new therapeutic approaches that lack efficacy and/or suffer from unacceptable side effects. If negative data and failures to reproduce published work remain unshared, very valuable information for the field is withheld, potentially resulting in duplication of efforts. Ethical questions arise from this, since in principle it contradicts one of the goals of the 3Rs (i.e., reduction) in animal research.
Moreover, preclinical efficacy data are increasingly considered unreliable and of low quality, especially behavioural data, which, in many cases mistakenly, are considered nice-to-have rather than obligatory. Given the already very complex nature of neuroscientific research, with high demand for more effective therapies coupled to low success rates in developing such therapies and high development costs (Frantz 2004; Kola and Landis 2004), there is disappointment in the lack of predictability and reliability of those data. As such, there is an unwillingness to invest further in these areas, and it may be speculated that this situation contributed, at least in part, to decisions of major pharmaceutical companies to exit the neuroscience field.

Can we resolve the situation?

Recognizing this situation, a number of organizations have started to take action, including pharmaceutical companies, academia, governmental bodies, charities, editors and publishers (e.g., Landis et al. 2012; McNutt 2014; Nature Editorial 2014), and some scientists even took the initiative to have critical data replicated by independent labs prior to publication (Schooler 2014). These are important steps towards improved data reproducibility. However, it is also very relevant to share the outcome of those activities more widely amongst scientists. While there are more instances now where efforts to reproduce published data can be shared with the scientific community (cf. some recent attempts to reproduce findings reported with the drug bexarotene; Fitz et al. 2013; Price et al. 2013; Tesseur et al. 2013), those publications are still more an exception than the norm, yet provide very valuable information to the field. Fortunately, this is increasingly recognized, and a number of programs have recently been launched to make it easier to publish studies aiming at reproducibility. One of these initiatives is a new Springer platform, focusing on publications of peer-reviewed studies concerned with reproduction of recently reported findings in the neuroscience area. This section, which is called "Replication Studies in Neuroscience", is part of the open access, electronic SpringerPlus journal (http://www.springerplus.com/about/update/RepStudNeuro). Neuroscientists, including the readers of Psychopharmacology, should feel encouraged to submit replication studies to journals like this. Sharing these results is highly relevant to Psychopharmacology, both to the research field and to the journal, as it will hopefully help to increase the positive predictive value of our tests and assays, contribute to scientific quality and eventually help to re-build trust in research and neuroscience in general. Although this article makes a plea for greater emphasis on reproducibility, there should also not be a shift to an aggressively sceptical tendency where some scientists make their names by failing to repeat others' work or where the careers of brilliant young scientists are jeopardized because someone else published an article failing to reproduce a particular result. This can be a very intimidating and threatening situation for many excellent scientists working in good faith to produce robust and useful data. The quest for reproducibility needs to be conducted in a scientific and ethical manner which pays careful attention to its consequences. But what is needed is a cultural change that puts more emphasis on the value of reproducibility, reliability and robustness of data, rather than just novelty aspects.
We hope initiatives like the ones mentioned above can make a contribution to this endeavour.
2018-04-03T01:03:50.701Z
2015-01-13T00:00:00.000
{ "year": 2015, "sha1": "45cd7fec2bb053c21289a58cbc9c323651291b3e", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/2193-1801-4-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c2b1a930a5bb84f5b7cf70c2e92a73db4d4b1fed", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
268635247
pes2o/s2orc
v3-fos-license
Exploring the sexual experiences and challenges of individuals with cerebral palsy

PURPOSE: Cerebral palsy (CP) is a prevalent motor disorder affecting children, with evolving demographics indicating increasing survival into adulthood. This shift necessitates a broader perspective on CP care, particularly in addressing the often overlooked aspect of sexuality. The purpose of this study was to investigate experiences of, challenges with, and factors related to sexuality and intimacy that people with CP are facing.

METHODS: This was a descriptive and cross-sectional single-institution survey among individuals with CP, ages 18 to 65, who had the ability to independently complete an online survey.

RESULTS: A total of 40 respondents participated in the survey (Gross Motor Function Classification System [GMFCS] level I/II, 32.5%; III, 35%; IV, 32.5%). Of those, 45% were partnered, 60% had past sexual experience, 47.5% were currently sexually active at the time of submitting the survey, 80% had masturbation experience, and 45.8% believed it had a positive effect on their self-esteem. Only 10% received sex education tailored for people with disability, whereas school (72.5%) and the internet (35%) were the most common sources of sex education. Muscle spasms, positioning difficulty, and pain/discomfort were the most common physical challenges experienced during intimate activity amongst all function stratifications. Stratification analysis showed that, compared to higher functioning respondents, a smaller proportion of lower functioning respondents were partnered (GMFCS IV, 23.1%; quadriplegic, 31.6%), had past or current sexual experience (GMFCS IV, 44.4%, 36.4%; quadriplegic, 42.1%, 26.3%, respectively), and had masturbation experience (GMFCS IV, 61.5%). They also had higher Quality of Life Scale scores on average (GMFCS IV, 88.4; quadriplegic, 88.3) and a higher rate of reported positive effects of sexual experiences on self-esteem than negative (GMFCS IV, 38.5%; quadriplegic, 35%).

Introduction

Cerebral palsy (CP) can be defined as a collection of permanent motor disorders that affect the development of movement and posture, leading to activity limitation [1]. Affecting 2-2.5 children per 1,000 born in the US, it is the most common childhood physical disability [2]. The current dominant clinical approach to CP is to increase functional abilities and improve motor capabilities through an assortment of interventions, including pharmacology, surgical treatments, and physical therapy. Moreover, CP is currently still seen as a pediatric condition. This perspective is gradually becoming outdated, as the population of people with CP has shown consistently improved survival and a growing aging population [3]. With this realization, it is important to begin thinking about life span approaches during the transition of care from pediatrics to adulthood.
An essential facet of life that the CP population may experience differently from able-bodied people is sexuality. From an early age, people with CP may receive mixed signals in regard to intimacy, physical touch, and other important aspects of sexuality. They are constantly surrounded by their guardians and caregivers, creating a lack of privacy. While many children with CP experience cuddling and affection, the majority of handling in their daily routine is functional, for caregiving and health care purposes [4]. During adolescence, they may have fewer opportunities to fully experience activities that imitate gender roles [5]. In general, whether due to physical disability, stigma associated with CP, or a combination of the two, people growing up with CP may not benefit from the same experiential learning that their able-bodied counterparts receive.

With fewer opportunities to spontaneously interact with their peers, people with CP tend to experience a delay in developing the skills involved in socializing. This leads to romantic relationships and sexual activities not being pursued or experienced until later ages [6]. Like their able-bodied peers, people with CP have been shown to desire intimacy and active sex lives, as sexuality is an inherent trait of human beings regardless of level of physical ability [7]. These desires form an important quality of life (QOL) issue for this community. In fact, a recent study assessing the QOL of 75 adults with CP found average QOL to be at the 56th percentile, with the lowest ratings in "[having] a satisfactory sexual life" and "[presenting] with depression symptoms" [8]. Another survey found high rates of anorgasmia, physical limitations of CP during sex, and emotional inhibition to initiate sexual contact [9]. Many of the participants reported wanting information regarding the impact of CP on reproduction, interventions, problems with their partner, and more, but most had not been given the opportunity to ask questions or discuss sexuality with their healthcare providers.

Addressing sexuality is a critical component of life span care for adolescents and adults with CP and can have a significant impact on their self-esteem and QOL. The purpose of this study was to bridge the gap of knowledge on sexuality in CP by learning more about the sexual and dating history of people with CP as well as how people with CP feel about sexuality and the challenges they face regarding this topic.

Methods

This descriptive, cross-sectional, noninterventional single-institution survey study took place at a large medical institution with patients followed by a physical medicine and rehabilitation department and an orthopedic surgery department that follows patients with CP from childhood through adulthood. Recruitment took place between May and August 2020. Patients were eligible if they were aged 18-65 years and had a diagnosis of CP. Survey participants independently completed the survey online in a private setting outside of the clinic. Patients were excluded if they were unable to complete the survey without having to communicate through a caretaker. This exclusion criterion ensured that participants would provide honest answers to questions of a sensitive nature without any bias from their caretakers. All participants were recruited by phone call and provided verbal informed consent. The research team obtained ethical approval from institutional ethics review boards before recruiting patients. The survey itself was administered online through REDCap, which coded and de-identified the data.
Survey

This self-developed questionnaire included a range of demographic questions to determine type of CP (topographic, muscle tone), functional level, educational level, and medications. Additional survey questions covered relationship status, history of and current level of sexual activity, history of masturbation, challenges experienced related to sexual activity, and reliance on caregivers to facilitate sexual activity. Sexual activity was defined as oral, anal, and/or penetrative. Participants also had the opportunity to answer open-ended questions, including strategies used to cope with challenges related to sexual experiences, concerns related to sex and sexuality, and how they would like healthcare providers to address these issues. The Quality of Life Scale (QOLS), a 15-item instrument assessing five domains of quality of life (material and physical well-being; relationships with other people; social, community and civic activities; personal development and fulfillment; and recreation), which has been validated in a study of patients with chronic illness, was also included in the survey [10].

Statistical analysis

Given the descriptive and cross-sectional nature of this study, no statistical analysis was conducted. The research primarily focused on gathering and presenting descriptive data, including demographic information, sexual behavior, physical characteristics, medication usage, and participants' self-reported experiences and concerns related to sexual health.

Demographic and physical characteristics

The research team conducted a preliminary analysis of 40 respondents. All classifications and categorizations were based on survey responses that were self-reported directly by patients. The mean age of all participants was 32.3 years ([19] 20-29 years; [11] 30-39 years; [7] 40-49 years; [3] 50-59 years), with 16 male and 24 female. The sexual orientation of the participants was as follows: 32 heterosexual, one homosexual, four bisexual, and one asexual. Patient responses of functional ability were stratified according to Gross Motor Function Classification System (GMFCS) level and topographical classification. The distribution of GMFCS levels amongst participants was as follows: 13 GMFCS I/II (ambulatory without walking aid), 14 GMFCS III (ambulate with hand-held device indoors, wheeled mobility longer distances), and 13 GMFCS IV (use powered mobility, ambulate short distances with significant assistance). The distribution of topographical classification amongst participants was as follows: nine hemiplegic, 11 diplegic, 19 quadriplegic, and one other (patient unable to classify themselves). Description of overall muscle tone included 23 spastic, 13 mixed muscle tone, two dystonic, one athetoid, and one undefined. The highest reported education level of participants included 16 with graduate degrees, 15 with bachelor's degrees, four with associate degrees, and five with high school level education.

Relationship status

Of the respondents, 22 were single and 18 were partnered (married or with a significant other). Higher functioning respondents (GMFCS I/II) had higher rates of being partnered (61.5%) as compared to GMFCS III (50%) and GMFCS IV (23.1%) (Fig. 1).
Those classified as hemiplegic had higher rates of being partnered (66.7%) versus diplegic (45.5%) and quadriplegic (31.6%). When relationship status was stratified by education level, it was found that participants with bachelor's degrees were 53% partnered, followed by 50% with graduate degrees, 40% with high school degrees, and 0% with associate degrees.

Dating

Of the 22 respondents who were not in a relationship, only two (both GMFCS IV and quadriplegic) reported to be actively dating, while 18 were not dating but interested in dating and two were not interested in dating (Fig. 2). The respondents with the highest rates of interest in dating were quadriplegic (61.1%) and GMFCS IV (38.9%). Conversely, those with the lowest interest in dating were hemiplegic (16.7%) and GMFCS I/II (27.8%). The two respondents who were not interested in dating were GMFCS III (diplegic) and GMFCS IV (quadriplegic).

Masturbation

Thirty-two respondents (80%) reported having masturbated (Fig. 5). Of those, 12 were GMFCS I/II, 12 were GMFCS III, and eight were GMFCS IV; eight were hemiplegic, nine were diplegic, and 15 were quadriplegic. There were eight respondents who reported never having masturbated, all GMFCS IV and quadriplegic, and five of them were interested in masturbation.

QOL

The results of the QOLS demonstrated that respondents with GMFCS IV and quadriplegia had the highest average scores (88.4 and 88.3, respectively), with a range from 76-110 (higher scores indicate greater QOL). GMFCS I/II and hemiplegia showed the lowest average scores (81.0 and 79.4, respectively), with a range from 54-92.

Coping strategies

Respondents reported a variety of techniques used to cope with physical challenges related to sexual experiences. Medications that were reported to be helpful included gabapentin for vulvodynia, muscle relaxants, nerve block injections around the clitoris, botulinum toxin injections to target spasticity, and diazepam vaginal suppositories. Clinically, there were reports of discussing challenges with a psychologist, doctor, and spouse/partner; working with a gynecologist specialized in complex pelvic pain; utilizing a vaginal dilator; and participating in pelvic floor physical therapy. Situational suggestions to utilize during sexual activity included desensitizing gel, lubricant, positioning devices (wedge pillow, sitting in a chair, leaning against a surface), and a glass of wine. Other alternatives included oral sex, masturbation to achieve climax, and timing sexual activity based on daily fatigue and activity levels.
Medications

Many medications may impact sexuality or sexual performance. The numbers of respondents taking medications that commonly impact sexual performance are as follows: selective serotonin reuptake inhibitors (7), serotonin and norepinephrine reuptake inhibitors (6), benzodiazepines (5), dopamine-norepinephrine reuptake inhibitors (3), and second generation antipsychotics (3). Medications that may have an impact on sexual performance are as follows: tricyclic antidepressant (1), antimanic agent (1), central nervous system (CNS) stimulants (3), anticonvulsants (9), muscle relaxants (7) including oral baclofen (3), alpha-2 agonist (1), intrathecal baclofen pump (1), and botulinum toxin injections (2). Some of these commonly used medications, such as anticonvulsants and CNS stimulants, can increase alertness and energy, which may positively or negatively impact sexual performance depending on the individual. Muscle relaxants, such as oral baclofen, alpha-2 agonists, and botulinum toxin injections, may cause sedation or fatigue, which could then indirectly impact sexual performance.

Self-esteem

Respondents were asked if their sexual experiences had a positive, negative, or no impact on their self-esteem. Of the 24 respondents who reported a history of sexual experience, 11 (45.8%) reported that it had a positive effect on their self-esteem, while six (25%) reported a negative effect and seven (29.2%) reported no impact (Fig. 7). When stratified based on GMFCS level and topographical distribution, there were more reports of positive effects on self-esteem than negative within the following groups: GMFCS III (35.7%), GMFCS IV (38.5%), diplegic (36.4%), and quadriplegic (35%). The groups with the highest reports of negative impact on self-esteem included quadriplegic (30%), GMFCS IV (30.7%), GMFCS I/II (30.7%), and diplegic (27%). There were multiple reports of no impact on self-esteem across all GMFCS levels and topographical distributions.

Sex education

The most commonly reported source of sex education was school (29), followed by the internet (14), friends (4), health care professionals (3), and family members/parents (2). Only four people reported that their sex education was specific for people with disabilities. The majority of respondents reported that they had been able to speak with friends (25), healthcare providers (18), and family (17) about sex and sexuality. Fourteen reported communicating with others through the internet, and nine were in support groups (Fig. 8).

Concerns and suggestions

An open-ended prompt in the survey asked participants to report their concerns related to sex and sexuality. Responses included difficulty and embarrassment when communicating about sexual activity, including communicating with partners about these issues. Some respondents reported self-consciousness surrounding dating and anxiety/fear of how their body will react to sex. There were also concerns raised about the logistical challenges of two people with disabilities participating in sexual activity, the lack of inclusive sex toys, and the challenges of utilizing various barrier methods of contraception with fine motor impairment.
Participants had multiple suggestions for ways in which healthcare providers can become more aware of these issues and help guide patients through questions and concerns. There was a common theme of wanting healthcare providers to initiate more conversations on this topic, asking patients about their successes and challenges related to sexual experiences. There was a desire for conversations to be more body- and sex-positive, with less focus on "problems," which can negatively impact self-esteem. Patients were looking for education and guidance as to how their CP may impact their ability to participate in sexual activity, what to anticipate for pregnancy and childbirth, how to consider birth control options, and being connected to available resources. Additionally, participants voiced a desire for more research to be conducted on aging with CP to help understand how to best manage the aging process and potential decline.

Discussion

The findings are based on a total of 40 respondents that participated in the survey (GMFCS I/II, 32.5%; III, 35%; IV, 32.5%). Overall, 45% were partnered, 60% had past sexual experience, 47.5% were sexually active, 80% had masturbated, and 45.8% believed it had a positive effect on their self-esteem. Only 10% received sex education tailored for people with disability, whereas school (72.5%) and the internet (35%) were the most common sources of sex education. These findings indicated a possible association of functional abilities and topographical CP classification with sexual experiences and QOL. Compared to higher functioning respondents, a smaller proportion of lower functioning participants were partnered (GMFCS IV, 23.1%; quadriplegic, 31.6%), had past or current sexual experience (GMFCS IV, 44.4%, 36.4%; quadriplegic, 42.1%, 26.3%, respectively), and had masturbation experience (GMFCS IV, 61.5%). They also had higher QOLS scores on average (GMFCS IV, 88.4; quadriplegic, 88.3) and higher rates of reporting positive effects of sexual experiences on self-esteem than negative (GMFCS IV, 38.5%; quadriplegic, 35%).

The primary focus of this survey was on dating, relationship status, and intimate activity. GMFCS I and II respondents had the highest percentages of people actively dating (50% and 35.7%, respectively) and the highest percentages who had engaged in intimate activities (100% and 78.6%, respectively). It is interesting to note that the GMFCS III and IV respondents had the lowest percentages of participants who were in relationships (0% and 18.2%, respectively) and had sexual experience (28.6% and 45.5%, respectively), while 25% of the GMFCS V respondents, who were in the most functionally impaired GMFCS level, were actively dating and most had experience with sex (62.5%). This indicates that GMFCS score, while a good estimate of day-to-day functioning, has little bearing on dating status and history of intimate activity.
In terms of dating, hemiplegic respondents had a higher percentage of participants who were actively dating (40%) than their diplegic and quadriplegic counterparts (16.7% and 20%, respectively). Hemiplegic respondents also had the highest percentage of participants who had engaged in intimate activities (80%). More than half of the diplegic respondents had engaged in intimate activities (66.7%), while quadriplegic respondents, with their more extensive CP involvement, had the lowest percentage of sexual experience (45%). This may indicate a negative correlation between the extent of the body affected by CP and history of sexual experience. As a whole, topographical classification seemed to be more consistent in predicting dating status and past sexual experience than GMFCS level.

Similar to dating and intimate activity, no correlation seemed to exist between the prevalence of physical challenges and GMFCS level. Positioning difficulty, muscle spasms, and pain/discomfort were the most common physical challenges experienced during intimate activity (reported by 27, 17, and 15 respondents, respectively) across all stratifications of GMFCS and topographical classification. The survey also asked how respondents cope with these physical challenges, which yielded a variety of responses, ranging from best positions, best times of day, and physical therapy exercises and stretches to useful pharmacologic agents.

To sum up, topographical classification seemed to be more strongly correlated with dating status and past sexual experience than GMFCS level; however, neither classification system seems to be an adequate predictor of the physical challenges experienced during intimate activity. Aside from the predictive power of these CP classification systems, it is important to report that a large percentage of survey respondents were actively dating (45.2%) and a majority (59.5%) had past sexual experience. All these results confirm that sexuality and intimacy are important QOL considerations. Members of the medical community need to dispel the misconception that people with physical disabilities cannot have sex and work to help facilitate more and better sex for people with CP.

This study faced several limitations that merit consideration. First, the exploration of sexual activity focused exclusively on penetrative forms (oral and anal), omitting non-penetrative forms such as hugging, touching a partner's genitals or breasts, and being touched. While this may limit the comprehensiveness of understanding sexual activity among individuals with CP, valuable insights are nonetheless provided about a subset of sexual behaviors that are less frequently discussed within this population. Additionally, the sample population exhibited a higher degree of college education compared to the general CP population, potentially limiting the representativeness of the findings. Also, the survey did not specify a timeframe for defining "currently sexually active," which could lead to variability in respondent interpretation. Yet, this flexibility allowed for a broad inclusion of experiences reflective of individuals' current lives. The absence of questions regarding the desire for tailored sex education and the lack of racial/ethnic data collection to understand the diversity of respondents represent further limitations.
Moreover, a substantial number of GMFCS I patients declined participation, possibly due to a perception of their condition as having a minimal impact on their daily lives, coupled with an absence of responses from individuals with monoplegia. This under-representation of less functionally impaired individuals with CP suggests a skewed perspective towards those with more severe conditions. Another limitation was the size and geographical scope of the survey: there were only 40 respondents, all of whom received care within the last few years in New York City. Despite these limitations, the research provides an essential glimpse into a relatively unexplored area of CP research. It underscores the diversity of sexual experiences and educational needs among individuals with CP, paving the way for more extensive, inclusive future studies that can address these identified gaps.

Moving forward, the research team plans to broaden the scope of this survey, reaching out to more patients from a wider geographical range to improve the reliability and validity of the results. Additionally, the data collected can provide a foundation for further studies as well as discussion topics for focus groups between patients and healthcare providers. They can also help inform the care that healthcare workers provide for their patients. For instance, use of botulinum toxin and muscle relaxants such as suppository diazepam in the pelvic area can facilitate better sex by reducing muscle spasticity.

Conclusion

Sexual behavior is prevalent among adults with CP, highlighting the significance of addressing sexual health as an integral aspect of comprehensive CP care that can significantly influence QOL and self-esteem. Tailored sex education is an urgent necessity, underscoring the importance of healthcare providers initiating discussions about sexual health, assisting patients in navigating their concerns, and providing relevant resources. This descriptive study is intended as a stepping stone for future research and for the improvement of the comprehensive care of people with CP.

Fig. 6. Physical challenges experienced during intimate activity by (a) Gross Motor Function Classification System (GMFCS) level and (b) topographical distribution.

Fig. 7. Impact of sexual experiences on self-esteem by (a) history of past sexual experience, (b) Gross Motor Function Classification System (GMFCS) level, and (c) topographical distribution.

Fig. 8. Sex education communication by (a) people the patients have been able to talk to about sex and sexuality and (b) sources of sex education they have received.
2024-03-24T06:17:10.041Z
2024-03-22T00:00:00.000
{ "year": 2024, "sha1": "f2b3634bd62821be33acb377b7a389f2b4eaa2ec", "oa_license": "CCBYNC", "oa_url": "https://content.iospress.com/download/journal-of-pediatric-rehabilitation-medicine/prm240006?id=journal-of-pediatric-rehabilitation-medicine/prm240006", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e001fc9523b8346a8f6ade8ab691fc3003b12efc", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
55951308
pes2o/s2orc
v3-fos-license
Quality indicators of ground beef purchased by bidding in a Brazilian university restaurant

INTRODUCTION

The Brazilian population has undergone major social transformation in recent decades, resulting in changes regarding physical spaces for sharing meals and daily practices of food preparation (Brasil, 2008b). Among the facilities where meals are consumed outside the home, university restaurants of the Federal Institutions for Higher Education (IFES) have the responsibility of ensuring the right to adequate and safe food for their students and staff (Brasil, 2008a).

Meats stand out among the foods used in the human diet, often representing the main part of most meals (Teichmann, 2000). They are considered one of the foods most valued by consumers for having exceptional organoleptic characteristics and high nutritional value. Due to its high protein content, meat has singular importance in the development of the organism and may also serve as an energy source (Sauvant et al., 2004).

The chemical composition of beef differs as a result of factors such as species, age, breed and sex of the cattle, type of cattle feed, and the cuts or muscles analyzed. Beef is mostly composed of water (73.1%), proteins (23.2%) and fats (2.8%), and may contain 11-29% polyunsaturated fatty acids (PUFA). In addition, it is rich in iron and zinc, providing over two thirds and one quarter of the daily requirement, respectively. It is an excellent source of high biological value proteins, vitamin B12, niacin, vitamin B6, phosphorus, endogenous antioxidants and other bioactive substances including taurine, carnitine, carnosine, ubiquinone, glutathione and creatine (Williams, 2007).

In food services, the receipt of raw material is an important step to guarantee the safety of the final product (Silva and Cardoso, 2011; Associação Brasileira de Normas Técnicas, 2008). It is therefore imperative to adopt control procedures at reception in order to comply with good manufacturing practices, particularly concerning the reception area, process control, supplier evaluation and the transport system. This goes beyond technical visits and observation of the adequacy of the transportation system used (Associação Brasileira de Normas Técnicas, 2008; Agência Nacional de Vigilância Sanitária, 2004). These procedures still do not include laboratory tests to establish whether the products are suitable for use, which would ensure that only products in good nutritional and safety conditions are used in the preparation of foods (Food and Drug Administration - FDA, 2009). The final quality of the beef is the result of what happened to the animal during the entire production chain, which is why appropriate transportation, storage, handling, display and preparation of meat must be ensured (Marin, 2014).

Ground beef is used in various menus for the production of a wide variety of culinary preparations. Due to the good acceptance of these preparations, observed in the practice of community restaurants, coupled with its reasonable cost, ground beef is an ingredient routinely acquired and constantly present on the menus of these establishments. However, the use of ground meat has the inconvenience of a lack of standardization, a consequence of the composition and characteristics of the various cuts of the animals from which it originates.
Supplier selection and acquisition of food ingredients show low levels of compliance with current Brazilian legislation (Medeiros et al., 2012). Obtaining raw material from unreliable sources is a risk factor that contributes to outbreaks of foodborne illnesses (Food and Drug Administration, 2009). A special focus should be placed on raw foods of animal origin, which are considered particularly dangerous (Ebone et al., 2011). Fresh beef, when handled under inadequate sanitary conditions, can be a primary source of infection (Almeida et al., 2010). Thus, the quality of meat depends on the adoption of control measures and monitoring from the pre-slaughter period up to consumption. All parties involved in the supply of meat should ensure the quality of the products (Conceição and Gonçalves, 2009).

Public institutions such as hospitals, barracks, prisons, university restaurants, kindergartens and schools often use bidding for acquiring foodstuffs. In this type of purchase, prices should be compatible with the current market, and the maximum cost per period should be considered as defined in specific regulation in order to comply with the management of financial resources (Brasil, 1993). A strategic purchase should therefore combine an effective price comparison with the assessment of consistent quality indices according to the standards designated by the establishment (Brasil, 1993). The sanitary quality of products offered by food services is an important issue for individual and population health because many food poisoning outbreaks occur when food is prepared for large groups (Codex Alimentarius Commission, 1993). In Brazil, restaurants rank second in the number of reported foodborne diseases. An epidemiological analysis of 8451 cases of foodborne illnesses reported to the Ministry of Health between 2000 and 2011 revealed that foods of animal origin were the most commonly involved (Brasil, 2011).

In addition, the evaluation of quality parameters by the receiver is an indispensable factor in combating fraud, since a product other than that stated in the contract can be delivered, which is not always possible to detect sensorially without the aid of appropriate physicochemical analysis. In a study by Combris et al. (2009) with pears, it was found that flavor can outweigh food safety in consumer choices, which is why the acceptance by consumers of a meal prepared with a given raw material does not by itself indicate its integrity. It is necessary to evaluate certain physical and chemical aspects that are indicators of the quality of the raw material, as established by Normative Instruction no. 83, from 21/11/03, of the Brazilian Ministry of Agriculture, Livestock and Supply (Brasil, 2003). This normative stipulates minimal quality characteristics for meats: a maximum fat percentage of 15% and a maximum addition of 3% water, the only additive permitted.

Therefore, the aim of this study was to evaluate the physical and chemical characteristics of ground beef purchased through bidding by a community restaurant (CR) serving students of a federal public university in the city of Curitiba, Brazil.
MATERIALS AND METHODS

The ground beef was received fresh and vacuum packed, at temperatures ranging from 0 to 7°C. The samples were separated, packed in disposable plastic bags and stored between 0 and 2°C until the analyses. Only the samples collected for evaluation of collagen and collagen-related protein were kept frozen (approximately -15°C) until the day of analysis. All other assessments were conducted on the reception day. These analyses were conducted in triplicate right after receipt of the ground beef by the community restaurant, making a total of 24 samples (8 weeks × 3 batches a week) of 1 kg. All the assessments were carried out between August and October of 2010 in the Departments of Chemical Engineering and Nutrition of the Federal University of Paraná, Curitiba, Brazil.

Color parameters

The color of the tested meat samples was measured using a Hunter Lab Scan XE Plus Mini spectrophotometer (Reston, VA, USA) equipped with illuminant D65/10° and suitable for the analysis of meat, using the CIELAB system (L*, a* and b*) (HunterLab, 2008). The readings were taken within 10 min after exposure to oxygen. All determinations were done in triplicate.

Statistical analysis

All measurements were replicated three times. Analysis of variance (ANOVA) was carried out and the average values were compared with the Tukey test, or with the Kruskal-Wallis test followed by nonparametric multiple comparisons (Hollander and Wolfe, 1999). Differences were considered statistically significant at p < 0.05.

RESULTS AND DISCUSSION

Proximate compositions of the ground beef samples are presented in Table 1. As indicated, the meats showed relatively homogeneous results in all assessments, with the exception of the second week, for which the values of moisture and lipids showed significant differences, as well as the fixed mineral residue for the first week. The moisture content of the samples ranged from 70.84 to 76.38%. The fixed mineral residue content varied significantly in the first and fourth weeks, from 0.47 to 0.86%, respectively. In the other weeks there were no significant variations in this analysis, with results between 0.93 and 1.05%. The lipid content ranged from 2.72 to 8.55% over the 8 weeks of study. In the 2nd and 3rd weeks, the contents were 8.55 and 6.28%, respectively, and in the following weeks there was a clear reduction of these values, which ranged between 2.72 and 5.67%. The protein content was very homogeneous throughout the assessment period, with values between 19.63 and 20.42% and no significant variation. The pH of the samples varied between 5.62 and 6.02 during the two months of study, with the highest value in the second week. The higher percentage of lipids (8.55%) and lower moisture content (70.84%) that occurred in the same week indicate a probable link between the lipid and moisture contents. The a_w, in turn, changed significantly only in the first week of evaluation, with a result of 0.996 versus 0.999 for the other weeks.
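The significance statements above follow the decision rule set out under "Statistical analysis": one-way ANOVA with Tukey comparisons where the parametric assumptions hold, and Kruskal-Wallis otherwise. The following Python sketch illustrates that decision rule; it is not the authors' code, and the triplicate values and the specific assumption checks (Shapiro-Wilk for normality, Levene for equal variances) are illustrative assumptions.

```python
# Sketch of the ANOVA/Tukey vs Kruskal-Wallis decision rule (illustrative).
import numpy as np
from scipy import stats

weeks = {                          # hypothetical triplicate lipid contents (%)
    "week 1": [2.72, 2.80, 2.75],
    "week 2": [8.55, 8.49, 8.60],
    "week 3": [6.28, 6.31, 6.25],
}
samples = [np.asarray(v) for v in weeks.values()]

# Parametric assumptions: normality of each group and homogeneous variances.
normal = all(stats.shapiro(s).pvalue > 0.05 for s in samples)
homoscedastic = stats.levene(*samples).pvalue > 0.05

if normal and homoscedastic:
    print("one-way ANOVA:", stats.f_oneway(*samples))
    print(stats.tukey_hsd(*samples))   # pairwise comparisons (recent SciPy)
else:
    print("Kruskal-Wallis:", stats.kruskal(*samples))
```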
The percentages of collagen and collagen-related protein, shown in Figure 1, varied significantly between the first week and the others, not only as demonstrated by the graph but also visually, since the ground beef received in the first week appeared free of connective and fatty tissue. The contents of collagen and collagen-related protein in the first week were 0.66 and 3.33%, and varied between 1.29 and 1.79% and between 6.36 and 8.82%, respectively, during the following periods. These results are evidence that the supplier delivered a better quality product in the first week of the bidding process. Not only is the composition of meats influenced by a number of factors such as age, gender, place of origin and feeding of the animals, but the collagen content also depends on the cleaning phase, when the connective tissue is removed from the meat cuts, which in the case of this study were supposed to be from knuckle (Quadriceps femoris). Considering that the meat is already ground when received by the community restaurant, there is no way to verify the intrinsic characteristics of the meat or the cut used. These values were lower than those recommended in the literature in all weeks of testing: 2% of collagen and 15-18% of collagen-related protein (Shimokomaki et al., 2006). On the other hand, excess collagen renders the product less digestible, harder and of reduced nutritional value due to amino acid imbalance and a low content of essential amino acids (Ordóñez et al., 2005). In this context, techniques such as hitting, grinding, chopping, soaking, application of hydrostatic pressure and the use of enzymes/softeners are commonly used to tenderize meats (Sun and Holley, 2010; Sullivan and Calkins, 2010; Ha et al., 2012; Lonergan et al., 2010). In addition, the evaluation of collagen-related protein is important for the preparation of meat emulsions from ground beef, inasmuch as this value should not exceed the range of 15 to 18% in order not to harm the mass stability in systems with high fat content (Shimokomaki et al., 2006).

Variations in the results between different weeks of analysis, as mentioned above, are attributed to the characteristics of the meat delivered by the supplier each week, taking into account that many factors influence the chemical composition and pH of meat products: age, sex, origin and feed of the animal, cut type, and processing (e.g., removal of connective tissues), among others.

Inasmuch as the lipid content of the meat was higher than 5% in the second, third and fourth weeks, the supplier was warned about the fact and informed that the product should meet the standards of the bidding process (Brasil, 1993) and the technical specification from the nutrition service of the CR. In the following weeks, the value was indeed adjusted to less than 5%, indicating commitment from the butchery to delivering a quality product, most likely due to the warning issued by the CR.

It is worth noting that none of the samples showed a lipid content higher than 15%, the maximum permitted by Brazilian law according to the Technical Regulation of Identity and Quality of ground beef (Brasil, 2003). Although lipids have the positive feature of providing juiciness, flavor and aroma to meats, they are easily oxidized and can lead to the formation of toxic and undesirable products (Shimokomaki et al., 2006), and therefore should not be present in excess. Flemming et al.
(2003) evaluated the fat content of ground beef sold at a supermarket chain in Curitiba, Brazil, finding a content of 3.43%, that is, less than the average result of the present study, which was 4.62%.

During the 60 days of analysis, the protein levels remained constant; however, the fat percentage increased as the moisture content decreased. Meats, like most foods, show a pattern of compensation between the levels of moisture, lipids and proteins (Shimokomaki et al., 2006). Within the same class of meat products, the protein content is almost constant, whereas for certain fat levels a reduction of moisture is verified (Shimokomaki et al., 2006). The inverse relationship between moisture, lipids and proteins was also evidenced by Pedrão et al. (2009) when comparing the chemical composition of hump steak (Rhomboideus m.) and loin (Longissimus dorsi m.) of Nellore (Bos indicus): 36.70 and 73.34% of moisture, 48.82 and 3.39% of lipids, and 12.6 and 21.8% of protein, respectively.

All samples showed pH below 6.1, indicating the absence of early decomposition and meeting the standards of the National Laboratory of Animal Reference (LANARA), which considers meat proper for consumption when the pH ranges from 5.8 to 6.2 (Brasil, 1999). However, there were significant differences between weeks, inasmuch as the pH assumed values of 5.62 and 5.71 in the sixth and eighth weeks, respectively. Conceição and Gonçalves (2009) found pH values of 6.5 and 7 for ground beef collected in Rio de Janeiro and Niterói, Brazil, which indicate the beginning of bacterial decomposition. The range of a_w expected for fresh meat is greater than 0.985, with which the meat received in the CR complied, ranging between 0.996 and 0.999.

The mean values (Table 1) obtained for the chemical composition and pH of the bovine ground meat received at the CR can be compared with those found for the knuckle beef cut by Della Torre and Beraquet (2005): 74.5, 1.1, 2.8, 21.1, 1.0, 5 and 5.56 for moisture, ash, lipids, proteins, collagen, collagen-related protein and pH, respectively. The ground meat tested in the current study showed a much higher lipid content, indicating the possibility that the supplier used another cut of beef, different from knuckle, to obtain the product, such as cuts with a larger fat content, that is, topside, outside flat and chuck. Table 2 presents the results of the color attributes of the ground beef samples during the eight weeks. Significant (p < 0.05) differences were observed among meat samples during the 8 weeks in lightness (L*) and yellowness (b*). Analysis of variance (ANOVA) was performed for all physical and chemical parameters; however, when the assumptions for this analysis were not satisfied, the corresponding nonparametric analysis, the Kruskal-Wallis test, was used, followed by non-parametric multiple comparisons. When the hypothesis H0 was rejected by the Kruskal-Wallis test, the presence of a significant difference was indicated (Hollander and Wolfe, 1999).

The color analysis showed average values of L* (lightness) and b* (yellowness) of 40.63 and 16.43, respectively. Cañeque et al. (2003) suggested that increased brightness may be ascribed to intramuscular fat content. According to Marin (2014), color intensity depends on the quantity of hemoglobin and fat and differs depending on pH and cutting, and also on the age, sex and activity of the animal. Brightness, specifically, depends on pH, which influences the conformation of proteins within the muscle. Zhang et al.
(2005) reported that meat with high pH showed lower values of L*, a* and b* than meat with normal pH. The mean value obtained for the parameter a* (red) was within the range of 11.1 to 23.6 reported in a survey by Muchenje et al. (2009).

In summary, the beef received by the CR showed color, pH and water activity mostly within the standards established in the literature for ground knuckle, but the levels of collagen and collagen-related protein were smaller than desired, and the lipid content was greater than that prescribed by the bidding process (up to 5%), although always lower than the maximum permitted by the Brazilian legislation (15%) (Normative Instruction n. 83, 2003, from the Brazilian Ministry of Agriculture, Livestock and Supply). Thus, there is the possibility that the supplier was delivering a cut different from knuckle, such as topside, hard cushion and chuck, which have lower cost and higher fat content. However, when warned of a lipid content greater than 5% (as specified in the bidding process) in the second, third and fourth weeks of analysis, the supplier adapted the product to the specifications of the CR. Hence, the continuous assessment of the physicochemical parameters of food products obtained by bidding enhances the quality of the products purchased. Such control makes it possible to maintain quality standards for the raw materials used in community restaurants.

Conclusions

The current study evaluated the physicochemical characteristics of bovine ground meat, comparing them with the legislation and the literature in order to facilitate the identification of standards that can be used by public institutions that purchase meat by bidding. The Brazilian legislation presents the technical regulation of identity and quality of ground beef in its Normative Instruction n. 83, from 2003. This normative does not stipulate physical and chemical specifications for categories of ground beef; it only establishes maximum levels for fat (15%) and addition of water (3%), and prohibits additives other than water. In this way, commercial establishments are free to market products with different quality standards, naming them accordingly as special, first and second quality cuts; however, these quality standards are not regulated by the Brazilian food legislation with regard to fat and collagen content.

The ground beef received by the community restaurant (CR) was in general adequate in relation to color attributes, moisture content, a_w and pH, according to the values mentioned in previous studies and the maximum fat content (15%) established by the Brazilian food legislation. Nevertheless, the contents of collagen and collagen-related protein were found to be lower than ideal. In addition, after the first week of analysis, the lipid content of the product received increased continuously and out of the range prescribed in the bidding contract, which was corrected by the supplier after receiving a warning from the CR; this reveals the importance of evaluating the quality parameters continuously and not only in the first weeks of reception of the raw material. The standards set in this study may be used by other institutional food services to ensure the receipt of high quality meat, consequently raising awareness among local butcheries.

Figure 1. Contents of collagen and collagen-related protein.
Table 1. Physicochemical characteristics of the ground beef received weekly in the community restaurant. * Means followed by the same letter in a column are not significantly different according to the Tukey test (p < 0.05). ** Means followed by the same letter in a column are not significantly different according to the Kruskal-Wallis test (p < 0.02). Observation: the only chemical composition specifications for ground beef stipulated by the Brazilian food legislation (Normative Instruction n. 83, 2003, from the Brazilian Ministry of Agriculture, Livestock and Supply) are the maximum fat percentage (15%) and the maximum addition of water (3%).

Table 2. Color coordinates L*, a* and b* of the ground beef received weekly in the community restaurant.
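As an aside on Table 2, the CIELAB coordinates allow the colour drift between deliveries to be condensed into a single CIE76 colour difference, ΔE = sqrt(ΔL*² + Δa*² + Δb*²). A small illustration follows; the weekly triplets are hypothetical, not the paper's data.

```python
# CIE76 colour difference between two weekly mean (L*, a*, b*) readings.
import math

def delta_e76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triplets."""
    return math.dist(lab1, lab2)

week_a = (40.6, 18.0, 16.4)    # hypothetical weekly means
week_b = (42.1, 17.2, 15.8)
print(f"Delta E = {delta_e76(week_a, week_b):.2f}")
# Differences above roughly 2-3 units are commonly taken as visible shifts.
```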
2018-12-11T07:32:02.717Z
2016-04-30T00:00:00.000
{ "year": 2016, "sha1": "23fd4b78e9bcbe95713dff8b0fcc3ad1c2b6ed06", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJFS/article-full-text-pdf/0C76E4758488", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "23fd4b78e9bcbe95713dff8b0fcc3ad1c2b6ed06", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "History" ] }
221754619
pes2o/s2orc
v3-fos-license
Exact schemes for second-order linear differential equations in self-adjoint cases

When working with mathematical models, to keep the model errors as small as possible, a special system of linear equations is constructed whose solution vector yields accurate discretized values for the exact solution of the second-order linear inhomogeneous ordinary differential equation (ODE). This case involves a 1D spatial variable x with an arbitrary coefficient function κ(x) and an arbitrary source function f(x) at each grid point under Dirichlet or/and Neumann boundary conditions. This novel exact scheme is developed considering the recurrence relations between the variables. Consequently, this scheme is similar to those obtained using the finite difference, finite element, or finite volume methods; however, the proposed scheme provides the exact solution without any error. In particular, the adequate test functions that provide accurate values for the solution of the ODE at arbitrarily located grid points are determined, thereby eliminating the errors originating from discretization and numerical approximation.

Introduction

Consider the following linear second-order inhomogeneous ordinary differential equation (ODE) in self-adjoint form:

$$-\bigl(\kappa(x)\,u'(x)\bigr)' = f(x), \qquad x \in (0,\ell), \tag{1.1}$$

where $\kappa(x) \ge \kappa_0 > 0$ is a positive function ensuring the existence of the integrals applied in the solution procedure. Equation (1.1) is a fundamental formula in the mathematics of physical laws. The practical impact of the solution of Eq. (1.1) is profound, since many important physical phenomena used in various industrial applications are described by this equation. Problems in this form arise in various fields, including thermal energy transport, diffusion, electrostatics, and electrodynamics. In electrical applications, κ is the conductivity distribution, u is the electric potential, and f is the source term; all these parameters depend on the 1D spatial variable x. It is assumed that the classic solution u(x) exists on $[0,\ell]$ with the appropriate boundary conditions.

Finding and applying an analytical solution to Eq. (1.1) ensures that the error in the mathematical algorithm used to solve the physical problem is solely numerical. Therefore, our mathematical problem in the case of Eq. (1.1) is finding and applying the inverse of the differential operator to construct an analytical ODE solver. On the basis of our research work, this paper proposes a scheme that provides the values of the exact solution of Eq. (1.1) at arbitrarily chosen grid points. Although some researchers have attempted to develop exact schemes, practically applicable solutions have been reported only for homogeneous second-order ODEs [1]. Furthermore, some researchers [2] used the nonstandard finite difference method to develop a scheme using the Green function corresponding to the operator. Based on the fundamental solutions corresponding to differential operators, several schemes applicable to first-order ODEs have been reported [3]. In addition, some higher-order schemes for first- and second-order differential equations were presented in [4]. Furthermore, the authors of [4] reported an exact scheme originating from Samarskii applicable to a special case, namely an equidistant distribution of space. Delkhosh et al. proposed analytical methods for solving a second-order ODE, the effectiveness of which was demonstrated in their published work [5,6].
The method they described uses the beneficial properties of the Bessel equation to solve homogeneous equations with a second-order self-adjoint differential operator [5,6]. Based on the state-of-the-art literature, it is generally concluded that analytical methods based on the current state of science provide solutions only for specific versions of Eq. (1.1) (f(x) = 0 or κ(x) = 1).

Well-known numerical methods, such as the finite difference method (FDM) [7], the finite volume method [8], and the boundary element method [9], can be used to resolve equations of form (1.1) by approximating the analytical solution of the ODE. Among the classic numerical methods for second-order differential equations, the finite element method (FEM) is at present the most widely used [10]. This method has also been used to develop approaches involving a higher order of convergence, particularly with the application of higher-order elements [10]. Thus, research on exact schemes is an important aspect of the domain of mathematical physics, and increasing the accuracy of such schemes represents a continuous challenge for researchers. It has been reported [11] that there is no unified theoretical foundation for the construction of exact schemes. Furthermore, increasing the accuracy also increases the complexity and computational burden of these schemes, resulting in difficulties during their application. However, the proposed scheme can overcome these limitations, as it provides the exact solution at an arbitrary location, independent of the spatial discretization. The subsequent sections present the details pertaining to the mathematical construction of the proposed method. Furthermore, we demonstrate the advantages of the method through several examples.

Local Green functions of the ODE for arbitrary partitioning

Consider an arbitrary discretization of the interval $[0,\ell]$ into $(n+1)$ subintervals using arbitrarily distributed node points

$$0 = x_0 < x_1 < \dots < x_n < x_{n+1} = \ell,$$

where n may be 1, 2, 3, .... We represent the subintervals as $I_{i-1} = [x_{i-1}, x_i]$ for the indexes i = 1, 2, ..., n, n+1; thus $[0,\ell] = I_0 \cup I_1 \cup \dots \cup I_n$. We define (2n) nonnegative integral functions of $1/\kappa(x)$ on the different subintervals:

$$\psi_{i-1}(x) = \int_{x_{i-1}}^{x} \frac{dt}{\kappa(t)}, \quad x \in I_{i-1}; \qquad \varphi_i(x) = \int_{x}^{x_{i+1}} \frac{dt}{\kappa(t)}, \quad x \in I_i, \tag{2.3a, 2.3b}$$

for each i = 1, 2, ..., n, assuming that these integrals exist and are finite. In the subsequent sections, these functions are used as test functions in a similar sense as in the FEM [10]. By the fundamental theorems of calculus, the test functions satisfy

$$\psi_{i-1}(x_i) = \int_{I_{i-1}} \frac{dt}{\kappa(t)}, \qquad \psi_{i-1}'(x) = \frac{1}{\kappa(x)} > 0, \qquad \psi_{i-1}(x_{i-1}) = 0, \tag{2.4}$$

$$\varphi_i(x_i) = \int_{I_i} \frac{dt}{\kappa(t)}, \qquad \varphi_i'(x) = -\frac{1}{\kappa(x)} < 0, \qquad \varphi_i(x_{i+1}) = 0. \tag{2.5}$$

Then the local Green functions can be defined as in (2.7). In Fig. 1, the graphs of the functions $\psi_0, \psi_1, \psi_2$ are represented as solid lines, while those of the functions $\varphi_1, \varphi_2, \varphi_3$ are represented as dotted lines on the domains of their definition. From the derivatives listed in (2.4) and (2.5), it is clear that the test functions $\psi_i$ increase monotonically and the test functions $\varphi_i$ decrease on their domains $I_i$ for all i = 1, 2, ..., n-1. On the first interval $I_0$, only the test function $\psi_0$ is considered, while on the last interval $I_n$ we define only the test function $\varphi_n$, similar to the FEM [10]. Initially, it appears as though the local Green functions are not all independent, because

$$\psi_i(x) + \varphi_i(x) = \int_{I_i} \frac{dt}{\kappa(t)} \quad \text{is constant on } I_i. \tag{2.8}$$

Furthermore, as indicated by (2.8) and demonstrated in the following subsections, the test functions are handled in a similar manner as in the FEM [10].

Remark 1. If κ(x) ≡ 1 is selected as the constant function and the uniform mesh $x_i = ih$ (i = 0, 1, ..., n+1) is used on the interval [0, 1] with step size $h = \frac{1}{n+1}$, the test functions form linear functions resembling a saw-tooth pattern, as shown in Fig. 2, and reduce to

$$\psi_{i-1}(x) = x - x_{i-1} \ \text{on } I_{i-1}, \qquad \varphi_i(x) = x_{i+1} - x \ \text{on } I_i.$$
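Since everything in the scheme is built from integrals of 1/κ(x), the coefficients are easy to evaluate for any κ by quadrature. The following Python sketch (not part of the paper; the partition and κ are arbitrary choices) computes the reciprocal integrals over each subinterval and checks that κ ≡ 1 on a uniform mesh reproduces the constant 1/h of Remark 1.

```python
# Coefficients of the exact scheme: a_i = 1 / int_{I_i} dt / kappa(t).
import numpy as np
from scipy.integrate import quad

def coefficients(kappa, nodes):
    """One a_i per subinterval [x_i, x_{i+1}] of the partition."""
    return np.array([
        1.0 / quad(lambda t: 1.0 / kappa(t), nodes[i], nodes[i + 1])[0]
        for i in range(len(nodes) - 1)
    ])

nodes = np.array([0.0, 0.15, 0.4, 0.7, 1.0])            # arbitrary partition
print(coefficients(lambda t: 1.0 + t**2, nodes))        # smooth kappa
# Remark 1 check: kappa == 1 on a uniform mesh with h = 0.25 gives a_i = 4.
print(coefficients(lambda t: 1.0, np.linspace(0.0, 1.0, 5)))
```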
, n + 1) is used on the interval [0, 1] with a step size of h = 1 n+1 , the test functions form linear functions resembling a saw-tooth pattern, as shown in Fig. 2, and the shape functions can be calculated as follows: Fundamental recursive relation: flux elimination process In this section, we demonstrate that the test functions defined in (2.3a)-(2.3b) are adequate to create a recursive relation among consecutive values of the solution u(x). This approach enables the elimination of the derivative u (x) from the equations. The objective The coefficients of the linear recursive relation are defined using the following integral formula: Using the definition of the test functions in (2.3a)-(2.3b), equivalent formulas for the following coefficients can be defined: The recursive relation can be developed essentially in two steps. An arbitrary pair of consecutive subintervals In the first step, we multiply ODE (1.1) by the test function ψ i-1 (x) and then integrate with respect to x over the interval I i-1 : where the prime symbol ( ) denotes the derivative with respect to x and i = 1, 2, . . . , n. Applying integration by parts to the left-hand side of (3.3), the derivative of ψ i-1 (x) and the anti-derivative of the remainder can be obtained as follows: The first term becomes zero by substituting the lower limit according to the third identity reported in (2.4); however, by substituting the upper limit, we can obtain the coefficient (3.2). The integrand of the second term can be simplified using the factor κ(x), and based on the fundamental theorems of calculus, the following expression is obtained: By multiplying this equation with the coefficient a i-1 , we obtain the following relation: which is satisfied for all i = 1, 2, . . . , n. It is desirable to eliminate the first term from (3.5), the so-called flux, because the derivative u (x) is unknown. This elimination is a necessary second step to obtain the recursive formula. Therefore, we multiply ODE (1.1) by the test function ϕ i (x) and integrate over the interval I i : Applying integration by parts to the left-hand side of (3.6), the derivative of factor ϕ i (x) and the anti-derivative of the remainder can be obtained as follows: (3.7) The first term becomes zero by substituting the upper limit according to the third identity reported in (2.5); furthermore, by substituting the lower limit, we obtain the coefficient 1 a i = ϕ i (x i ), as discussed in (3.2). The integrand of the second term can be simplified using the factor κ(x), and from the fundamental theorems of calculus, we can obtain the following: Multiplying this equation by the coefficient a i , the following relation is obtained: which is satisfied for all i = 1, 2, . . . , n. Now, we can eliminate the flux from Eqs. (3.5) and (3.9). By adding these two equations, the basic recursive relation can be obtained: for all indexes i = 1, 2, . . . , n. Exact scheme for Dirichlet boundary conditions We use the basic recurrence relation of Eq. (3.10) to obtain the exact values of solution (1.1) at all (n + 2) node points: However, only n equations are available in (3.10). Therefore, two values must be prescribed arbitrarily to obtain a unique solution. One possible approach to provide two independent values is to apply Dirichlet boundary conditions, in which the values of the solution at the endpoints are obtained as follows: In this case, we substitute u(x 0 ) = α into the first equation and u(x n+1 ) = β into the last equation in (3.10) to obtain the key result as follows. 
Theorem 1. Consider the system of n ≥ 3 linear equations

$$(a_0 + a_1)u_1 - a_1 u_2 = a_0\alpha + a_0 G_0 + a_1 H_1,$$
$$-a_{i-1}u_{i-1} + (a_{i-1} + a_i)u_i - a_i u_{i+1} = a_{i-1}G_{i-1} + a_i H_i, \qquad i = 2, \dots, n-1,$$
$$-a_{n-1}u_{n-1} + (a_{n-1} + a_n)u_n = a_n\beta + a_{n-1}G_{n-1} + a_n H_n, \tag{4.3}$$

where the coefficients $a_i$ can be obtained using the integrals $1/a_i = \int_{I_i} dt/\kappa(t)$ (i = 0, 1, ..., n), and the coefficients $G_{i-1}$ and $H_i$ can be obtained using the double integrals

$$G_{i-1} = \int_{I_{i-1}} f(x)\,\psi_{i-1}(x)\,dx, \qquad H_i = \int_{I_i} f(x)\,\varphi_i(x)\,dx. \tag{4.5}$$

Then the solution vector $U = (u_1, \dots, u_n)^T$ of this system consists of the exact values $u_i = u(x_i)$ of the solution of (1.1) with the Dirichlet boundary conditions $u(0) = \alpha$, $u(\ell) = \beta$.

The matrix-vector format $LU = F_D$ of system (4.3) can be defined, in which the discrete Laplacian matrix L is

$$L = \begin{pmatrix}
a_0 + a_1 & -a_1 & & & \\
-a_1 & a_1 + a_2 & -a_2 & & \\
 & -a_2 & a_2 + a_3 & -a_3 & \\
 & & \ddots & \ddots & \ddots \\
 & & & -a_{n-1} & a_{n-1} + a_n
\end{pmatrix}. \tag{4.6}$$

This matrix is symmetric and positive definite and has a tridiagonal shape with dimensions n × n. The vector on the right-hand side can be defined for the Dirichlet boundary conditions as

$$F_D = \begin{pmatrix} \alpha a_0 + a_0 G_0 + a_1 H_1 \\ a_1 G_1 + a_2 H_2 \\ \vdots \\ a_{n-2}G_{n-2} + a_{n-1}H_{n-1} \\ \beta a_n + a_{n-1}G_{n-1} + a_n H_n \end{pmatrix}.$$

Example 4.1. We can assemble the entries of the Laplacian matrix based on the tridiagonal form (4.6) by using accurate values of the exponential function $e^x$ in (4.9), without approximating with rounded values. By calculating the integrals $G_i$ and $H_i$ in (4.5) and using the local Green functions from Example 2.1, the corresponding closed-form expressions are obtained. Now we can collect the entries of the right-hand side vector; because the boundary conditions are homogeneous, i.e., α = β = 0, the expressions simplify. After simplification, we obtain the solution u(x), represented in Fig. 3, in the form

$$u(x) = \int_0^{\ell} G(x,t)\,f(t)\,dt, \tag{4.16}$$

where the kernel function G(x, t) is the (global) Green function (4.17). We should note that this derivation from local Green functions towards the global Green function is reversible; i.e., we could have started with a solution of the form (4.16) with the Green function (4.17) applied on the partitions $I_{i-1}$ and $I_i$, and we would still have arrived at the basic recursion relations (3.10).

Exact scheme for Dirichlet and Neumann boundary conditions

Consider ODE (1.1) with a Dirichlet boundary condition at x = 0 and a Neumann boundary condition, i.e., a prescribed flux value β, at $x = \ell$ (5.1). Because the flux value β is given at $x = \ell$, we specify a function mapping the flux β to the solution value $u(\ell)$. This function is termed the Neumann-to-Dirichlet map at the boundary $x = \ell$. Applying the same manipulations that led to identity (3.5), albeit on the whole interval $[0, \ell]$, substituting the Neumann boundary condition, and isolating $u(\ell)$ from the resulting equation, we obtain the value given in (5.2). Therefore, the Neumann boundary condition (5.1) at $x = \ell$ is transformed into a Dirichlet boundary condition. Applying the scheme from the Dirichlet boundary conditions, with the value (5.2) used in place of $u(\ell) = \beta$, leads to the following theorem.

Theorem 2. Consider the system of linear equations (4.3) in which the last equation is replaced by

$$-a_{n-1}u_{n-1} + (a_{n-1} + a_n)u_n = a_n u(\ell) + a_{n-1}G_{n-1} + a_n H_n, \tag{5.5}$$

where $u(\ell)$ is defined in (5.2). Then its solution vector $U = (u_1, u_2, \dots, u_n)^T$ consists of the exact values of the solution of problem (1.1) with the boundary conditions (5.1).

The Laplacian matrix (4.6) for the Dirichlet-Neumann boundary conditions (5.1) is the same as in the case of the Dirichlet boundary conditions. Only the last element of the vector on the right-hand side must be changed:

$$F_{DN} = \begin{pmatrix} \alpha a_0 + a_0 G_0 + a_1 H_1 \\ a_1 G_1 + a_2 H_2 \\ \vdots \\ a_{n-2}G_{n-2} + a_{n-1}H_{n-1} \\ a_n u(\ell) + a_{n-1}G_{n-1} + a_n H_n \end{pmatrix}. \tag{5.6}$$

Therefore, the right-hand side vector $F_{DN}$ in (5.6) is the same as the vector $F_D$ in Example 4.1; consequently, the solution values of the system of linear equations are the same as well.

Case study for piecewise constant κ(x)

In this section, we demonstrate the effectiveness and robustness of the proposed exact scheme in the case of an ODE with a discontinuous conductivity function, which appears frequently in practical problems.
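Before turning to the case study, a compact numerical sketch of Theorem 1 may be helpful (Python; not from the paper). It assembles the $a_i$ by quadrature, the $G_i$ and $H_i$ by the double integrals (4.5), solves $LU = F_D$, and checks the nodal values against a problem with a known solution; the partition and the test problem are arbitrary assumptions.

```python
# Assemble and solve the tridiagonal system LU = F_D of Theorem 1.
import numpy as np
from scipy.integrate import quad

def exact_scheme(kappa, f, nodes, alpha, beta):
    """Exact nodal values u(x_1), ..., u(x_n) for -(kappa u')' = f with
    u(x_0) = alpha, u(x_{n+1}) = beta, on an arbitrary partition."""
    inv_k = lambda t: 1.0 / kappa(t)
    n = len(nodes) - 2
    a = np.array([1.0 / quad(inv_k, nodes[j], nodes[j + 1])[0]
                  for j in range(n + 1)])
    # G_j = int_{I_j} f psi_j dx and H_j = int_{I_j} f phi_j dx, with
    # psi_j(x) = int_{x_j}^x dt/kappa and phi_j(x) = int_x^{x_{j+1}} dt/kappa.
    G = np.array([quad(lambda x, j=j: f(x) * quad(inv_k, nodes[j], x)[0],
                       nodes[j], nodes[j + 1])[0] for j in range(n + 1)])
    H = np.array([quad(lambda x, j=j: f(x) * quad(inv_k, x, nodes[j + 1])[0],
                       nodes[j], nodes[j + 1])[0] for j in range(n + 1)])

    L = (np.diag(a[:-1] + a[1:])
         - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1))
    rhs = a[:-1] * G[:-1] + a[1:] * H[1:]
    rhs[0] += a[0] * alpha
    rhs[-1] += a[-1] * beta
    return np.linalg.solve(L, rhs)

# Check: kappa = 1, f = pi^2 sin(pi x), u(0) = u(1) = 0 has u = sin(pi x).
nodes = np.array([0.0, 0.17, 0.35, 0.6, 0.81, 1.0])     # arbitrary partition
u = exact_scheme(lambda t: 1.0, lambda x: np.pi**2 * np.sin(np.pi * x),
                 nodes, 0.0, 0.0)
print(np.max(np.abs(u - np.sin(np.pi * nodes[1:-1]))))  # ~ quadrature error
```

Note that the nodal values come out exact up to quadrature error even on this irregular partition, which is the content of Theorem 1.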
Consider a physical problem based on (1.1) with homogeneous Dirichlet boundary conditions, in which the source term is f(x) = x and the conductivity function κ(x) is not continuous at x = 1, as shown in Fig. 4. Because the proposed method eliminates the numerical difficulties arising from the discontinuity, no smoothing (i.e., a sigmoid approximation of the Heaviside function) is required to obtain the exact solution. In particular, the discontinuity of κ(x) does not create issues for the implementation of the proposed method. Although the κ(x) function is discontinuous, its integrals exist, and thus the necessary calculations can be performed. The integral of the conductivity function over the whole domain is finite, and the shape functions, shown in Fig. 5, can be expressed in the piecewise forms (6.3) and (6.4). The different parts of the Green function are defined in (6.5). Thus, based on the shape functions (6.3), (6.4) and (6.5), we can construct the Green function corresponding to the conductivity function (6.1) as in (6.6). Figure 6 shows the surface plot of the Green function G(x, t). The solution of the problem u(x) can be derived by using (4.16); in general, the solution is defined using the global Green function (4.17).

Conclusion

This paper presents a practically useful and easily applicable robust scheme for calculating the exact solution of a second-order self-adjoint ODE at any grid point of an arbitrary discretization. Through detailed derivations, we proved our theorems and assertions, and the effectiveness of the method was demonstrated through several relevant examples involving continuous and discontinuous κ(x). By rearranging the scheme, an implicit form of the solution of the differential equation can be obtained, which can be used to derive the analytical solution if it can be evaluated symbolically; in addition, by numerically evaluating the integrals, the proposed approach can provide the exact numerical values of the solution at any given point. Because the proposed scheme is easy to implement in various mathematical software environments (Mathematica, Maple, MATLAB, etc.), for problems possessing the given ODE structure the exact solution can be determined, thereby avoiding the use of, and the problems associated with, various numerical approximation methods. For all these reasons, the proposed method can be used not only to effectively solve mathematical problems with the differential operator in Eq. (1.1) but also to solve physical processes that can be described by the ODE in Eq. (1.1). In this case, if the integrals resulting from the applied scheme can be evaluated symbolically, the physical problem can be solved analytically; if the integrals are evaluated numerically, we obtain the exact solution of the physical problem at arbitrarily located grid points. At the moment, the proposed scheme works for the specific ODE structure discussed; a future direction is the extension of this method to a broader range of differential operators.
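As a small illustration of why the discontinuity in the case study is harmless: the scheme only ever integrates 1/κ, and these integrals exist across the jump, so a subinterval straddling the discontinuity simply receives a harmonic-mean-like coefficient. In the snippet below, the jump location and the values κ1 = 1 and κ2 = 3 are assumed purely for illustration; the excerpt does not list the actual values used in the paper.

```python
# Integrals of 1/kappa across a jump in a piecewise-constant kappa.
kappa1, kappa2, jump = 1.0, 3.0, 1.0     # assumed, for illustration only

def inv_kappa_integral(x_lo, x_hi):
    """int_{x_lo}^{x_hi} dt / kappa(t), split at the discontinuity."""
    left = max(0.0, min(x_hi, jump) - x_lo)      # length inside kappa1 part
    right = max(0.0, x_hi - max(x_lo, jump))     # length inside kappa2 part
    return left / kappa1 + right / kappa2

# A subinterval straddling the jump: a_i = 1 / (0.2/1 + 0.2/3) = 3.75,
# a harmonic-mean-type coefficient obtained with no smoothing of kappa.
print(1.0 / inv_kappa_integral(0.8, 1.2))
```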
2020-09-17T12:57:59.784Z
2020-09-16T00:00:00.000
{ "year": 2020, "sha1": "3aa8ee4890ef8b2c2acca8b5c5fa5e13c2079008", "oa_license": "CCBY", "oa_url": "https://advancesindifferenceequations.springeropen.com/track/pdf/10.1186/s13662-020-02957-7", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "865994e540c353c36d82fe82e5ca60fb31917155", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
1096333
pes2o/s2orc
v3-fos-license
Interactive comment on "Deep drivers of mesoscale circulation in the central Rockall Trough" by T

Referee comment: I suggest briefly mentioning already in the abstract where the Rockall Trough is located, broadly speaking (north-eastern North Atlantic).

Response: Done, but said "west of British Isles".

Referee comment, Section 2, Background: As mentioned above, this section could be more specific on why the Rockall Trough is of interest. For example, you could refer to Hátún et al., 2005 (now cited only in Sect. 4.3 regarding EKE) or other literature for the idea that the salinity of the Atlantic inflow to the Nordic Seas is in part determined by the location of the NAC relative to topography in/near the Rockall Trough, this being a main passageway for these inflows.

Response: Agreed. This is now incorporated into the Introduction and Background.

Referee comment, Section 3, Methods: What bathymetry data were used (e.g. for Figs. 2 and 4)?

Response: This is now spelt out.

Referee comment, Section 3.4, Data analysis: Spatial mean EKE: does this mean you use spatial variances of u and v (variance relative to the spatial mean over 9-13°W and 56-58°N, respectively)?

Response: Done. Variance at each grid point.

Referee comment, Section 6.1, Correlations between in situ and altimetry-derived current components: the closer correlation between eastward than northward components is somewhat surprising, especially since the northward component is stronger (generally, calculation of current velocity from satellite altimetry works better in regions of strong currents). You later explain that the dominant cause of the bad correlation for the northward component is an error in the slope current. It might still be worth looking at how the correlations (zonal and meridional) compare with other studies. There is a large body of literature comparing in situ and altimetry-derived currents; are there any general findings of zonal agreement being better than meridional (or the other way around)? And can you speculate on a reason why this large error in the slope current occurs? What is the typical Rossby radius of deformation here, and how does it compare to the resolution of the observations? Can we expect a significant level of noise from small-scale phenomena here?

Response: The referee is asking too much here. Nevertheless, we have beefed up Sect. 3.4 with a more informed discussion of the relationship between the geoid and the MDT produced by Rio et al. Rio et al. computed their MDT as a global product and tuned it to match in situ velocity observations and hydrography, so there is not much point in taking it any further. They only claim to have a better MDT than one obtained by using the raw GRACE data, and they did not specifically address slope currents, which are generally too narrow and of too small a transport to show up in a region of rapidly changing bathymetry. We have specifically made clear that we are only writing about the Rockall Trough, although we recognise that there may be a more general problem here. We are users rather than experts in satellite altimetry, so we see it as our role simply to point out the problem rather than indulge in a detailed investigation. We believe that the case for a mismatch between the meridional currents is well made, and we choose not to open the discussion any further in this manuscript.

Referee comment, Section 7, Discussion, P2626, L1-5: Interesting results. I wonder what this finding means for the historical and continuing regular observations along the Ellett Line: is this monitoring less meaningful in terms of inter-annual variability and long-term trends, if local circulation patterns play such an important role?
Response: We appreciate the referee's point, but this is an idea that has the potential of being a big investigation. We feel that it is the role of this manuscript to point out the explanation for the high temperatures on the Ellett Line in winter 2009/10 observed by Sherwin (2010), but to leave it to others to investigate the implications.

Referee comment, P2625, L5-10: If I follow this correctly, you argue here that the observed eddies are too deep to be wind-forced, so their origin is more likely to be instabilities in the NAC.

Introduction

The northern end of the Atlantic Ocean (south of the Greenland-Scotland Ridge) is dominated by a basin-wide three-dimensional cyclonic interleaving of wind- and thermohaline-driven water masses known as the sub-polar gyre (SPG) (e.g. Bacon, 1998; Hakkinen and Rhines, 2009; Hátún et al., 2005; Pickart et al., 2003). Convective forcing drives its strongest currents in the deeper waters of the Labrador Sea and Irminger Basin in the west, but the influence of the gyre stretches across the Iceland Basin and into the Rockall Trough on its eastern side. Warm, salty surface water is carried northward through the basin and trough, across the Greenland-Scotland Ridge and into the Nordic Seas (e.g. Hansen et al., 2010; Hansen and Østerhus, 2000). The complexity of the SPG is demonstrated by Tett et al. (2014), who analysed the transports produced by six global ocean reanalyses and found that a recent 50 year increase in the strength of the Atlantic Meridional Overturning Circulation (AMOC) south of the SPG was uncorrelated with the relatively steady exchange across the Greenland-Scotland Ridge. This result would appear to undermine a basic assumption that convection at high latitudes drives an Atlantic thermohaline circulation. Understanding all aspects of the SPG is thus critical to studies of the global and local climate, and whilst progress has been made in defining the deep, cold, southward-flowing boundary currents in the western part, less is known about the currents and transports in the generally warmer and saltier water in the east.

Models of the SPG require an accurate representation of the exchange across their boundaries, which in turn depends on an accurate parameterisation of mixing and circulation in its various basins, including the Rockall Trough. In response to a need to systematically study the SPG, a major international oceanographic observation programme was established.

In the winter of 2009 to 2010 the Scottish Association for Marine Science increased the sampling frequency by deploying an underwater glider, and the first new data points for the Ellett Line time series were published in Sherwin et al. (2012). By combining these glider data with gridded satellite altimetry and ship-borne observations, new insights into the current field of the Rockall Trough have been derived.

Background

The Rockall Trough is an 800 km long by 200 km wide trench that lies to the west of the British Isles (Fig. 1).
At its southern entrance (53° N, 17° W) it is up to 3500 m deep, but it shallows towards the north to a depth of 1000 m at the foot of the Wyville Thomson Ridge (60° N, 8° W), itself about 600 m deep. South of 58.5° N the western side of the trough is flanked by the Rockall-Hatton Plateau (minimum depth of order 100 m), but further north this boundary is populated by a series of deep channels (to 1000 m) and shallow banks.

The problems of quantifying and monitoring circulation on the eastern side of the SPG stem from (i) weak net transports with only moderate eastern boundary intensification and (ii) the presence of relatively strong mesoscale currents. The general distribution of the currents and water masses in the northern half of the trough (Fig. 1) is well documented (e.g. Ellett et al., 1986; Ellett and Martin, 1973; McCartney and Mauritzen, 2001). In the upper layers (to at least 600 m) there is a slow north-eastward flow (∼0.7 Sv, or 0.7 × 10⁶ m³ s⁻¹) of relatively cool North Atlantic Water (NAW) originating in the North Atlantic Current (NAC) and including water from the sub-polar front in the western Atlantic (Holliday et al., 2000). This flow is enhanced along the eastern boundary by a slope current down to 500 m, with a width of order 50 km and mean speeds of up to ∼20 cm s⁻¹, in which about 3 Sv of warmer, salty Eastern North Atlantic Water (ENAW) from a more tropical source is found (Booth and Ellett, 1983; Holliday et al., 2000; Souza et al., 2001). At intermediate depths (say to 1000 m) there is a mixture of Sub-Arctic Intermediate Water (SAIW) from deep in the NAC and Mediterranean Overflow Water (MOW) from the south (e.g. Reid, 1979; Ullgren and White, 2010), which over decadal timescales interact with each other and with Wyville Thomson Ridge Overflow Water (WTOW) coming from the north (e.g. Ellett and Roberts, 1973; Johnson et al., 2010). At deeper levels (down to about 1800 m) low-salinity Labrador Sea Water (LSW) intermittently pulses into the trough from the south-west (Holliday et al., 2000), whilst deeper again is water with the signature of Antarctic Bottom Water (Figs. 2 and 3). The Ellett Line ship-borne CTD sections reveal that after 1995 there was a steady rise in temperature and salinity in the trough that is associated with a retreat of the SPG to the west (Hátún et al., 2005; Johnson et al., 2013), although since 2010 this trend appears to have reversed (Holliday et al., 2015).

Although at times mesoscale activity in the northern part of the Rockall Trough has been thought to be weak (e.g. Pollard et al., 1983), hydrographic observations during the JASIN experiment in the summer of 1979, to the west of Rosemary Bank, showed that this is not the case. A cyclonic mesoscale eddy with a diameter of ∼100 km, and internal velocities of order 10 cm s⁻¹, propagated westward through the observation area (at about 59.5° N) with a translation speed of about 1.4 km day⁻¹ (Ellett et al., 1983). It is notable that this eddy was coherent to well below 1000 m and had a weak surface signature. Its direction of propagation and water composition indicated to the JASIN group that it was formed by the overflow of WTOW across the ridge.

A synthesis of current meter observations from north of 57° N by Dickson et al.
(1986) revealed a maximum in eddy kinetic energy (EKE) levels at all depths (from current meters) in winter to spring that lagged the peak in wind stress. Subsequent surface drifter studies in the central and northern parts of the trough by Booth (1988) and Burrows et al. (1999) revealed clear evidence of mesoscale eddies. Booth (1988) identified three eddies: a large anti-cyclonic one south of this study area at 54° N, 15.5° W, with a radius of 60 km, a periodic timescale of up to 16 days and an orbital speed of up to 80 cm s⁻¹; and two much smaller cyclonic ones, with periods of 1 to 2 days and orbital speeds up to 35 cm s⁻¹, that rotated anti-cyclonically around the Anton Dohrn Seamount (ADS). He attributed the source of these eddies to instability of the slope current near the Porcupine Bank and to Taylor column dynamics over the seamount, respectively. More recently, Ullgren and White (2012) found 35 eddies over a 6 year period from 2001 in the southern part of the trough, between 50° and 56° N, using satellite altimetry and mid-water ARGOS floats. Cyclonic eddies tended to enter along the track of the NAC, and anti-cyclonic eddies were found along the path of the slope current and may have included Mediterranean Overflow Water. The eddies were typically slow moving, their cores had radii of ∼27 km, and the floats had an orbital speed of 20 cm s⁻¹. The energy sources of this motion are not certain, but EKE levels in the trough had a seasonal peak in spring, so they may have been wind forced.

More generally, there have been several large-scale studies of the circulation of the North Atlantic as a whole based on archived data sets. Satellite altimetry (Heywood et al., 1994; Volkov, 2005) indicates enhanced levels of EKE in the Rockall Trough (order 100 cm² s⁻²) that contrast with the quiescent Rockall-Hatton Plateau. Surface drifter tracks (Fratantoni, 2001; Jakobsen et al., 2003) across the North Atlantic reveal a similar picture, with the latter finding that boundary currents were enhanced by wind stress in winter, which in turn seemed to lead to enhanced instability and the appearance of increased mesoscale activity in spring. A study of intermediate-depth drifters (Argo floats and RAFOS drifters, for example) indicated EKE levels of order 20-40 cm² s⁻² at depths between 1500 and 1750 m in the southern part of the Rockall Trough (Lankhorst and Zenk, 2006).

To sum up, historical observations suggest that the Rockall Trough has a moderate level of EKE activity, which ranges from a maximum of about 100 cm² s⁻² at the surface to about 25 cm² s⁻² at 1500 m. There is evidence of a seasonal signal with a maximum in spring or early summer that may be related to instabilities formed in the boundary currents.

In this study we look in detail at the mesoscale current field in the central part of the trough, focussing on a 12 month period between mid-2009 and mid-2010, to provide a more complete picture than has been possible to date of the time-varying three-dimensional structure of mesoscale currents in the trough and their impact on the circulation and horizontal mixing of the trough. It will also explain why the mean temperature and salinity from the first glider mission appeared anomalously high (see Sherwin et al., 2012).
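For reference, the EKE values quoted above follow the standard definition EKE = ½(u'² + v'²), with u' and v' the velocity anomalies about the time mean at each grid point; the brief numpy sketch below illustrates this with synthetic arrays, not the study's data.

```python
# EKE from gridded velocity time series: anomalies about the time mean.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(5.0, 10.0, size=(520, 12, 15))   # (time, lat, lon), cm/s
v = rng.normal(2.0, 12.0, size=(520, 12, 15))

u_anom = u - u.mean(axis=0)                     # anomaly at each grid point
v_anom = v - v.mean(axis=0)
eke = 0.5 * (u_anom**2 + v_anom**2)             # cm^2 s^-2

print("time-mean EKE map:", eke.mean(axis=0).shape)
print("spatial-mean EKE series:", eke.mean(axis=(1, 2)).shape)
```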
Ship-borne CTD and LADCP

The 2009 Ellett Line section of RRS Discovery cruise D340 (Sherwin et al., 2009) was conducted between 16 and 20 June, and provides a full depth picture of the temperature, salinity and velocity fields (Figs. 2-4). Profiles were measured with a stainless steel Seabird 911 CTD package that was suspended inside a 24 bottle rosette of 20 L water bottles, below which was attached a downward looking 300 kHz lowered acoustic Doppler current profiler (LADCP). The CTD system was lowered at typically 1 m s⁻¹ and data were calibrated against water bottle samples. The LADCP data were processed using LDEO version IX.5 of the modified Visbeck routines (Thurnherr, 2010), which corrects velocity observations for the relative motions of the LADCPs to achieve a quoted accuracy of < 3 cm s⁻¹ when two LADCPs are used. D340 did not have an upward looking LADCP, so individual observations were less accurate than this, but realistic profiles were achieved with smoothing applied over 50 m. On some casts a titanium frame and simple CTD system without an LADCP were used, so the number of velocity profiles is smaller than the number of CTD profiles.

Satellite altimetry

Weekly syntheses of merged gridded satellite altimeter data were downloaded for the period 14 October 1992-7 August 2013 from the Aviso website (www.aviso.oceanobs.com), which provides processed data from all altimeter missions for near real time applications and offline studies. During this period the trough was covered by the JASON 1 and 2 satellites (cycle time 10 days, track separation 120 km) and the Envisat and ERS-2 satellites (35 days and 40 km), which together provided a reasonably dense coverage of sea surface observations. Daily and weekly averages of sea level anomaly (SLA) were added to the CNES-CLS09 estimate of the mean dynamic topography (MDT) to produce an absolute dynamic topography (ADT) on a nominally 1/3° Mercator grid (i.e. ADT = MDT + SLA). The data supplied by Aviso had a meridional grid spacing of 20.7 km and a zonal spacing that ranged from 20.9 km at 55.5° N to 19.6 km at 58.5° N. Instantaneous and mean surface geostrophic currents (eastward and northward, as u and v in cm s⁻¹ respectively) were derived by Aviso from the ADT and MDT using the geostrophic relation.

The MDT heights combine a mean sea surface for the period 1993-1999 with about 4 years of geoid observations by the Gravity Recovery and Climate Experiment (GRACE), a satellite observation programme run by NASA. The horizontal resolution of the raw GRACE geoid (200-300 km) has been enhanced using observations of in situ currents and hydrography between 1993 and 2008, along with concurrent satellite altimeter observations, to achieve a global resolution of 1/4° (Rio et al., 2011). Whilst this resolution would appear sufficient for the Rockall Trough, it will be shown below that the MDT does not resolve the narrow slope currents.

Although all velocity calculations have used the ADT, reservation about the accuracy of the MDT has led to the semantic difficulty of using an acronym that includes the word "absolute". Since the sea surface plots use an arbitrary mean level, the text tends to use the acronyms ADT and SLA interchangeably and the reader should not attach too much interpretation to their use.
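For readers unfamiliar with the calculation, the sketch below shows how surface geostrophic velocities can be derived from a gridded ADT field of the kind Aviso supplies. It is a minimal illustration, not Aviso's processing chain; the function name and the simple spherical-Earth grid spacing are our own assumptions.

```python
# Minimal sketch: surface geostrophic currents from a gridded ADT
# (ADT = MDT + SLA), using u = -(g/f) dADT/dy, v = (g/f) dADT/dx.
import numpy as np

G = 9.81           # gravitational acceleration (m s^-2)
OMEGA = 7.2921e-5  # Earth's rotation rate (rad s^-1)

def geostrophic_uv(adt, lat, lon):
    """adt: 2-D field (m) on a (lat, lon) grid in degrees; returns cm s^-1."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat))   # Coriolis parameter per row
    # Grid spacing in metres; zonal spacing shrinks with latitude.
    dy = 111.32e3 * np.gradient(lat)
    dx = 111.32e3 * np.cos(np.radians(lat))[:, None] * np.gradient(lon)[None, :]
    deta_dy = np.gradient(adt, axis=0) / dy[:, None]
    deta_dx = np.gradient(adt, axis=1) / dx
    u = -G / f[:, None] * deta_dy               # eastward component
    v = G / f[:, None] * deta_dx                # northward component
    return 100.0 * u, 100.0 * v                 # convert m s^-1 to cm s^-1
```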
Underwater glider

Continuous observations of temperature and salinity, along with the depth averaged "drift" current down to 1000 m, were made by an underwater glider operated by the Scottish Association for Marine Science in an exercise to evaluate a way to increase the monitoring of temperature and salinity along the Ellett Line. The glider was deployed on 12 October 2009 on the Scottish Shelf at 56.56° N, 7.48° W and made its way westward over the shelf edge to deep water by 18 October (dive 235, Table 1). In the subsequent 4.5 months it completed eight transits across the trough between the 500 m isobaths along an (approximately) WNW-ESE track, from 56.40° N, 9.05° W at the edge of the shelf to 57.22° N, 12.52° W on the eastern flank of Rockall (about 250 km apart, Fig. 5), with each round trip taking about 4 weeks. It was finally recovered on 9 March 2010 near the shelf edge following a mechanical failure.

The glider (SG156) was a battery operated long-range autonomous underwater vehicle constructed by Seaglider Fabrication, University of Washington (Eriksen et al., 2001). Its descending and ascending vertical velocities were typically 10 cm s⁻¹, so it took about 6 h to complete a full dive, during which time it travelled about 5.5 km against the ambient current at a horizontal speed of roughly 25 cm s⁻¹. Its principal oceanographic instrument was an unpumped version of the Seabird SBE41 conductivity-temperature (Cθ) system that was modified to minimise power consumption. Up to dive 476 (14 December) Cθ was measured at 10 s (or about 1 m) intervals down to 30 m, then every 30 s to 200 m and every 60 s to 1000 m. For the rest of the mission the sampling became every 5 s to 100 m, 10 s to 500 m and 60 s to 1000 m. Even though the vertical velocity is small, salinity spiking is a potential problem with the unpumped SBE41 when the glider travels through a pycnocline. Lag corrections are applied before salinity is computed from the raw data (e.g. Perry et al., 2008), and extensive tests were undertaken to assess the magnitude of the problem here (for example, by comparing ascending and descending temperature-salinity, or θS, plots). For the most part the up and down profiles were very similar, probably because sharp pycnoclines in the water column are absent during winter (blue profiles in Fig. 3). The October profiles (red in Fig. 3) suggest salinity spiking in the seasonal thermocline, but as the up and down profiles of θ and S were similar, the sharp changes in the θS gradient, which are also present in the D340 profile west of the ADS in June, are just as likely to have been due to water mass interleaving. The overall accuracy of the salinity observations is confirmed by the coincidence of glider and D340 θS profiles at depth. There were no signs of fouling on recovery, and pre- and post-mission CTD calibrations by Seabird Electronics indicated that there had been negligible drift in any of the sensors.
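As a quick check on the dive geometry quoted above, the back-of-envelope sketch below reproduces the roughly 6 h dive duration and 5-5.5 km horizontal range from the stated vertical and through-water speeds. The variable names are ours; this is illustrative arithmetic, not mission software.

```python
# Back-of-envelope check of the quoted Seaglider dive geometry.
W_VERT = 0.10        # vertical speed (m s^-1)
U_HORIZ = 0.25       # horizontal through-water speed (m s^-1)
MAX_DEPTH = 1000.0   # dive depth (m)

dive_time_s = 2.0 * MAX_DEPTH / W_VERT     # descent plus ascent: 20 000 s
dive_time_h = dive_time_s / 3600.0         # ~5.6 h, i.e. "about 6 h"
horiz_km = U_HORIZ * dive_time_s / 1000.0  # ~5.0-5.5 km per dive

print(f"dive duration ~{dive_time_h:.1f} h, horizontal range ~{horiz_km:.1f} km")
```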
The surface positions of the glider were accurately determined from GPS fixes. Dive averaged ambient current velocities (or "drift" velocities) were computed using an algorithm based on a theoretical model of the glider's hydrodynamic performance through the water, which determines the difference between the expected and measured surfacing positions at the end of each dive. However, this calculation is sensitive to the accuracy of the glider's internal compass, and in post-processing it was found that a significant difference existed in the mean drift velocity measured during the eastward and westward transits of the trough (about 7 and 10 cm s⁻¹ southward respectively). Theoretical tests of possible error scenarios demonstrated that such a difference can be explained by a non-linear error in the compass measurements (a similar error has been reported by Merckelbach et al., 2008, for a different glider system). As a result, it was not possible to determine basin-wide transport. Other theoretical tests, however, indicated that the faster local eddy currents were reasonably well measured.

Data analysis

The spatial mean EKE in the central Rockall Trough between latitudes 56° and 58° N and longitudes 9° and 13° W was computed from the ADT currents as

EKE = ½ [var(u) + var(v)],

where var is the variance at each grid point. The error in the mean of N observations of EKE (err) was computed as

err = SD / √N,

where SD is the standard deviation of the observations. For Table 1 and Fig. 6, glider data from a specific transit of the trough were averaged into 10′ longitude (about 10 km) wide meridional bins (see Fig. 5). The mean positions of the dives within a bin were then computed, and the values of other data were determined by interpolating horizontally to the centre positions of the bins. The mean transit speeds and their standard deviations were derived by averaging individual speeds from the dives listed in Table 1. Density anomalies, shown as σt in Fig. 6, from each downward and upward dive were averaged into 5 m vertical bins and then smoothed in the vertical with a 25 m half-width Hamming filter before they were averaged into 20′ longitudinal bins for presentation.

The glider track deviated significantly from the intended transit on two occasions, and data collected between 16 November and 20 December (Transit 3, when it had to be piloted out of a strong opposing current and communication was then lost for a while) and between 28 January and 4 February (Transit 6, when it was directed to investigate a potential eddy) have been omitted from the analysis (see Table 1). In making the bin averages no allowance was made for these gaps, even though they were quite extensive. Although the gap in Transit 3 lasted about 5 weeks, both density sections appear sufficiently continuous (Fig. 6) that the discontinuity in the data is unlikely to have compromised the general findings.

It is possible to compare the different Aviso and glider data sets directly by deriving equivalent time series. This was achieved by interpolating the 3-D (in space and time) fields of ADT and surface currents to match the time and position of individual glider observations. This time series was then processed and averaged in the same way as the glider data.
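The box-averaged EKE statistic and its standard error defined above are straightforward to reproduce. The sketch below is our own illustration (not the authors' code), assuming gridded (time, lat, lon) surface current anomalies in cm s⁻¹ with longitudes in °E (so the box is −13° to −9°).

```python
# Sketch: spatial mean EKE over the 56-58 N, 9-13 W box, with err = SD/sqrt(N).
import numpy as np

def box_mean_eke(u, v, lat, lon):
    """u, v: (time, lat, lon) geostrophic current anomalies in cm s^-1."""
    box = (lat[:, None] >= 56) & (lat[:, None] <= 58) \
        & (lon[None, :] >= -13) & (lon[None, :] <= -9)
    # EKE = 0.5 * (var(u) + var(v)) at each grid point, from the time axis.
    eke = 0.5 * (np.var(u, axis=0) + np.var(v, axis=0))   # cm^2 s^-2
    vals = eke[box]
    mean = vals.mean()
    err = vals.std(ddof=1) / np.sqrt(vals.size)           # error in the mean
    return mean, err
```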
Compatibility issues between glider and altimeter observations

Mission 1 was conceived before gliders had become proven platforms in the North Atlantic, and its success during a winter deployment was a stimulus for further uptake of gliders in the UK. There are particular challenges in analysing glider data, because gliders are slow moving vehicles and in a strong mesoscale current field their frame of reference is constantly moving. At first sight it would appear that data from a vehicle that takes about 2 weeks to cross the Rockall Trough would be incompatible with those from a satellite that every 2 weeks makes a single instantaneous measurement of SLA along a line that will usually be at least 10 km from the glider. The problem is further compounded by the fact that these length scales and timescales are comparable to those of the mesoscale currents that dominate the central Rockall Trough. Hátún et al. (2007) combined glider and altimeter data to describe the structure of an anti-cyclonic eddy in the Labrador Sea, and Martin et al. (2009) used altimetry and repeated glider passes to analyse a long lasting stationary eddy in the Gulf of Alaska. In Mission 1 the glider tended to be carried in the faster currents that flowed around cyclones rather than through them (see Fig. 7 and the video in the Supplement), so a comparison of the structure of different eddies was not undertaken. Instead we have chosen to look at the general nature of the mesoscale eddy field. (Incidentally, the few cloud-free satellite observations of the surface showed minimal horizontal variation in winter temperatures and did not reveal the distribution of the deep eddy field.)

4 Background observations

The evolution of temperature and salinity

During June 2009 the seasonal thermocline was well established in the upper 100 m of the water column, and there was clear evidence in the salinity signal of the influence of the slope current, in the upper 500 m on the eastern side, spreading across to the centre of the trough (Fig. 2). The deep thermocline (at about 7.5° C), which was located at about 800 m to the west of the ADS, descended to about 1000 m on its eastern side (Figs. 2 and 3). The deep part of the water column was marked by gradual decreases in temperature and salinity with depth.

Seasonal warming and freshening of the surface layers on both sides of the trough persisted until October, but after that stratification weakened, so that by February the upper 500 m was uniformly mixed. By and large the profiles of θ and S in the upper 1000 m on the eastern side of the trough were well behaved and almost equally mixed between NAW in the centre of the trough and the more saline ENAW. To the west of the seamount, away from the influence of the shelf edge current, the changes in the profile appeared more dynamic, with an apparent interleaving between ENAW and the cooler and fresher NAW. Between October and February, and below about 200 m, this side of the trough also experienced remarkable increases in both θ and S as NAW appeared to be forced down by as much as 800 m. A reason for these increases is discussed later.

The velocity field during Discovery cruise D340
The directly observed full-depth LADCP sections (Fig. 4) dispel any notion that the currents in the waters of the central trough are either slow or vertically uniform. There was a fast southward flowing current (up to 45 cm s⁻¹) along the slope of the Rockall Bank that was much stronger than the Aviso equivalent. There is a suggestion of an extensive feature that filled the space west of the ADS, which manifested itself in a pronounced surface depression of the SLA running down the western side of the trough. The sea surface had a particularly steep zonal gradient on the western side of the ADS (Fig. 8), which complemented the westward uplift of the deep pycnocline centred on 900 m (Figs. 2 and 3) and the strong in situ meridional currents (Fig. 4). The directly observed current of more than 25 cm s⁻¹ that flowed northward along the upper edge of the Malin Shelf appears, surprisingly, as a weak southward flowing surface current in the Aviso data. The poor comparisons at the boundaries give a strong impression that the meridional ADT surface currents are badly represented at the edges of the Rockall Trough. This point will be taken further with the glider observations.

Temporal variations in eddy kinetic energy

The monthly means of the weekly averaged surface EKE in the central Rockall Trough (see box in Fig. 1) from Aviso gridded altimetry have increased at a rate of 1.1 cm² s⁻² per year since 1992 (Fig. 9b), with an inter-annual linear correlation coefficient of R = 0.65. The reason for this long upward trend is not clear, and a brief examination of the correlation with two candidate explanations was made. Variations in the North Atlantic Oscillation index (e.g. Hurrell, 1995) for the years 1994-2013, acting as a surrogate for the strength of the wind stress, had a very weak negative correlation with EKE (R = −0.2), with a high probability that this correlation arose by chance (p = 0.5). Increases in the mean temperature and salinity in the upper 800 m from Ellett Line data between 1975 and 2010 had a stronger correlation (R = 0.49) with a low probability of chance (p = 0.05). High temperature and salinity are associated with a weak SPG (Hátún et al., 2005; Johnson et al., 2013), so it is possible that the increase in EKE is related to the westward retreat of the SPG in the northern Atlantic.

During the period May 2009 to April 2010, which covers the in situ observations described here, mean surface EKE levels (66 cm² s⁻²) were about 11 cm² s⁻² larger than the long-term average. EKE was relatively high in June 2009 (about 84 cm² s⁻²), although the highest level for the 12 months was over 100 cm² s⁻² in the middle of November 2009 (Fig. 9a). This latter peak occurred after a steady increase from about 55 cm² s⁻² at the start of the mission, and was much more than a standard deviation away from the mean levels for November and December (Fig. 9c). It was followed by a steady decline to about 40 cm² s⁻² by the middle of January. Thus, in the 12 months from May 2009 to April 2010 the seasonal variation in EKE levels was anomalous compared with the long-term average, in which the peak occurs in May (67 cm² s⁻²) and the minimum in October (45 cm² s⁻², Fig. 9c).
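Trend and correlation statistics of this kind are easy to reproduce. The sketch below is illustrative only: the EKE and NAO series are synthetic placeholders standing in for the Aviso box averages and the NAO record, and the function calls show the form of the linregress/pearsonr analysis described above.

```python
# Sketch: linear trend of annual-mean EKE and correlation with a candidate
# driver. Placeholder data only; real inputs come from the Aviso grids.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1993, 2014)
eke_annual = 55 + 1.1 * (years - 1993) + rng.normal(0, 5, years.size)
nao_index = rng.normal(0, 1, years.size)

slope, intercept, r, p, se = stats.linregress(years, eke_annual)
print(f"trend {slope:.2f} cm^2 s^-2 per year, R = {r:.2f}")

r_nao, p_nao = stats.pearsonr(nao_index, eke_annual)
print(f"NAO vs EKE: R = {r_nao:.2f}, p = {p_nao:.2f}")
```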
The evolution of SLA and surface currents during the glider mission

As the glider tracks and SLA maps show (Figs. 5 and 7), the surface waters of the central Rockall Trough were continually disturbed by slow moving eddies and other mesoscale motions. Early in Mission 1 (on 20 October) an elliptical cell with a cyclonic circulation and a NE-SW major axis diameter of about 200 km occupied the deep water immediately SW of the ADS (A_C, Fig. 7a), whilst immediately to the south of it lay an anti-cyclonic circulation of similar size (B_A). A strong east to north-eastward current (> 20 cm s⁻¹) marked the boundary between them. During the first part of November, A_C moved NE onto the seamount, where it intensified and seemed to become anchored until the middle of December, with a circular diameter of about 100 km and current speeds in excess of 20 cm s⁻¹ (Fig. 7b and c). Meanwhile, B_A drifted eastward to the north of the Hebrides Terrace Seamount, and a cyclonic cell (or eddy) started to drift into the picture from the south-western corner (C_C, Fig. 7b).

By the middle part of November a westward flowing current with maximum surface speeds in excess of 30 cm s⁻¹ had formed in the southern part of the region (D). From then on the pattern of alternating cyclonic and anti-cyclonic eddies persisted throughout December (Fig. 7c), whilst they slowly lost energy, until by 9 January average EKE in the region had weakened to < 50 cm² s⁻² (Figs. 7d and 9a). Nevertheless, these cells were still sufficiently intense to drive a pronounced anti-cyclonic circulation westward across the trough between the two seamounts. From then on mesoscale activity increased a little, so that by 5 February an anti-cyclonic circulation (E_A), which may have evolved from B_A, had become trapped on or close to the ADS with current speeds only a little less than the earlier A_C. At the same time the broad cyclonic circulation, C_C, had become established in the southern part of the region (Fig. 7e), and a new cyclonic eddy (F_C) had appeared to the north-west of the ADS. Finally, by the start of March, E_A had weakened a little above the ADS, whilst C_C had disappeared and a new cyclonic eddy, G_C, had appeared close to the Malin Shelf edge between the ADS and the Hebrides Terrace Seamount.

Overall, the mesoscale motions in the deep water of the Rockall Trough seemed to be distributed in a fairly arbitrary pattern, although the circulations were arranged like gears in a pattern of alternating anti-cyclonic and cyclonic cells. It is intriguing that both anti-cyclonic and cyclonic cells were able to occupy the top of the ADS, and it appears that the precise sense of circulation was determined more by the regional arrangement of the cells than by a local dynamic balance formed by the seamount itself. The vertical extent of these surface cells is described below.

A word of caution is necessary here. Later analysis that questions the validity of the background MDT in the Rockall Trough will throw doubt on the intensity of the anti-cyclonic circulations between the ADS and the Hebrides Terrace Seamount.

Sub-surface glider observations, winter 2009/2010

Satellite altimeter observations of ADT and surface velocity give valuable information about the spatial and temporal scales of mesoscale variability in the Rockall Trough, but they provide a poor representation at boundaries and cannot reveal detail of the structure below the surface. Ship-borne sections such as those of D340 help to address these omissions, but are of necessity rare. By contrast, the underwater glider provides measurements of the density structure and velocity field that, through the use of repeated sections, help to construct a more complete picture of the variability below the surface.
Glider profiles and drift currents

It is quite difficult to navigate a glider precisely in a field of apparently random mesoscale currents that have a similar speed to its forward velocity. During Mission 1 the glider encountered ambient current speeds of over 24 cm s⁻¹ in 25 % of the dives, and at such times it might be stopped dead, forced sideways or backwards, or race forward (hence the uneven spacing of the dive positions in Figs. 5 and 6). As a precaution (since this was its first mission) the glider was kept outside the 500 m isobath, so it did not measure the strength of the slope currents in the shallower water on either side of the trough. The original plan to pass across the top of the ADS along the Ellett Line route was abandoned after Transit 4, because the opposing currents near the seamount were too strong in the early part of the mission.

Changes in temperature and salinity profiles across the trough

At the beginning of Mission 1 the surface temperature and salinity at the shelf edge were about 12.6° C and 35.45 respectively (Fig. 3), fractionally warmer and fresher than observed by D340 on 19 June (12.3° C and 35.47). The temperature and salinity sections of the upper 200 m from Transit 1 were generally similar to those from D340. Over winter, between mid-October 2009 and the end of February 2010, surface stratification was gradually eroded and deepened, so that by March the upper 500 m was almost isothermal and isohaline (Figs. 3 and 6). By contrast, the θS profiles below about 600 m (9.5° C and 34) were very similar apart from vertical displacement due to the mesoscale motions, and are typical of Ellett Line profiles, which contain WTOW at intermediate depths (Johnson et al., 2010). In many transits the near-surface isopycnals were roughly horizontal, whereas those deeper in the water column (below about 600 m, or σt ∼ 27.3 kg m⁻³) had large and uneven variations in depth (with undulations up to 200 m high and 50 km wide) across the trough. Over the course of the mission many of the changes in the track of the glider visually correlated with the ambient drift velocities and the undulating depth of the deep isopycnals (Fig. 6). (These correlations were observed to steadily evolve in real time as the plots from successive dives were updated on the mission console.) There also appears to be an inverse correlation between the depth of the SLA and the height of the deep isopycnals along the glider track. Mesoscale gradients of the deeper (rather than shallower) isopycnals of the trough were associated with depth mean current speeds, measured by the glider in the upper 1000 m, of typically 10 to 20 cm s⁻¹. A good example of this association occurred in October during Transit 1, when a doming of the deep isopycnals between 10° and 12.5° W and about 57° N coincided with a trough in the SLA and drove a strong south-westward current (Figs. 6 and 7).

Interaction between glider track and mesoscale currents
By the time the glider turned eastward from the Rockall Bank on 31 October (Transit 1, Fig. 6) the large cyclonic circulation A_C had settled on the western side of the trough (Fig. 7a), which initially forced it southward along the western side of the circulation (west of 12.5° W in Transit 2). The glider was then carried eastward along the interface between the southern side of A_C and the northern side of the anti-cyclone B_A until it was picked up by the eastern side of this circulation and deflected south-eastward. The isopycnals at 1000 m in B_A were by then significantly deeper than when A_C occupied the eastern side of the trough (see also Fig. 10).

Until the end of December (Transit 3, Fig. 6) the sub-surface isopycnal levels tended to mirror the sea surface undulations observed by the altimeters. However, from the end of Transit 2 (12 November) a deep water doming of the isopycnals appeared between about 9.5° and 10.5° W that seems to have been associated with the presence of lighter water in the upper 300 m of the slope current, and is not readily reflected in the SLA. At this time the Aviso current speeds along the transits were at their maximum (and greater than those observed by the glider), but from then on until the end of the mission glider drift speeds (averaged over 1000 m) were always about 2 times greater than the surface altimeter speeds (Table 1).

In Transit 4 an uplift of deep isopycnals at about 9.8° W drove a southward current between 10° and 10.5° W in response to the presence of a cold core that was most pronounced below 800 m. Similar currents were found on the northern side of the Hebrides Terrace Seamount, at about 56.5° N, 10° W, in Transits 7 and 8 (see Figs. 6b and 7f). The other transits also exhibited examples of doming of the deep water isopycnals driving glider drift currents that extended from the surface at least to 1000 m.

6 Direct comparison of the glider and altimeter current measurements

Comparison of individual observations

The average current speed along individual tracks measured by the glider drift was often much greater than that observed by Aviso, particularly once the seasonal stratification had been eroded by the start of 2010 (Table 1); during the June 2009 cruise the LADCP-measured currents, depth averaged to 1000 m, were nearly twice as large as the equivalent Aviso surface currents (see also Table 1). Over the glider mission as a whole the eastward components of velocity were fairly closely correlated (0.69, Table 2 and Fig. 11a), whilst the correlation between the northward components was poorer (0.41, Fig. 11b). One explanation may be the smoothing that is introduced by gridding the altimetry data, although the satellite altimeter coverage of the Rockall Trough was quite dense, and smoothing would not explain the difference in correlation between the northward and eastward components of velocity. So these results are a little surprising and merit further investigation, particularly as there is some doubt about the accuracy of the geoid and MDT across slope regions (see Fig. 12a).
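The comparison procedure described at the end of the data analysis section and used here — interpolating the gridded Aviso fields in space and time to the glider dives, then correlating component by component — can be sketched as below. This is our own illustration, assuming regularly gridded inputs; the function name is ours.

```python
# Sketch: sample gridded Aviso currents at glider dive times/positions and
# correlate with the glider drift velocities.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.stats import pearsonr

def altimeter_at_dives(t_grid, lat_grid, lon_grid, field, dive_t, dive_lat, dive_lon):
    """field: (time, lat, lon) Aviso current component; dive_*: 1-D arrays."""
    interp = RegularGridInterpolator((t_grid, lat_grid, lon_grid), field,
                                     bounds_error=False, fill_value=np.nan)
    pts = np.column_stack([dive_t, dive_lat, dive_lon])
    return interp(pts)

# With u_aviso/v_aviso sampled at the dives and u_glider/v_glider the drift:
#   r_u, _ = pearsonr(u_aviso, u_glider)   # ~0.69 for the eastward component
#   r_v, _ = pearsonr(v_aviso, v_glider)   # ~0.41 northward, before correction
```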
Correcting for errors in the MDT velocity field

The extent to which the steady MDT velocity field (Fig. 12a) deviates from the true background velocity field in the Rockall Trough has been determined by averaging all the simultaneous glider and altimeter currents along all tracks into 10′ of longitude bins along a mean track, including those in Transits 3 and 6 that were previously omitted, but ignoring velocities > 30 cm s⁻¹ (see Fig. 5). The justification is outlined in Appendix A. It is important to note that the eastward (i.e. across-trough) components of the temporal mean velocities from the glider (G_M1) and Aviso (A_M1) are strongly correlated (R = 0.92, Table 2), with a constant of proportionality (α, Appendix A) that looks close to 1 in Fig. 12c. (The results of the correlation analysis in Table 2 suggest that α ∼ 1.3, but tests showed that the precise value of α is not critical.) On this basis it is also assumed that the true α equals 1 for the northward currents, and that the striking difference between the glider and Aviso measurements in this direction (Fig. 12b) can be attributed to errors in the MDT current field along the track (A_ε). This error was calculated from Eq. (A5), with U set to 0 for convenience, and is shown by the black lines and vectors in Fig. 12. The true background current (U) can be estimated by mentally adding the black arrows to the blue MDT field arrows along the track in Fig. 12a. The two components of A_ε were interpolated along the mean glider track and used to correct individual Aviso measurements (making no allowance for any variation of A_ε with latitude). This improved the correlation between the corrected northward Aviso and glider currents from 0.41 to 0.73 (Table 2). The equivalent correction for the eastward currents did not improve the correlation (Fig. 10a and c), which suggests that most of the improvement in the northward component is due to the domination of the slope current error (see Fig. 12). Thus the MDT fails to reproduce the slope currents flowing northward along the European edge and southward along the eastern flank of Rockall. Although the anomalous MDT current field to the south of the ADS appears to be robust (since the standard error in the estimates of the mean background current is small, Fig. 11), it can be explained when combined with the anomaly in the slope current. Taken together, these anomalies suggest that the extensive anti-cyclonic cell northward of and over the Hebrides Terrace Seamount is also an artefact of the MDT.
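Since Eq. (A5) and Appendix A are not reproduced here, the sketch below is a heavily hedged reading of the correction step: bin the simultaneous Aviso (A) and glider (G) currents into 10′ longitude bins along the mean track, and estimate the MDT error in each bin as A_ε = mean(A) − α·mean(G) − U, with α ∼ 1 and U set to zero as in the text. All names are ours.

```python
# Sketch (our reading of the method, not Eq. A5 itself): binned estimate of
# the along-track MDT error from simultaneous Aviso and glider currents.
import numpy as np

def mdt_error_by_bin(lon, v_aviso, v_glider, alpha=1.0, u_bg=0.0,
                     bin_width_deg=10.0 / 60.0):       # 10' of longitude
    ok = (np.abs(v_aviso) <= 30) & (np.abs(v_glider) <= 30)  # drop > 30 cm/s
    lon, v_aviso, v_glider = lon[ok], v_aviso[ok], v_glider[ok]
    edges = np.arange(lon.min(), lon.max() + bin_width_deg, bin_width_deg)
    idx = np.digitize(lon, edges)
    a_eps = [np.nanmean(v_aviso[idx == k]) - alpha * np.nanmean(v_glider[idx == k]) - u_bg
             for k in np.unique(idx)]
    return np.asarray(a_eps)
```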
An order of magnitude estimate of U is needed to complete the analysis. The mean northward transport through the trough is probably between 0.7 and 3.7 Sv (Holliday et al., 2000) which, with a cross-section 250 km wide and 1000 m deep, implies that U is between 0.3 and 1.5 cm s⁻¹. A dashed line has been added to Fig. 12b at the mean of these speeds to show a y coordinate shift of −0.9 cm s⁻¹. With this adjustment it appears that from its western edge at about 9.7° W the mean European slope current builds in strength to about 13.5 cm s⁻¹ above the 500 m isobath at 9.1° W, which is comparable to the value quoted in Sect. 2. Westward of this longitude, to a meridian southward of the western edge of the ADS at 11.4° W, the mean flow is S or SSW, with a maximum of about 6 cm s⁻¹ at 10.5° W. West again, the mean current flows northward around the western side of the ADS with a mean speed of 3-4 cm s⁻¹. Finally, at the very western end of the track, on the eastern flank of Rockall, the mean equatorward slope current is of order 5 cm s⁻¹. The general agreement of this pattern with the schematic mean circulation described in Ellett et al. (1986) provides confidence in this analysis. We note that this estimate of a mean northward slope current at 56.75° N was derived during a prolonged period in which the current was weak and temporarily southward, because the main current had been deflected to the west.

Discussion

The central Rockall Trough is populated by mesoscale eddies or cells that appear trapped in the deep water, where they push each other around randomly on timescales of months. They drive currents that extend to the surface, but because surface temperatures in winter are uniform, the impact of these eddies is masked from satellite temperature sensors. During the glider mission they had transient currents that, integrated over the top 1000 m of the water column, had a mean speed of about 15 cm s⁻¹ (Table 1). It is clear from the glider σt profiles and sections (Fig. 6), and from the LADCP measurements of D340 (Fig. 4), that they also extend well below 1000 m, the maximum depth range of the glider.

The cyclonic eddies seem to be too deep to be formed from the local wind stress curl or the slope current (which extends to only about 400 m) and must have originated elsewhere, either as part of a northern extension of the eddy field at the mouth of the trough described by Ullgren and White (2012), or from the north (e.g. Ellett et al., 1983). The 1000 m profile dive limit, and the fact that the glider did not get to the centre of a cyclonic eddy, render this discussion somewhat speculative. Profiles of θS at the edges of cyclonic eddies A_C and F_C (Fig. 7), near the ADS, clearly have WTOW to 1000 m, and these eddies may have been formed from intermittent bursts of cold water overflow across the Wyville Thomson Ridge (Ellett et al., 1983; Johnson et al., 2010; Sherwin et al., 2008).

The glider did not pass close to the cyclonic eddies south of the ADS, and it is not known where they originated, although it is quite likely that they were derived from instabilities of the North Atlantic Current front near the mouth of the trough. The warm core anti-cyclonic eddy B_A comprises ENAW (Harvey, 1982), rather than MOW, to at least 800 m, and may have been spun off from the northward flowing slope current somewhere south of the region (see Ullgren and White, 2010).

The amount and spatial extent of mesoscale variability in the Rockall Trough seem to have an impact on the stability of the northward propagating slope current along the western edge of the European shelf, which has a mean speed of comparable magnitude (∼ 15 cm s⁻¹). In some of the sections reported by Holliday et al. (2000) slope current water appears to be spread across the trough, and our observations seem to explain what is going on when that happens.
The vertical profile of water west of the ADS, which at 500 m was much cooler and fresher than that to the east in June 2009, had become much warmer and saltier and had adopted an eastern-looking profile by February 2010 (Fig. 3). During the intervening period, starting around the beginning of December and continuing through to the end of February, there was a sustained period of north-westward geostrophic flow extending from the Hebrides Terrace Seamount at 10.5° W, 56.5° N towards the Rockall Bank (Fig. 1 and D in Fig. 7). From the ADT plots the speed of this current was typically 20 cm s⁻¹, so it would take of order 1 week to cross the trough. Using a width for this current from Fig. 7d of 25 km, and an assumed depth of 1000 m, gives its rate of transport as 5 Sv (0.2 m s⁻¹ × 2.5 × 10⁴ m × 10³ m = 5 × 10⁶ m³ s⁻¹). A current of this magnitude, which is much larger than the ambient currents in the central trough, sustained over a period of 3 months, would certainly be big enough to explain the apparent excursion of slope current water away from the European side onto the Rockall side. This is taken as evidence that the mesoscale activity in the trough can lead to substantial horizontal exchange in the upper 1000 m. Variable currents of this nature contribute to the inter-annual variability reported for the upper 800 m observations of temperature and salinity along the Ellett Line (Holliday et al., 2000; Sherwin et al., 2012), and also have sufficient magnitude to reverse the slope current for several months.

It is noted from the altimeter observations that there was a tendency for the mesoscale structures to become stuck in the vicinity of the local seamounts and, in particular, both anti-cyclonic (in November) and cyclonic (in February) eddies appeared to become trapped on the ADS. This is surprising given that the full vertical extent of these eddies is much deeper than the top of the seamount (500 m), and was not anticipated by Booth (1988), who suggested that Taylor columns form with closed streamlines over the ADS. Whilst both senses of circulation satisfy this requirement, there is no evidence that the seamount itself generated these eddies, or that cyclonic circulation is preferred over anti-cyclonic.

It remains to add some final comments about the use of glider data in this investigation. It would not have been possible to undertake such a detailed description and analysis of the mesoscale variability without the repeated measurements of the in situ conditions in the Rockall Trough by the glider. Its Cθ observations provided the information about the depth of the eddies, and the drift data, despite being compromised by a suspect compass, gave irrefutable evidence that the surface currents inferred from satellite altimetry should not be assumed to decrease with depth when the water column is mixed.

But this warning may only apply to currents with relatively short timescales, because over longer timescales the two measurement systems seemed to observe currents of similar velocities. The apparent success of combining glider and gridded altimeter observations averaged over a 6 month period to determine the difference in the background mean currents suggests that glider observations may provide a practical methodology for improving the definition of the geoid in other regions where the existing MDT is not well defined.

The synergy derived from the combined use of glider and altimeter observations also provides valuable information about the state of a transient velocity field that can help glider pilots operating in the North Atlantic. Very often a glider will encounter an unexpected current, and reference to the contemporary gridded map of SLA or ADT can provide the pilot with valuable information for charting a course to avoid it, or to make use of it.
Conclusions

The principal findings of this investigation are that

1. much of the surface and deep mesoscale current field in the central Rockall Trough is driven by deep circulations that appear to be associated with eddies that seem to have migrated into the region from both north and south;

2. surface currents appear to be much stronger during the autumnal period of seasonal stratification than in late winter, when the upper trough is mixed to a depth of 600 m;

3. in late 2009, during a period of unusually large EKE activity, a deflection of the slope current, caused by a chance arrangement of some deep mesoscale features, resulted in a large quantity of slope water being advected to, and thereby warming, the upper 500 m of the western side of the trough; and

4. the background MDT field of the Aviso CNES-CLS09 data set fails to pick up the mean transport in the narrow slope currents on either side of the Rockall Trough, and may also introduce a fictitious mean anti-cyclonic circulation north of the Hebrides Terrace Seamount.

Figure 1. Map of the Rockall Trough with bathymetry (m) and the major currents. WTR: Wyville Thomson Ridge; RB: Rosemary Bank; HTS: Hebrides Terrace Seamount. Other acronyms are defined in the text. D340 stations are shown as a series of black dots. The red "tramway" is the alternative route of ENAW identified in this paper. The black box outlines the area that EKE is averaged over in Fig. 9.

Figure 2. (a) Temperature and (b) salinity sections through the Anton Dohrn Seamount in mid-June 2009. Water masses are indicated, with those in black being much diluted at 56° N. Kriging interpolation was used to map contours to the seabed. The approximate bathymetry is defined by the depth of each cast.

Figure 3. Temperature and salinity profiles and θS plots either side of the Anton Dohrn Seamount from cruise D340 (full depth, black) and Mission 1 (to 1000 m, red and blue). Glider data are averages of the up and down casts. For positions, see Fig. 8.

Figure 4. Full-depth LADCP sections of currents in mid-June 2009 along the Ellett Line. The upper panel shows zonal currents across the trough and the lower panel shows meridional currents. Tidal currents may be aliased in water less than 150 m deep. Kriging interpolation was used to map contours to the seabed. The approximate bathymetry is defined by the depth of each cast.

Figure 5. Glider track in the Rockall Trough from 18 October 2009 (S) to 5 March 2010 (E). Red dots are the dive positions. The black track connects those points used for averaging, with the blue lines connecting the gaps in Transits 3 and 6 (see Table 1 and Figs. 6 and 7). Vertical red lines delimit the 20′ (plain) and 10′ (dashed) zonal averaging bins. Isobaths are in m.

Figure 6. LH column: density anomaly sections (kg m⁻³) to 1000 m along the dotted track shown in the accompanying map. Also, the SLA (plain red line) is plotted as cm about 500 m (dotted red line), with the dashed red lines showing heights of ±10 cm. RH column: glider drift velocity vectors. All dive data (dotted red line) were averaged into 20′ bins before plotting. The labels "T1"-"T4" over the ADS in the RH column identify Transits 1 to 4; see also Table 1.

Figure 7. Snapshots of SLA and associated surface velocities (scaled by the 25 cm s⁻¹ arrow in the top left-hand corner) sampled every 27 days during Mission 1. (a) 20 October 2009; (b) 16 November 2009; (c) 13 December 2009; (d) 9 January 2010; (e) 5 February 2010; (f) 4 March 2010. Red lines are the track of the glider for 9 days before and after each snapshot (where appropriate), with the head indicating the end position. The black dots show the position of dives used to investigate the θS profiles of individual eddies (see Fig. 10). Each panel shows the anomaly about the area monthly mean to eliminate the colour shift due to the seasonal change in mean steric height. Isobaths are in m.
Figure 8. SLA and associated surface velocities (scaled by the 50 cm s⁻¹ arrow near the bottom of the left side) on 18 June 2009. In red is the track of D340, with the positions of CTD profiles shown as "+". The dots are the positions of the profiles either side of the ADS in Fig. 3, shown as black (June 2009), red (October 2009) and blue (February 2010). Isobaths are in m.

Figure 10. (a) Temperature, (b) salinity and (c) density anomaly profiles from the edge of eddies B_A, A_C and F_C (see Fig. 7). (d) Equivalent θS profiles.

Figure 11. Scatter plot of glider vs. altimeter speeds; (a) east, (b) north. Correlation lines (cm s⁻¹) are fitted to Aviso (black) and to the glider (red), and correspond to values in Table 2. Altimeter values have been interpolated in time and space to coincide with the glider observations.

Figure 12. (a) MDT currents from Aviso (thin blue arrows) and mean absolute current along the glider track derived from (A5) (thick black arrows). (b) Northward and (c) eastward currents from simultaneous Aviso (red) and glider (blue) observations averaged over the whole mission into 10′ longitude bins, with the dashed lines being the standard error in the mean. The black lines are the difference between the Aviso and glider currents, displayed as vectors in (a). The lower horizontal line in (b) is an estimate of the offset of the y axis origin to account for the mean current through the trough.

Table 1. Summary of the dives used in most of the calculations and transit plots described below and (bottom row) equivalent speeds from the Ellett Line section during D340. Drift and standard deviation speeds are the scalar mean values per transit. (W) and (E) indicate the direction of travel. (a) The gaps in Transits 3 and 6 are explained in the text; (b) mean current speed in the top 50 m (other D340 speeds are averaged over 1000 m).

Table 2. Values of the constants C_v, m_v, C_u and m_u in the linear relationships between the glider drift (u_G, v_G) and Aviso surface (u_A, v_A) velocities, along with the correlation coefficients, R_u and R_v, and the number of observations, n. The fits to Aviso data (standard font) and glider data (bold font) are shown. The best fit will be somewhere between the two. There is a marked improvement in the correlation of the individual northward currents once A_ε is applied. Observations were smoothed over about 1 day with a four-point Hamming window and subsampled. The meridional averages are derived from the red and blue curves in Fig. 12b and c.
Career Decision-making Difficulties among Secondary School Students in Nigeria

Career decision making is a process that secondary school students must undergo. Many students find it difficult because of the obstacles they may encounter. The problem of this study, therefore, is what these difficulties are and whether they differ on the basis of gender. Two research questions and one hypothesis guided the study. The sample size was 341 students, made up of 161 males and 180 females. The instrument for data collection was the Career Decision-making Difficulties Questionnaire (CDDQ), developed by Gati et al (1996). The reliability of the instrument was ascertained with the Cronbach alpha reliability coefficient, and a coefficient of 0.90 was obtained for the whole instrument. Data were analysed using mean, standard deviation and the independent samples t-test. The results revealed that secondary school students are confronted with career decision-making difficulties in nine of the ten levels used for the study. Moreover, gender was not significant except for one of the ten distinct levels. The conclusion was drawn that students are faced with career decision-making difficulties and that there were no gender differences in all the difficulty levels.

Introduction

A career decision is one of the crucial decisions that students have to make as they grow up (Bimrose and Mulrey, 2015; Gati and Tal, 2008). It entails choosing from different occupations, training institutions and jobs, and this poses many problems and difficulties for students. When these difficulties are not properly handled, they might lead students to take inappropriate decisions. To assist students in making desirable decisions, it is expedient to fully comprehend the complex process of career decision making. Hirschi (2018) asserted that in this era of the industrial revolution, students' career decision making has been affected by the information and technology revolution. A career decision affects all facets of the individual's life; hence, difficulties in its way must be addressed to enable students to make appropriate decisions. It certainly affects their well-being (Creed, Prideaux, and Patto, 2005), standard of living (Sabates, Gutmon & Schaon, 2017), those with whom they will work, their lifestyle, and their job satisfaction (Amir and Gati, 2006). The complexities and barriers that arise in career decision making can be influenced by many factors (Gati, 1986; Krieshok, Black and Mckay, 2009; Savemann, 2005), namely the career options from which a selection is to be made, the issues to be examined, doubts about themselves, the compromises to be made, and social barriers. In light of these, some students make their career decisions with ease while others have a lot of difficulty contending with the process. Identifying such difficulties is a crucial step in assisting students. Numerous studies show that students all over the world struggle to make decisions about their future careers (Amir and Gati, 2006; Bacanli, 2008; Bacanli, 2012; Wierik, Beishuizenandts, 2014; Di Fabio, et al., 2015; Guan, et al., 2015; Mau, 2004). Career counselling aims to assist students to make good decisions. It therefore becomes pertinent that school counsellors know the difficulties that students might face and how to overcome them. Apart from the difficulties, gender is another variable considered in this study. The study ascertained whether there are gender differences in the career decision difficulties faced by secondary school students.
The results are expected to help school counsellors know what counselling strategies to use and the areas in which students need help. Studies abound on gender and career difficulties, but none has used the career decision-making difficulties scale. Some authors believe that there are no gender differences, while others state that there are. For instance, Murniarti and Siahaan (2019) discovered that male students were confronted with more problems associated with career decision making, while Tagay (2015) revealed that males' level of difficulty was lower than that of females in all grades. The findings of Bacanli (2016), Gati and Saka (2001), and Ginevra, Nota, Soresi, and Gati (2012) showed that females had greater difficulties in career decision making. Durosaro and Adebanke (2012) found that males and females showed different levels of career readiness. On the other hand, Albion (2001) differed completely, as his findings revealed that adolescent boys and girls did not differ in the difficulties experienced in career decision making. This study, therefore, is important as it adds to the growing body of related literature on gender and career decision-making difficulties, which is expected to be relevant to secondary schools in Nigeria.

In Nigeria, the secondary school is a significant period of transition. It runs for six years, in two phases of three years each, namely Junior Secondary School (JSS) and Senior Secondary School (SSS). In SSS I the students are exposed to all the subjects, and in SSS II a career decision is made. At this level, students are streamed into classes according to their career choice. The subjects selected are important because they will affect future careers. It is obvious that if students are not given adequate information, enlightenment and counselling about the career decision-making process, difficulties will be created for them. This is why guidance and counselling services are highly needed.

Purpose of the Study

The main purpose of this study was to examine the difficulties faced by secondary school students in Nigeria in career decision making. The study also sought to determine whether differences existed between male and female students in the difficulties encountered.

Research Questions

Two research questions guided the study.

1. What are the difficulties encountered by secondary school students in career decision making in Nigeria?

2. What are the differences between male and female students in terms of the difficulties encountered in career decision making in Nigeria?

Hypothesis

One hypothesis was formulated for the study: There is no significant difference between male and female secondary school students in terms of the difficulties encountered in career decision making in Nigeria.

The Taxonomy of Difficulties

The theoretical background for this study is the taxonomy of difficulties in career decision making by Gati et al (1996). This taxonomy is hinged on decision theory and is based on the construct of an "ideal career decision-maker", that is, an individual who is conscious of the need to make a career decision and is capable of making the "right" decision. Any deviation from the ideal career decision-maker is seen as a barrier that might, in turn, affect the process. The taxonomy, according to Gati et al (1996), has three major categories: lack of readiness, lack of information, and inconsistent information.
These were further categorized into ten (10) distinct problems or difficulties, namely: lack of readiness; indecisiveness; dysfunctional myths; lack of knowledge about the process of career decision making; lack of information about the self; lack of information about occupations; lack of information about ways of obtaining additional information; inconsistent information: unreliable information; internal conflicts; and external conflicts.

Population and Sampling

The population of this study consists of all secondary school students in the Delta State of Nigeria, a population of 272,328 (Ministry of Basic and Secondary Education, 2019). Purposive sampling was used to select four schools, and the subjects for this study were randomly selected from them. The participants were 341 students: 161 males and 180 females.

Instrument

The instrument used in this study was the Career Decision-making Difficulties Questionnaire (CDDQ) by Gati et al (1996), made up of 44 items. It was modified by the author to 39 items; five items were removed. The original response format was a nine-point scale, but this researcher modified it to a four-point response format of Strongly Agree, Agree, Disagree, and Strongly Disagree. The subjects were asked to rate on the four-point scale the degree to which each difficulty described them. The ten (10) distinct categories that make up the taxonomy of career decision-making difficulties were used in the study. The median Cronbach alpha reliabilities of the instrument were 0.78 and 0.77. The test-retest reliabilities reported by Gati et al (1996) were 0.67, 0.74, 0.72, and 0.80 for the three main categories and the whole instrument. The author decided to re-establish reliability and validity to ascertain whether the instrument could be used in Nigeria.

Some criticisms have been levelled against the CDDQ (Creed & Yin, 2006; Tien, 2005; Vahedi, Farrokhi, Mahdavi & Moradi, 2012), chiefly that it does not pay attention to affective factors, which are capable of influencing both attitudes and information processing and which, in turn, can influence the process of career decision making. Despite these demerits, the strength of the CDDQ, according to Vaiopoulou et al (2019), is its solid theoretical base and its ability to provide an assessment of career decision-making difficulties.

Validity of the Instrument

The instrument initially consisted of 44 items; when subjected to Principal Component Analysis with the Varimax rotation method and Kaiser normalization, the items were reduced to 39. The lack of readiness scale now has 3 items; the indecisiveness scale, 2 items; the dysfunctional myths scale, 3 items; the lack of knowledge of the career decision process scale, 3 items; the lack of information about self scale, 7 items; the lack of information about occupation scale, 4 items; the lack of information about ways of obtaining additional information scale, 2 items; the inconsistent information: unreliable information scale, 6 items; the internal conflicts scale, 5 items; and the external conflicts scale, 4 items.

Reliability of the Instrument

The researcher used the Cronbach alpha method to determine the internal consistency of the items and the reliability of the instrument. The reliability coefficient for the overall instrument was 0.90. These figures indicate that the instrument was valid and reliable for use in Nigeria.
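For reference, Cronbach's alpha for a k-item scale is α = k/(k − 1) × (1 − Σ item variances / variance of the total score). The sketch below (ours, not the author's SPSS workflow) computes it from a respondents-by-items matrix of 4-point item scores.

```python
# Minimal sketch of Cronbach's alpha for a multi-item scale.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k) array of item scores on the 4-point scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```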
Data Analysis

The data obtained were collated and entered into a computer using the Statistical Package for the Social Sciences (SPSS), version 24. Mean and standard deviation were used to answer the research questions. The benchmarks used to answer the research questions were as follows: 6.00 for lack of readiness; 4.00 for indecisiveness; 6.00 for dysfunctional myth; 6.00 for lack of knowledge of the career decision process; 14.00 for lack of information about self; 8.00 for lack of information about occupation; 4.00 for lack of information about ways of obtaining additional information about self; 12.00 for inconsistent information: unreliable information; 10.00 for internal conflicts; and 8.00 for external conflicts. Each benchmark was obtained by calculating the average mean across the items in each sub-category (in effect, 2.00 per item, so that a sub-category's benchmark equals twice its number of items). The independent samples t-test was used to test the null hypothesis at the 0.05 level of significance.

Results

Research Question 1: What are the difficulties encountered by secondary school students in career decision making?

In answering research question 1, mean and standard deviation were computed. The result of the data analysis is presented in Table 1. As shown in Table 1, lack of information about self presents the greatest difficulty encountered by secondary school students in career decision making, with a mean of 17.92 and SD = 6.30; the second is inconsistent information: unreliable information, with a mean of 14.36 and SD = 4.69; the third is internal conflicts, with a mean of 12.28 and SD = 4.45; the fourth is lack of information about occupation, with a mean of 9.84 and SD = 3.53; the fifth is external conflicts, with a mean of 9.53 and SD = 3.59; the sixth is lack of readiness, with a mean of 7.21 and SD = 2.22; the seventh is lack of knowledge of the career decision process; the eighth is dysfunctional myth, with a mean of 5.55 and SD = 2.00; the ninth is lack of information about ways of obtaining additional information about self, with a mean of 5.34 and SD = 2.90; and the tenth is indecisiveness, with a mean of 5.09 and SD = 2.67. This provides an answer to research question 1. The conclusion is that the secondary school students involved in this investigation demonstrated evidence of career decision-making difficulties in nine of the ten discernible areas, judged against the benchmark for each level. Dysfunctional myth was the only area in which they did not encounter difficulties.
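The benchmark logic described above is simple enough to express directly; the sketch below is our own illustration (names are ours), using the item counts from the validity section, where each sub-scale's benchmark is 2.00 per item.

```python
# Sketch: flag a sub-scale as a "difficulty" when its mean score exceeds
# the benchmark of 2.00 per item (i.e. twice the number of items).
SUBSCALE_ITEMS = {
    "lack of readiness": 3, "indecisiveness": 2, "dysfunctional myth": 3,
    "lack of knowledge of the career decision process": 3,
    "lack of information about self": 7,
    "lack of information about occupation": 4,
    "lack of information about ways of obtaining additional information": 2,
    "inconsistent information: unreliable information": 6,
    "internal conflicts": 5, "external conflicts": 4,
}

def has_difficulty(subscale, mean_score):
    return mean_score > 2.0 * SUBSCALE_ITEMS[subscale]

print(has_difficulty("dysfunctional myth", 5.55))               # False
print(has_difficulty("lack of information about self", 17.92))  # True
```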
Research Question 2: What are the differences between male and female secondary school students in terms of the difficulties encountered in career decision making?

To answer research question 2, the mean and standard deviation were used, as presented in Table 2. As shown in Table 2, with 161 males and 180 females in each comparison: for lack of readiness, males had a mean score of 7.22 and females 7.19 (mean difference 0.022); for indecisiveness, males 5.32 and females 4.88 (0.433); for dysfunctional myth, males 5.47 and females 5.63 (0.155); for lack of knowledge of the career decision process, males 7.54 and females 7.81 (0.265); for lack of information about self, males 6.62 and females 6.02 (0.279); for lack of information about occupation, males 3.58 and females 3.49 (0.206); for lack of information about ways of obtaining additional information about self, males 5.01 and females 5.64 (0.626); for inconsistent information: unreliable information, males 4.77 and females 4.62 (0.476); for internal conflicts, males 4.85 and females 3.94 (1.491); and for external conflicts, males 9.77 and females 9.31 (0.459). The result further showed that male and female students fell below the benchmark only for dysfunctional myth. This implies that the secondary school students in the study do not have difficulties in the area of dysfunctional myths.

Hypothesis 1: There is no significant difference between male and female secondary school students in terms of the difficulties encountered in career decision making.

To test hypothesis 1, a t-test was computed, as presented in Table 3. Table 3 shows the difficulties encountered by secondary school students in career decision making, analysed by sex. The result shows that gender was not a factor in career decision-making difficulties (t(339) = 0.963, p > 0.05). Hence, the null hypothesis is accepted, which implies that there is no significant difference between male and female secondary school students in terms of the difficulties encountered in career decision making.
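The reported test is easy to reproduce outside SPSS. The sketch below uses scipy with placeholder score arrays of the study's sample sizes (161 males, 180 females), which gives the quoted degrees of freedom, df = 161 + 180 − 2 = 339; the synthetic scores are ours, not the study's data.

```python
# Sketch: independent samples t-test at the study's sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores_m = rng.normal(60, 10, 161)   # placeholder male totals, n = 161
scores_f = rng.normal(60, 10, 180)   # placeholder female totals, n = 180

t, p = stats.ttest_ind(scores_m, scores_f)
df = len(scores_m) + len(scores_f) - 2
print(f"t({df}) = {t:.3f}, p = {p:.3f}")   # compare with t(339) = 0.963
```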
These findings are in agreement with Vaiopoulou et al. (2019), whose findings among Greek students revealed lack of information as the greatest difficulty facing students in career decision making. The consequence is that students will make inappropriate decisions. This is to be expected, as most of the schools are without guidance counsellors vested with the responsibility for this role; the alternative is to depend on unreliable sources. Another difficulty identified is in the area of internal conflicts, followed by external conflicts, which have to do with significant others. The area of external conflict is crucial, especially in Nigeria, where most parents want their children to take prestigious courses such as medicine, law, and engineering, even when the students have no aptitude for such professions. This is corroborated further by Sarikaya and Khorshid's (2000) findings in Turkey. In this circumstance, there will be a clash within the students in career decision making. It is not limited to parents: friends, family members, and teachers equally pose threats to students in making their career decisions. Some parents want to mirror in their children what they themselves could not achieve. This is further supported by Akpochafo (2017), who found that external influences affect students in their career decision making. This poses a challenge when the individual's interests and abilities cannot match the professions preferred by significant others. This is supported by Akkoc (2012a, b) and Bacanli et al. (2013), and as a result, the willingness to compromise is not there. Moreover, there is a problem with alternatives and preferences for other careers. The study further identified lack of readiness and indecisiveness as difficult areas of career decision making among secondary school students. When counselling services are not put in place in the schools, how can the students be ready to make career decisions? Some of them just know that they have to decide on a career while in SSS II, but they are not prepared for it. The study, however, observed that dysfunctional myths are not a difficult area of career decision making among secondary school students. It may be that the students believe that a career will solve their problems in life; they look forward to entering a career that will help them achieve their life goals. The second finding revealed that male and female students fell below the benchmark calculated for the construct only in dysfunctional myths. The finding also revealed mean differences between males and females in the nine areas, with males experiencing more difficulties in seven categories and females in three categories. The males experienced more difficulties in all areas of information except lack of information about ways of obtaining additional information about self. This agrees with the study of Murniarti and Siahaan (2019), which revealed that male university students had more problems in making their career choices, especially with regard to lack of information: more than half of the males in their work did not receive enough information about the types of occupation to pursue. As for external conflicts, boys experienced more difficulty, which is in line with Gati and Saka (2001), whose study revealed that boys had greater difficulties than girls.
Since both males and females experience difficulty with information, but males more so, the consequence is obtaining inconsistent and unreliable information, since the schools have no way of providing authentic information. The students may rely on friends and other avenues and thus get fake news, which, according to Park and Rim (2019), can be spread by social media. As for indecisiveness, males experienced more difficulties, which disagrees with Bacanli (2015), whose study showed that female students had higher levels of difficulty than male students. However, these differences are not significant, as can be seen from the test of the hypothesis. The finding from the hypothesis revealed that gender was not a significant factor in the career decision-making difficulties encountered by secondary school students. This means that gender does not influence students' difficulties in making a career decision. This is not unexpected, because most of the schools do not have counsellors, and those that do lack all that is required to practice full counselling (Akpochafo, 2018). This leads to career decision-making difficulties that affect both male and female students. This finding agrees with the results of previous studies conducted in Switzerland where no gender difference was found with respect to career decision-making difficulties (Gati et al., 2000; Vertzberger & Gati, 2016). The finding also corroborates the study of Hsiu-Ian Shelly Tien (2001), which revealed that gender was not significant on the career decision-making scale. Albion (2001) is in partial agreement with this finding as, at times, girls report less difficulty, and at other times, the boys do. On the other hand, the finding disagrees with the studies of Bacanli (2016), Tagay (2015), and Guineura, Nota, Soresi, and Gati (2012), which revealed that either males or females have more difficulties.

Conclusions

It can be concluded from the findings of this study using the CDDQ that secondary school students are faced with difficulties in making career decisions in nine areas, the exception being dysfunctional myths. The analysis also revealed no gender differences in the difficulty level.

Recommendations

Based on the findings and conclusions of this study, the following recommendations are made:
1. Counsellors should realize from this study that nine of the ten distinct areas posed difficulties, and activities to enhance career decision making should therefore be undertaken. To this end, seminars, workshops, and conferences should be organized for secondary school students to alleviate these difficulties and build students' confidence to make good decisions.
2. Counsellors providing career interventions should give equal attention and opportunity to males and females, as both need assistance.
3. The government should post counsellors to secondary schools, since they are the ones to implement the recommendations and assist students in proper career decision making. The counsellors should be retrained in line with current global practices; this involves organizing seminars, workshops, and symposia for counsellors already on the job.
4. Moreover, the CDDQ used in this study can also be used in secondary schools in Nigeria for early identification of difficulties.

Contribution

The study has contributed to the growing body of knowledge on career decision making in Nigeria, especially through the use of the CDDQ questionnaire, on which no previous work on the taxonomy of difficulties existed before the author's.
The work has also clearly shown that no differences exist between males and females in career decision-making difficulties. This will help to guide counselling efforts.
Infection of human cytomegalovirus in cultured human gingival tissue

Background

Human cytomegalovirus (HCMV) infection in the oral cavity plays an important role in its horizontal transmission and in causing viral-associated oral diseases such as gingivitis. However, little is currently known about HCMV pathogenesis in oral mucosa, partially because HCMV infection is primarily limited to human cells and few cultured tissue or animal models are available for studying HCMV infection.

Results

In this report, we studied the infection of HCMV in a cultured gingival tissue model (EpiGingival, MatTek Co.) and investigated whether the cultured tissue can be used to study HCMV infection in the oral mucosa. HCMV replicated in tissues that were infected through the apical surface, achieving a titer increase of at least 300-fold at 10 days postinfection. Moreover, the virus spread from the apical surface to the basal region and reduced the thickness of the stratum corneum at the apical region. Viral proteins IE1, UL44, and UL99 were expressed in infected tissues, a characteristic of HCMV lytic replication in vivo. Studies of a collection of eight viral mutants provide the first direct evidence that a mutant with a deletion of open reading frame US18 is deficient in growth in the tissues, suggesting that HCMV encodes specific determinants for its infection in oral mucosa. Treatment with ganciclovir abolished viral growth in the infected tissues.

Conclusion

These results suggest that the cultured gingival mucosa can be used as a tissue model for studying HCMV infection and for screening antivirals to block viral replication and transmission in the oral cavity.

Background

Human cytomegalovirus (HCMV) is a ubiquitous herpesvirus that causes mild or subclinical disease in immunocompetent adults but may lead to severe morbidity and mortality in neonates and immunocompromised individuals [1,2]. For example, disseminated HCMV infection, common in AIDS patients and organ transplant recipients, is usually associated with gastroenteritis, pneumonia, and retinitis [3,4]. Moreover, HCMV is one of the leading causes of birth defects and mental retardation in newborns [5,6]. Understanding the biology of CMV infection and developing novel anti-CMV approaches are central to the treatment and prevention of CMV-associated diseases. HCMV infection in the oral cavity plays an important role in its pathogenesis and transmission. HCMV is among the most common causes of oral diseases associated with AIDS patients [7,8]. Active viral replication in the oral tissue induces CMV-associated oral manifestations such as ulcerations, aphthous stomatitis, necrotizing gingivitis, and acute periodontal infection [9-13]. Persistent and latent infections have also been found in oral tissues. The presence of infectious particles in the oral cavity, including saliva, is believed to be a major source of HCMV horizontal transmission [1,6]. Indeed, initial infection of the oral mucosa by HCMV, primarily through casual contact, is believed to be one of the major routes of horizontal transmission among individuals, and the consequent viral replication and spread in oral tissues leads to the establishment of lifelong latent infection. Elucidating the mechanism of HCMV infection in the oral mucosa and blocking viral replication in infected oral tissues are essential for the treatment and prevention of CMV transmission and systemic infections.
HCMV belongs to the β family of herpesviruses and contains a linear 230 kb double-stranded DNA genome that is predicted to encode more than 200 proteins [14,15]. There are currently few animal models available to study HCMV infection and pathogenesis and to determine efficacy of various antiviral therapies. This is largely due to the fact that HCMV infection and replication are limited to human cells [1,2]. Consequently, little is known about the mechanism of viral pathogenesis, such as how HCMV infects the oral mucosa. One of the most powerful approaches to study viral pathogenesis is to develop a cultured tissue model that can mimic natural infection in human tissues in vivo. The SCID-hu mouse, in which different fetal human tissues are implanted into the kidney capsule of a severe combined immunodeficient (SCID) mouse, has been shown to be a useful model to study HCMV replication and to screen antiviral compounds in human tissues [16,17]. In these animals, the implanted human fetal tissues continue to grow and differentiate. HCMV was directly inoculated into the implanted tissues and viral replication was monitored. SCID-hu mice implanted with different human tissues from the liver, thymus, bone, retina, and skin have been shown to support HCMV replication and can be used as models to study HCMV infection in these human tissues in vivo [16,18]. However, the difficulty in generating these animals limits the use of the models. Furthermore, the use of fetal tissues in SCID mice presents a challenge to study HCMV infection in adult tissues, such as in the oral mucosa, because the implanted tissues need to differentiate properly into adult tissues in the mouse microenvironment. Currently, no SCID mice with human oral mucosa implants have been reported. Recently, three-dimensional models of the human oral epithelia that exhibit a buccal or gingival phenotype, such as EpiGingival from MatTek, Co., have been developed [19][20][21][22]. In these models, normal human keratinocytes are differentiated into tissues in serum free media. The gingival model has 10-20 layers of viable, nucleated cells and is partially cornified at the apical surface. These models exhibit very similar histological characteristics to human oral tissues in vivo. Thus, they can serve as a tissue model for human oral epithelia, such as gingival mucosa, and can potentially be used to study oral physiology and transmission of infectious pathogens. The development of reconstructed tissues of human oral cavity provides an invaluable cultured tissue system for studying the biology of CMV infection. To study the function of viral-encoded genes in supporting HCMV infection, we can generate a collection of viral mutants by introducing mutations into the viral genome and screening viral mutants in both cultured cells and tissues for potential growth defects [23]. The construction of HCMV mutants has been reported using site-directed homologous recombination and cosmid libraries of overlapping viral DNA fragments, and recently, using a bacterial artificial chromosome (BAC)-based approach [24][25][26][27][28][29][30]. Examining the growth of these mutants in the oral tissue model should facilitate the identification of viral genes responsible for HCMV tropism in the oral mucosa and for transmission. Furthermore, the tissue model can be used for screening antiviral compounds and for developing novel strategies for preventing HCMV infection in oral cavity and its transmission among human populations. 
In this study, we examined the infection of HCMV in a cultured gingival mucosa model (EpiGingival, MatTek Co.) and determined whether the cultured tissue is suitable for studying HCMV infection in vivo. Both a laboratory-adapted viral strain and a low-passage clinical isolate were shown to infect the human tissue via the apical surface. Investigation of the growth of these viruses indicates that the viral strains replicate at a similar level, reaching a 300-fold higher titer after 10 days post infection. Histological examination of tissues infected via the apical surface indicated that these viruses spread from the apical surface to the suprabasal region. Moreover, Western analyses demonstrated the expression of viral proteins IE1, UL44, and UL99 in the infected tissues, suggesting that the infection process represents the classic lytic replication that is associated with primary HCMV infection in vivo. Growth studies of a collection of eight viral mutants indicated that a mutant with a deletion at open reading frame US18 is deficient in growth in human oral tissues. Treatment of infected tissues with ganciclovir, which is effective for anti-HCMV therapy in vivo [31,32], abolished viral growth in the cultured tissues. These results provide the first direct evidence that the cultured gingival mucosa is an excellent tissue model for studying HCMV infection in vivo and for screening antiviral compounds to block HCMV infection and transmission in the oral cavity.

Growth of different HCMV strains in cultured human oral tissue

The MatTek gingival tissue model (EpiGingival) contains normal human oral keratinocytes cultured in serum-free medium to form three-dimensional differentiated tissues. Hematoxylin and eosin staining of tissue cross-sections indicates that the cultured tissue shows an architecture very similar to human gingival mucosa in vivo (Figure 1, see Figure 4A) [22]. The cultured tissue is 10-20 cell layers thick and consists of a cornified apical surface and a noncornified basal region (Figure 1). The thickness and morphology of the apical stratum corneum and the basal cell layers are similar to those in gingival tissues in vivo. As observed in vivo, cells at the basal region of the cultured tissue continue to divide and differentiate, and apical surface cells continue to cornify to form the stratum corneum. Furthermore, immunohistochemical staining indicates that the distributions of different cytokeratins (e.g. K13 and K14) in cultured tissues are like those found in vivo [22,33] (data not shown). Thus, the cultured tissue exhibits the characteristics in structure (thickness, morphology, and organization), cell type and differentiation, and protein expression and composition observed in vivo, and can serve as a model representing the oral tissue [22].

[Figure 1. Hematoxylin and eosin staining of EpiGingival tissues (magnification, ×400). Upon arrival, the tissues were cultured for 12 hours prior to viral infection, fixed with Streck Tissue Fixative, frozen in 2-methylbutane submerged in liquid nitrogen, cross-sectioned at 9 µm using a LEICA cryostat LC1900 sectioner, stained with hematoxylin and eosin, and visualized with a Nikon TE300 microscope.]

To determine whether the cultured tissues are permissive to HCMV infection and replication, two different HCMV strains (Towne and Toledo) and a mutant (TowneBAC) were used in our initial experiments. Towne is a laboratory-adapted strain that has been passaged many times in vitro in human fibroblasts, whereas Toledo is an HCMV clinical isolate passaged a limited number of times in vitro [34,35]. TowneBAC was derived from Towne by inserting a bacterial artificial chromosome (BAC) sequence into the viral genome and replacing the dispensable, 10 kb US1-US12 region [36]. The TowneBAC DNA, while maintained as a BAC-based plasmid in E. coli, produces infectious progeny in human fibroblasts and retains a wild type-like growth characteristic in vitro (Figure 2A) [23,36]. Each of these viruses was used to infect the tissues by inoculation at the apical surface with 2 × 10^4 PFU. Infection through the apical surface serves as a model for HCMV infection via the gingival mucosal surface. The infection was carried out for 10 days. We observed that the structure of the tissue remained intact for up to 10 days in culture and started to disintegrate after 12 days of incubation (data not shown). At different time points post infection, the tissues were harvested and the titers of the viruses were determined. The viral strains were able to grow in the tissues, since viral titers increased by at least 300-fold during the 10-day infection period (Figure 2B). Thus, the gingival tissues support active HCMV lytic replication. No differences in growth among these viruses were found, suggesting that the laboratory-adapted Towne strain and its derivative, TowneBAC, grow as well as the clinical low-passage Toledo strain.

[Figure 2. Growth of different HCMV strains (Toledo, Towne, and TowneBAC) in cultured cells (A) and cultured gingival tissues (B). In (A), human foreskin fibroblasts (HFFs) (1 × 10^6 cells) were infected with each virus at a MOI of 0.05. At 0, 2, 4, 7, 10, and 14 days post infection, cells and culture media were harvested and sonicated. In (B), the tissues were infected with 2 × 10^4 PFU of each virus at the apical surface of the tissue. At 0, 3, 6, and 10 days post infection, the tissues were harvested, suspended in a small volume of 10% milk, and sonicated. The viral titers were determined by plaque assays on HFFs. The limit of detection was 10 PFU/ml of the tissue homogenate. The values of the viral titer represent the average obtained from triplicate experiments. The standard deviation is indicated by the error bars.]

In subsequent experiments, TowneBAC was used as an HCMV representative to study viral infection in the gingival tissues. This mutant contains the gene coding for green fluorescent protein (GFP) and therefore infection can be easily monitored in the tissues by detecting GFP expression [23,36].

Viral protein expression and histological changes in cultured human oral tissue upon HCMV infection

HCMV oral transmission begins when the virus enters the mucosal (apical) surface of oral tissues (e.g. gingival tissues), replicates in the surface cell layers, and spreads to neighboring cells and tissues in the basal regions [1,7]. To determine whether HCMV infection of the MatTek gingival tissues can be a model for viral infection in vivo, two sets of experiments were carried out. First, Western analysis was used to determine whether viral lytic proteins were expressed, as observed in productive HCMV infection in vivo. Tissues were infected with 2 × 10^4 PFU of either the HCMV Toledo, Towne, or TowneBAC strains.
Protein extracts were isolated from tissues that were either mock-infected or infected with HCMV at 6 days post infection. Viral proteins were separated electrophoretically in SDS-polyacrylamide gels and transferred to identical membranes. One of the membranes was stained with a monoclonal antibody against human actin (anti-actin) (Figure 3D) and the other membranes were stained with monoclonal antibodies against the viral IE1, UL44, and UL99 proteins (Figure 3A-C). The expression of actin serves as an internal control for the quantitation of HCMV protein expression in the tissues. IE1 is a viral immediate-early (α) protein, while UL44 and UL99 encode viral early (β) and late (γ) proteins, respectively [2]. These proteins serve as representatives of the expression of viral α, β, and γ genes. As shown in Figure 3, IE1, UL44, and UL99 were expressed in infected tissues. Combined with the growth analysis (Figure 2), these results indicate that the cultured tissues are permissive to HCMV infection and can support viral lytic gene expression and replication.

[Figure 3. Expression of HCMV lytic proteins as determined by Western blot analysis. Protein samples were isolated from the cultured EpiGingival tissues that were either mock-infected (lanes 1, 5, 9, and 13) or infected with HCMV (2 × 10^4 PFU) (lanes 2-4, 6-8, 10-12, and 14-16) for 6 days, separated in SDS-polyacrylamide gels, and then transferred to membranes. One membrane was allowed to react with a monoclonal antibody (Anti-actin) against human actin (D) while the others were stained with the antibodies (Anti-IE1, Anti-UL44, and Anti-UL99) against HCMV IE1, UL44, or UL99, respectively (A-C). The expression of human actin was used as the internal control for the quantitation of the expression of HCMV proteins.]

In the second set of experiments, infection of these tissues was studied using both conventional histological and fluorescent microscopy. Two different staining methods were employed. First, tissues were stained with hematoxylin and eosin in order to examine their structures. Second, since TowneBAC contains a GFP expression cassette [36], fluorescent microscopy was used to detect GFP expression and to visualize infected cells. As shown in Figure 4, mock-infected tissues maintained the characteristic gingival mucosal structure during the infection period. In these tissues, the cells at the basal surface continued to divide while those at the apical surface differentiated and cornified, forming a characteristic stratum corneum (Figure 4A). In tissues that were infected through the apical surface, GFP staining was found in the cells near the apical surface, suggesting that the apical cells were infected with HCMV (Figure 4C-F). Compared to mock-infected tissues, the thickness of the stratum corneum in the infected tissues was significantly reduced (Figure 4B), possibly because the active replication of HCMV in apical cells induces cellular lysis and disrupts cellular differentiation and generation of the stratum corneum. Active HCMV replication at the apical surface has been observed in vivo and is associated with reduced thickness and destruction of the oral epithelial surface [1,9,11]. Thus, our results suggest that HCMV infection of cultured gingival tissues via the apical surface corresponds to its pathogenesis in vivo.
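As a side note on the growth analysis cited above (Figure 2), the fold-increase readout reduces to simple arithmetic on the plaque-assay titers. The sketch below is illustrative only; the titer values are invented, not the measured data.

```python
# Hypothetical titers (PFU/ml of tissue homogenate) at successive days post
# infection; fold increase is computed against the day-0 value.
titers = {0: 1e2, 3: 2e3, 6: 1e4, 10: 3e4}

baseline = titers[0]
for day, pfu in sorted(titers.items()):
    print(f"day {day:>2}: {pfu:.1e} PFU/ml ({pfu / baseline:.0f}-fold over day 0)")
```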
Deficient growth of HCMV mutants in infected human oral tissues

The ability of HCMV to infect and replicate in cells of the oral cavity is responsible for its pathogenesis in the oral mucosa, including viral-associated gingivitis and oral lesions. However, little is currently known about the mechanism by which HCMV is able to infect and replicate in oral tissues. Equally elusive is the identity of the viral determinants responsible for oral infection. Specifically, it is unknown whether HCMV encodes specific genes responsible for its infection of the gingival mucosa. Through the use of a BAC-based mutagenesis approach, we have recently generated a library of HCMV mutants containing deletions in each open reading frame (ORF) [23]. If a viral ORF is essential for viral infection in the oral tissue, the corresponding mutant with the deletion of that ORF is expected to be deficient in infecting and replicating in the tissue. Using the gingival tissue as the model, several experiments were performed to determine whether viral mutants that are attenuated in growth in the oral mucosa can be identified. A collection of eight different mutants was used in our initial screen (Table 1). Each mutant was derived from TowneBAC and contains a deletion in ORF UL13, UL24, UL25, UL108, US18, US20, US29, or RL9, respectively [23]. In these mutants, the deleted ORF sequence was replaced with a kanamycin-resistance gene (KAN) expression cassette, which provides antibiotic resistance for rapid selection and isolation of the bacteria carrying the mutated TowneBAC sequence. All mutants grew as well as the parental TowneBAC in primary human foreskin fibroblasts (HFFs), suggesting that these ORFs are not essential for viral replication in vitro in cultured fibroblasts (Table 1 and Figure 5A). The functions of many of these deleted ORFs are currently unknown. However, they are present in all HCMV strains whose sequences have been determined [14,15,23,37,38]. Hence, these genes may play an important role in HCMV infection in vivo, such as in viral transmission and infection in the oral cavity. To determine whether any of these HCMV mutants are deficient in growth and infection in cultured gingival tissues, the tissues were infected via the apical mucosal surface with each viral mutant at an inoculum of 2 × 10^4 PFU. Infected tissues were harvested at 10 days post infection and viral titers in the tissues were determined.

[Table 1. Growth of HCMV mutants in HFFs and gingival tissue. Legend: Cells (1 × 10^6 HFFs) or tissues were infected with each virus at 2 × 10^4 PFU and, at 10 days post-infection, cells and tissues were harvested and sonicated. The viral titers were determined by plaque assays on HFFs in triplicate experiments [46]. +++, titer similar to that of TowneBAC; ++, titer about 10 times lower than that of TowneBAC; +, titer at least 100 times lower than that of TowneBAC.]

The titers of mutants ∆US18 and ∆UL13 at 10 days post infection were approximately 100- and 10-fold lower than those of the parental TowneBAC, respectively, while other mutants, such as ∆UL24 and ∆RL9, replicated as well as the parental virus (Table 1 and Figure 5B). Thus, mutants ∆UL13 and ∆US18 appeared to be deficient in infecting the tissues via the apical surface. Both ∆UL13 and ∆US18 were derived from the parental TowneBAC by replacing the UL13 and US18 ORFs, respectively, with a DNA sequence (KAN) that confers antibiotic resistance to kanamycin in E. coli [23].
Because ∆RL9 replicates as well as the parental TowneBAC (Figure 5), the presence of the KAN cassette in the viral genome per se does not significantly affect the ability of the virus to grow in the tissues. Thus, these results suggest that the growth defect of ∆US18 may be due to the deletion of the US18 ORF. Two series of experiments were further carried out to study how ∆US18 is defective in growth in the cultured tissues. First, viral infection in the tissues was studied by examining hematoxylin and eosin-stained tissues and visualizing GFP expression in infected cells. At 7 days post infection, the structure of the apical region in the ∆US18-infected tissues was similar to that of uninfected tissues, and the thickness of the stratum corneum was not reduced as observed in the TowneBAC-infected tissues (Figure 4G-H). Little GFP staining was found in the ∆US18-infected tissues (Figure 4H), while substantial levels of GFP staining were detected in tissues infected with ∆RL9 and TowneBAC (Figure 4E-F, data not shown). These observations support the growth analysis results (Figure 5) and show that ∆US18 is deficient in infection and replication in gingival tissues. Second, Western analyses were used to examine the expression of viral proteins. As shown in Figure 6, at 72 hours post infection, the expression levels of IE1, UL44, and UL99 in ∆US18-infected tissues were minimal and significantly lower than those in TowneBAC-infected tissues. Thus, the infection of ∆US18 appeared to be blocked prior to or at viral immediate-early gene expression, probably during viral entry, uncoating, or transport of the capsid to the nucleus. Because similar levels of these proteins were found in tissues that were infected with ∆RL9 and TowneBAC (Figure 6), the presence of the KAN cassette in the viral genome (e.g. ∆RL9) per se does not significantly affect viral protein expression in the tissues. These observations suggest that the defect in protein expression of ∆US18 may be due to the deletion of the US18 ORF.

Inhibition of HCMV growth in human oral tissues after ganciclovir treatment

One of our objectives is to establish an in vitro cultured tissue model to screen antiviral compounds and determine their potency in inhibiting HCMV growth and replication in human oral tissue. To determine the feasibility of using the gingival tissue for antiviral compound screening and testing, two sets of experiments were carried out using ganciclovir, which functions as a nucleoside analog and is effective in treating HCMV infection in vivo by blocking viral DNA replication [31,32]. In the first set of experiments, oral tissues were treated with different concentrations of ganciclovir for 4 hours prior to viral infection. In the second set of experiments, tissues were infected with TowneBAC for 24 hours and then treated with different concentrations of ganciclovir. The tissues were harvested at different time points post infection and the growth of HCMV was assayed by determining the viral titers. Treatment with ganciclovir reduced the growth of HCMV in HFFs (Figure 7A) [31,32]. Significant inhibition of HCMV growth was also observed in the gingival tissues when ganciclovir was added 24 hours after viral infection (Figure 7B). Similar levels of inhibition of viral growth in the tissues were found when the tissues were incubated with the drug before viral infection (data not shown).
Previous studies have shown that treatment with ganciclovir blocks HCMV infection in cultured fibroblasts regardless of whether the drug is added before or 24 hours after viral infection [31,32]. These results strongly suggest that cultured gingival tissues can be a suitable model for screening and testing antiviral compounds for inhibiting HCMV growth and replication.

Discussion

The oral mucosal epithelia represent one of the most common sites of encounter with microbial organisms for infection and transmission [39-41]. Both commensal (nonpathogenic) and pathogenic bacteria and yeast have been found in the epithelia [39,40]. The mucosal surface also appears to be susceptible to infection by a variety of viruses including HCMV, herpes simplex virus, HIV, and human papillomavirus [7,41]. The development of reconstructed human tissues of the oral cavity that exhibit the differentiated characteristics found in vivo will provide excellent research tools to study the biology of infections by these pathogens, to screen antimicrobial compounds, and to develop therapies against oral diseases associated with these infections. HCMV primarily propagates and replicates in human cells, and there are few animal models available to study HCMV infection and pathogenesis [1,2]. Little is known about whether cultured human oral tissues can support HCMV lytic replication in vitro and be used to study HCMV infection. In this study, we have characterized the infection of HCMV in a cultured gingival tissue model. Several lines of evidence presented in this study strongly suggest that the cultured oral tissues support HCMV replication and can be used as a model for studying HCMV pathogenesis, screening antivirals, and developing therapies for treating CMV infections in the oral cavity. First, the morphology and architecture of the cultured tissue used in our experiments were histologically similar to those found in vivo (Figure 1). Tissue structure remained intact for up to 10 days in the uninfected tissues. Hematoxylin and eosin staining showed no significant changes in tissue structure, except increased cornification and cell proliferation toward the apical surface (Figure 4A). These results suggest that our culture conditions do not significantly affect the continuous differentiation and growth of the tissues and that the tissues exhibit characteristics similar to those found in vivo. Second, both the laboratory-adapted "high-passage" Towne strain and the clinical "low-passage" Toledo strain were able to infect the apical surface and establish productive infection (Figure 2). An increase of at least 300-fold in viral titers was found in the infected tissues after a 10-day infection period. Thus, HCMV can replicate in the cultured tissue as it does in vivo in oral tissues. Third, the viral lytic proteins IE1, UL44, and UL99 were detected in cultured tissues (Figure 3). These proteins are commonly found in infected tissues in vivo, with IE1, UL44, and UL99 expressed at the immediate-early, early, and late stages of the HCMV lytic replication cycle, respectively [2]. These results suggest that HCMV infection in the cultured tissues exhibits gene and protein expression profiles similar to those found in vivo. Fourth, fluorescence microscopy experiments indicated that HCMV can spread within the cultured tissue as observed in vivo (Figure 4). TowneBAC, which carries a GFP expression cassette and a BAC sequence [36], was used in our experiments. Viral infection and spread can be monitored by detecting GFP expression.
HCMV spread started from the apical surface, the inoculation site, to the suprabasal regions of the tissues. Initial viral infection at the apical surface and subsequent spread to the suprabasal region have been observed in oral mucosa in vivo and are believed to represent a common route for viral transmission among casual contacts [1]. Active HCMV replication led to lysis of infected cells, damage to tissues, and reduced thickness of the cornified cell layers in the cultured oral tissues (Figure 4). Similar observations are made in vivo, as uncontrolled replication of HCMV leads to lesions and ulcers in the oral epithelia [1,9,11]. Thus, HCMV infection in cultured oral tissues appears to cause cytopathic effects and pathological changes similar to those found in vivo. Fifth, treatment with ganciclovir, which is effective in treating HCMV infection in vivo [31,32], abolished the growth of HCMV in cultured tissues (Figure 7). These results indicate that the cultured tissue model can be used for screening antiviral compounds for blocking HCMV infection and replication in the oral cavity. The availability of a cultured oral mucosa model will provide a unique opportunity to study HCMV pathogenesis in oral tissues and to identify viral determinants responsible for HCMV infection in the oral cavity. We have initiated a series of experiments using the cultured tissues to screen a pool of viral mutants with deletions in different HCMV ORFs (Table 1). ∆US18 was found to be defective in growth in the cultured tissues (Figure 5). These observations suggest that HCMV encodes specific determinants for its infection and replication in the oral mucosa. Moreover, these results validate the use of the cultured tissue as a model for identifying viral genes important for oral infection and for studying the mechanism of how HCMV replicates and causes viral-associated diseases in the oral cavity.

[Figure 6. Expression of HCMV lytic proteins as determined by Western blot analysis. Protein samples were isolated from the cultured EpiGingival tissues that were either mock-infected (lanes 1, 5, 9, and 13) or infected with HCMV (2 × 10^4 PFU) (lanes 2-4, 6-8, 10-12, and 14-16) for 72 hours, separated in SDS-polyacrylamide gels, and then transferred to membranes. One membrane was allowed to react with a monoclonal antibody (Anti-actin) against human actin (D) while the others were stained with the antibodies (Anti-IE1, Anti-UL44, and Anti-UL99) against HCMV IE1, UL44, or UL99, respectively (A-C).]

The function of US18 is currently unknown. US18 is found only in the HCMV genome, and no sequence homologues are found in other human herpesviruses or rodent CMVs (e.g. murine CMV (MCMV)) [14,15,38]. It is believed that some genes from a particular CMV (e.g. HCMV) might have co-evolved with its respective host and interacted with specific components of the host; they are therefore unique and may not share significant sequence homologies with CMVs from other species (e.g. MCMV). For example, US11 and US28, which are dispensable for HCMV replication in vitro, function to downregulate major histocompatibility complex (MHC) class I molecules and to stimulate vascular smooth muscle cell migration, respectively [42,43].
While little is known about the CMV determinants important for viral infection in the oral mucosa, previous studies have shown that salivary gland gene 1 (sgg1), a gene that is unique to MCMV and dispensable for viral replication in vitro, is important for MCMV infection of salivary glands [44]. Likewise, the function of US18 may be involved in species-specific interactions between HCMV and humans, such as potential interactions at the apical surface of the oral epithelia. Like US11 and US28, US18 is dispensable for HCMV replication in vitro, since ∆US18 grows as well as the parental TowneBAC in human fibroblasts (Figure 5A). US18 has been predicted to encode a membrane protein [14,15,38] and is found to be expressed predominantly in the cytoplasm [45]. Our Western analysis results and examination of the ∆US18-infected tissues (Figures 4 and 6) suggest that the infection of ∆US18 is very limited and may be blocked prior to or at the step of viral immediate-early gene expression, possibly during viral entry, uncoating, or transport of the capsids to the nuclei. To confirm the assignment of functionality to a particular viral gene (e.g. US18), it is probably necessary to restore the mutation back to the wild-type sequence and determine whether the phenotype of the rescuant viruses is similar to that of the parental virus. However, the rescue procedures may potentially introduce adventitious mutations elsewhere in the genome. Meanwhile, it is possible that the deletion of a target ORF (e.g. US18) might affect the expression of other viral genes, including those in nearby regions, as the deleted region may function as a regulatory element important for the expression of these genes, in addition to encoding the target ORF. Extensive studies are needed to demonstrate that the deletion does not affect the expression of any other gene in the viral genome. Alternatively, a viral mutant that contains a subtle mutation, such as point mutations, to inactivate the ORF can be generated. Examination of the phenotype of this second isolate should confirm the results obtained with the first mutant. Further characterization of these mutants and the genes mutated will identify the HCMV determinants important for viral pathogenesis and elucidate the functional roles of these ORFs in HCMV infection.

[Figure 7. Growth of HCMV in cultured cells (A) and gingival tissues (B) that were treated with different concentrations of ganciclovir.]

Our results demonstrate that the cultured tissues provide a useful system for studying HCMV pathogenesis and for identifying viral determinants responsible for HCMV infection in the oral cavity. However, fully differentiated gingival tissues currently can be maintained in vitro for only a very limited period of time (~10-14 days). In our experience, after 11 days of culture upon arrival, the tissues began to deteriorate and their structures and morphologies changed (data not shown). Thus, the cultured tissues currently can only be used to study HCMV lytic, but not latent, infection. Further studies, such as tissue engineering and improving culture conditions and media compositions, will facilitate the development of this exciting model for studying oral biology and infections. Investigation of HCMV infection and characterization of different viral strains and mutants in these cultured tissues will provide valuable insight into the mechanism of how HCMV infects oral epithelia, achieves successful transmission, and causes viral-associated oral complications.
Furthermore, these results will facilitate the development of new compounds and novel strategies for treating CMV-associated oral lesions and preventing viral transmission.

Conclusion

In this report, we investigated the infection of HCMV in a cultured gingival tissue model and determined whether the cultured tissue can be used to study HCMV infection in the oral mucosa. HCMV replicated in the cultured tissues that were infected through the apical surface, spread from the apical surface to the basal region, and reduced the thickness of the stratum corneum at the apical region. Our finding that a mutant with a deletion of open reading frame US18 is deficient in growth in the tissues provides the first direct evidence that HCMV encodes specific determinants for its infection of gingival tissues. Viral infection in these tissues resembled HCMV lytic replication observed in vivo and was inhibited by treatment with ganciclovir. These results suggest that the cultured gingival tissue can be used as a cultured human tissue model for studying HCMV infection and for screening antivirals to block viral replication and transmission in the oral cavity.

Viral infection of human tissue

Human gingival tissues (EpiGingival), obtained from MatTek Co. (Ashland, MA), are living reconstructed oral epithelial tissues of 10-20 layers of cells that are derived from human primary oral keratinocytes and allowed to differentiate into a structure characteristic of the tissue in vivo [22]. The tissues arrived in Millipore Millicell CM culture insert wells and were approximately 0.1 mm thick and 9 mm in diameter. After overnight refrigeration (4°C, per the manufacturer's recommendations), the tissues were equilibrated by transferring them to 6-well plates containing 5 ml of assay media (MatTek Co.) per well and incubating at 37°C and 5% CO2 for 1 hour. A small volume (0.1-0.2 ml) containing 2 × 10^4 PFU of HCMV was then added directly to the apical surface of the tissues. After incubation with the viral inoculum at 37°C and 5% CO2 for 4 hours, the tissues were washed to remove the inoculum. The tissues were replenished with fresh serum-free media containing growth factors every 48 hours. At different time points post infection, the tissues were collected and processed for determination of viral titers and for histochemical and fluorescent microscopy analysis.

Analysis of the growth of viruses in human oral tissues

The tissues were suspended in a small volume of 10% skim milk, followed by sonication. The tissue homogenates were titered for viral growth on HFFs in 6-well tissue culture plates (Corning Inc., Corning, NY) [23]. Cells were inoculated with 1 ml of the sonicated tissues in 10-fold serial dilutions. After two hours of incubation at 37°C and 5% CO2, cells were washed with complete media, overlaid with fresh complete medium containing 1% agarose, and cultured for 7-10 days. Plaques were counted under an inverted microscope. Each sample was titered in triplicate and viral titers were recorded as PFU/ml of tissue homogenate. The limit of virus detection in the tissue homogenates was 10 PFU/ml of the sonicated mixture. Samples that were negative at a 10^-1 dilution were assigned a titer value of 10 (10^1) PFU/ml.

Tissue preparation and processing for histological studies

Human oral tissues were fixed in Streck Tissue Fixative (Streck Laboratories, La Vista, NE) and then placed in 30% sucrose overnight.
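Before the histology protocol continues below, the titration arithmetic just described can be sketched as follows. The dilution scheme (1 ml of homogenate plated per dilution) and the 10 PFU/ml detection-limit rule follow the text; the plaque counts themselves are hypothetical.

```python
def titer_pfu_per_ml(plaque_counts):
    """plaque_counts: {dilution exponent: plaque count}, e.g. {-2: 28} for the 10^-2 well."""
    positive = {e: n for e, n in plaque_counts.items() if n > 0}
    if not positive:
        return 10  # negative at the 10^-1 dilution is assigned 10 PFU/ml
    e = max(positive)                # least dilute well with countable plaques
    return positive[e] * 10 ** (-e)  # 1 ml plated, so PFU/ml = count / dilution

print(titer_pfu_per_ml({-1: 250, -2: 28, -3: 3}))  # ~2.5e3 PFU/ml
print(titer_pfu_per_ml({-1: 0, -2: 0, -3: 0}))     # below detection -> 10
```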
To prepare for cryostat sectioning, tissues were embedded in Histo Prep (Fisher Scientific, Fair Lawn, NJ) and frozen in 2-methylbutane submerged in liquid nitrogen. Tissues were cross-sectioned at 9 µm using a LEICA cryostat LC1900 sectioner, placed on Superfrost Plus microscope slides (Fisher Scientific, Pittsburgh, PA), air-dried at room temperature, and frozen at -80°C until further use. In the experiments using hematoxylin and eosin staining, the tissue slides were rehydrated in ethanol baths, immersed in Gill's Hematoxylin 3 and 1% eosin Y (Fisher Scientific, Fair Lawn, NJ), and then dehydrated in ethanol. Slides were mounted in permanent media and examined using a Nikon TE300 microscope with a SPOT camera attached (Diagnostic Instruments, Inc., Detroit, MI). For experiments using fluorescence staining, the tissue slides were permeabilized with 1:1 acetone:methanol and blocked with 0.1% BSA. For direct visualization of GFP staining, the slides were counterstained with DAPI (Molecular Probes, Portland, OR) and mounted with Vectashield (Vector Laboratories, Inc., Burlingame, CA). For staining with anti-HCMV antibody, the permeabilized slides were stained with anti-IE1 monoclonal antibody (Goodwin Institute of Cancer Research, Plantation, FL), and then with secondary anti-mouse IgG conjugated to FITC and/or Texas Red (Vector Laboratories, Inc., Burlingame, CA), prior to counterstaining with DAPI. Images were visualized on a Nikon PCM2000 confocal microscope system [46]. The monoclonal antibodies against cytokeratins K13 and K14 were purchased from United States Biological (Swampscott, MA).

Western analysis

The tissues were either mock-infected or infected with 2 × 10^4 PFU of different HCMV strains and mutants, then incubated for 0-10 days. Viral proteins were isolated as described previously [47]. The polypeptides from cell lysates were separated on either SDS/7.5% polyacrylamide gels or SDS/9% polyacrylamide gels cross-linked with N,N′-methylenebisacrylamide, and transferred electrically to nitrocellulose membranes. We stained the membranes using the antibodies against HCMV proteins and human actin in the presence of a chemiluminescent substrate (Amersham Inc., Arlington Heights, IL), and analyzed the stained membranes with a STORM840 phosphorimager. Quantitation was performed in the linear range of protein detection [47]. The monoclonal antibodies c1202, c1203s, and c1207, which react with HCMV proteins UL44, IE1, and UL99, respectively, were purchased from the Goodwin Institute for Cancer Research (Plantation, FL). The monoclonal antibody against human actin was purchased from Sigma Inc. (St Louis, MO).

Treatment with ganciclovir

Two different sets of experiments were carried out to study the effect of ganciclovir (GCV) [31,32] on HCMV replication in the oral tissues. In the first, the tissues were pre-incubated with different concentrations (10 µM and 100 µM) of GCV for 2 hours and then incubated with the viral inoculum in the presence of GCV for 4 hours to initiate HCMV infection. In the second set of experiments, the tissues were incubated with viral inoculum for 4 hours in the absence of GCV and then incubated in fresh media in the absence of GCV for an additional 24 hours before different concentrations of GCV were added to the culture. The infected tissues were incubated in the GCV-containing media for different periods of time and harvested, and viral titers in these tissues were determined by plaque assays on HFFs.
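As a minimal illustration of the quantitation step above, band intensities for a viral protein can be normalized to the actin signal of the same lane before lanes are compared. The sketch below uses invented, arbitrary intensity values; it is not the study's data.

```python
# Hypothetical phosphorimager counts per lane; actin serves as the
# internal loading control for each lane.
lanes = {
    "mock":     {"IE1": 120,   "actin": 9800},
    "TowneBAC": {"IE1": 54000, "actin": 10100},
    "dUS18":    {"IE1": 1900,  "actin": 9650},
}

for name, bands in lanes.items():
    normalized = bands["IE1"] / bands["actin"]  # IE1 signal per unit of actin
    print(f"{name:>9}: IE1/actin = {normalized:.3f}")
```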
Growth kinetics of HCMV in cultured fibroblasts

Growth analyses of different HCMV strains and mutants in vitro in primary human foreskin fibroblasts (HFFs) were carried out as described previously [23]. Briefly, 1 × 10^6 human foreskin fibroblasts were infected at an MOI of 0.05 PFU per cell. The cells and media were harvested at 0, 2, 4, 7, 10, and 14 days post infection, and viral stocks were prepared by adding an equal volume of 10% skim milk, followed by sonication. The titers of the viral stocks were determined by plaque assays on HFFs in triplicate.
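The inoculum arithmetic implied by the growth-kinetics protocol above is straightforward; the sketch below assumes a hypothetical stock titer to show how the volume of virus stock follows from the MOI and the cell number.

```python
cells = 1e6          # HFFs to be infected
moi = 0.05           # PFU per cell, as in the protocol above
stock_titer = 1e7    # hypothetical stock titer, PFU/ml

pfu_needed = cells * moi            # 5e4 PFU total
volume_ml = pfu_needed / stock_titer
print(f"add {pfu_needed:.0f} PFU = {volume_ml * 1000:.0f} microliters of stock")
```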
Pentoxifylline and the proteasome inhibitor MG132 induce apoptosis in human leukemia U937 cells through a decrease in the expression of Bcl-2 and Bcl-XL and phosphorylation of p65

Background

In oncology, the resistance of cancerous cells to chemotherapy continues to be the principal limitation. The nuclear factor-kappa B (NF-κB) transcription factor plays an important role in tumor escape and resistance to chemotherapy, and this factor regulates several pathways that promote tumor survival, including some antiapoptotic proteins such as Bcl-2 and Bcl-XL. In this study, we investigated, in U937 human leukemia cells, the effects of pentoxifylline (PTX) and the MG132 proteasome inhibitor, drugs that can disrupt the NF-κB pathway. For this, we evaluated viability, apoptosis, cell cycle, caspases-3, -8, and -9, cytochrome c release, mitochondrial membrane potential loss, p65 phosphorylation, and the modifications in the expression of pro- and antiapoptotic genes and of the Bcl-2 and Bcl-XL antiapoptotic proteins.

Results

The two drugs affect the viability of the leukemia cells in a time-dependent manner. The greatest percentage of apoptosis was obtained with the combination of the drugs; likewise, PTX and MG132 induce G1 phase cell cycle arrest, cleavage of caspases-3, -8, and -9, cytochrome c release, and mitochondrial membrane potential loss in U937 human leukemia cells. In these cells, PTX and the MG132 proteasome inhibitor decrease p65 (NF-κB subunit) phosphorylation and the antiapoptotic proteins Bcl-2 and Bcl-XL. With the combination of these drugs, we also observed overexpression of the proapoptotic genes BAX, DIABLO, and FAS, while the genes BCL-XL, MCL-1, survivin, IκB, and P65 were downregulated.

Conclusions

The two drugs used induce apoptosis per se, and this cytotoxicity was greater with the combination of both drugs. These observations are related to caspase-9 and -3 cleavage, G1 phase cell cycle arrest, and a decrease in p65 phosphorylation and in the Bcl-2 and Bcl-XL proteins. In addition, this combination of drugs promotes the upregulation of proapoptotic genes and the downregulation of antiapoptotic genes. These observations strongly support its antileukemic potential.

Background

Leukemia is a heterogeneous group of diseases characterized by infiltration of neoplastic cells of the hematopoietic system into the blood, bone marrow, and other tissues [1,2]. Leukemia is the most common malignancy among people aged <20 years. In the last decade, these diseases have exhibited a clear ascending pattern in the morbidity index, becoming a great challenge to health institutions [3]. The main treatment for this disease is chemotherapy. However, its results are very often limited due to the treatment resistance that the neoplastic cells develop [4,5]. In an attempt to increase the efficiency of antileukemic treatments, higher doses of the cytotoxic agents, or different combinations of them, have been used [6,7], but in the majority of cases the higher doses have been applied empirically, without good results and with increasing side effects. Given this situation, our research team has developed the concept of chemotherapy with a rational molecular basis.
This concept is based on the premise that chemotherapy acts mainly by inducing a genetically programmed death of the cell called apoptosis, which depends in turn on de novo protein synthesis and on the activation of biochemical factors as a result of a modification in the balance between the expression of pro- and antiapoptotic genes in response to treatment [8,9]. Cells undergoing apoptosis show internucleosomal fragmentation of the DNA, followed by nuclear and cellular morphologic alterations, which lead to a loss of membrane integrity and the formation of apoptotic bodies. All of these processes are mediated by caspases, the main enzymes that act as apoptosis initiators and effectors. Some of these molecules can activate themselves, while others require other caspases in order to acquire biological activity. This proteolytic cascade breaks down specific intracellular proteins, including nuclear proteins and proteins of the cytoskeleton, endoplasmic reticulum, and cytosol, finally hydrolyzing the DNA [10-12]. On the other hand, it is noteworthy that an apoptotic stimulus such as that generated by chemotherapy not only induces apoptosis but can also activate antiapoptotic mechanisms [13,14]. Similarly, the nuclear factor-kappa B (NF-κB) transcription factor plays an important role in tumor cell growth, proliferation, invasion, and survival. In inactive cells, this factor is bound to its specific inhibitor I-kappa B (IκB), which sequesters NF-κB in the cytoplasm and prevents activation of target genes [15-18]. In this respect, NF-κB can activate antiapoptotic genes such as Bcl-2, Bcl-XL, and survivin, affecting chemotherapy efficiency; indeed, chemotherapy or radiotherapy itself can activate the NF-κB factor [19-21]. Blast cells exhibit overexpression of antiapoptotic proteins (Bcl-2 and Bcl-XL), which increases resistance to antitumor therapy [22]. In this regard, the drug PTX can prevent the phosphorylation of serines 32 and 36 of IκB, and we have found that PTX in combination with antitumor drugs such as adriamycin and cisplatin induced, in vitro and in vivo, a significant increment of apoptosis in fresh leukemic human cells [8], murine lymphoma models [9], and cervical cancer cells [23]. Similar results have also been observed with PTX in other studies [24]. PTX is a xanthine and a competitive nonselective phosphodiesterase inhibitor that inhibits tumor necrosis factor (TNF) and leukotriene synthesis and reduces inflammation [25,26]. The MG132 proteasome inhibitor is another drug that decreases NF-κB activity [27]. Proteasome inhibitors are becoming possible therapeutic agents for a variety of human tumor types that are refractory to available chemotherapy and radiotherapy modalities [28,29]. The proteasome is a multicatalytic complex that is responsible for regulating apoptosis, the cell cycle, cell proliferation, and other physiological processes by regulating the levels of important signaling proteins such as NF-κB and IκB; proteasome inhibitors such as MG132 have been shown to induce apoptosis in tumor cells [30,31]. This is important because apoptosis is regulated by the ubiquitin/proteasome system at various levels [32].
The aim of the present work was to study, in vitro in U937 leukemic cells, the effects on viability, apoptosis, cell cycle, caspase cleavage, cytochrome c release, mitochondrial membrane potential (ΔΨm), the Bcl-2 and Bcl-XL antiapoptotic proteins, and related genes of PTX and/or the MG132 proteasome inhibitor, compounds that exert an NF-κB-mediated inhibitory effect.

Cells

The human monocytic leukemia cell line U937 (ATCC CRL-1593.2) was used. These cells were cultivated in RPMI-1640 culture medium (GIBCO, Invitrogen Co., Carlsbad, CA, USA) with the addition of 10% fetal bovine serum (FBS) (GIBCO), a 1% solution of L-glutamine 100X (GIBCO), and antibiotics (GIBCO); this medium is designated RPMI-S. The cells were maintained at 37°C in a humid atmosphere containing 5% CO2 and 95% air.

Drugs

PTX (Sigma-Aldrich, St. Louis, MO, USA) was dissolved in a sterile saline solution (0.15 M) at a 200 mM concentration and stored at -4°C for a maximum period of 1 week. The MG132 proteasome inhibitor (N-CBZ-LEU-LEU-AL, Sigma-Aldrich; 0.5 mg) was dissolved in 0.250 mL of dimethyl sulfoxide (DMSO, Sigma-Aldrich), divided into 20 μL aliquots, and stored at -20°C. Immediately prior to use, it was diluted in RPMI-1640 culture medium to a final concentration of 1 μM.

Cell culture and experimental conditions

U937 cells (2.5 × 10^5/mL in T75 flasks, Corning Incorporated, Corning, NY, USA) were grown in RPMI-S for 24 hours and collected by centrifugation. The cells were reseeded onto 24-well plates and treated with PTX (8 mM), MG132 (1 μΜ), or PTX + MG132 (final concentrations). The cells were incubated with PTX for 1 hour prior to the addition of MG132. All experiments were carried out 24 hours after treatment, with the exception of p65 phosphorylation, which was analyzed 1 hour after treatment with PTX or MG132, and the gene expression studies, in which the cells were incubated with the drugs for only 3 hours. The concentrations of the treatments employed in this study were previously confirmed as being the most favorable for the induction of apoptosis in this experimental model [33,34].

Cellular viability

Cell viability was determined at different times in U937 cells (2 × 10^4). Cells were incubated with PTX, MG132, or PTX + MG132 for 18, 24, 36, and 48 hours, and a commercial WST-1 cell proliferation reagent kit (BioVision, Inc., Milpitas, CA, USA) was used following the manufacturer's instructions. This assay is based on the reduction of tetrazolium salts (WST-1) to formazan. After the incubation, 10 μL/well of WST-1/ECS reagent was added and the U937 cells were incubated for another 3 hours. The absorbance was measured in a microplate reader (Synergy™ HT Multi-Mode Microplate Reader; BioTek, Winooski, VT, USA) at 450 nm with a reference wavelength of 690 nm. Data are reported as the mean ± standard deviation of the optical density values obtained in each group.

Cell cycle analysis by flow cytometry

For cell cycle analysis, the U937 cells were synchronized [35]. In brief, cells were cultured in RPMI-1640 containing 5% FBS for 12 hours; the cells were then washed and cultured in RPMI-1640 containing 1% FBS overnight. Afterwards, the cells were washed with PBS and changed to serum-free medium for 18 hours; finally, the cells were passaged and released into the cell cycle by the addition of 10% FBS in RPMI-1640 culture medium, and 1 × 10^6 cells were treated for 24 hours with the different drugs.
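Before the cell cycle protocol continues below, the WST-1 viability readout just described reduces to background-corrected absorbance expressed relative to the untreated control; the OD values in this sketch are invented for illustration only.

```python
def viability_percent(od450, od690, ctrl450, ctrl690):
    """Percent viability: background-corrected OD of a treated well
    (450 nm reading minus 690 nm reference) relative to the control well."""
    treated = od450 - od690
    control = ctrl450 - ctrl690
    return 100.0 * treated / control

# Hypothetical readings: a treated well versus the untreated control
print(f"{viability_percent(0.78, 0.05, 1.92, 0.05):.1f}% of control")
```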
The BD Cycletest™ Plus DNA Reagent Kit was used following the manufacturer's instructions (BD Biosciences, San Jose, CA, USA). DNA QC Particles (BD Biosciences) were used for verification of instrument performance and quality control of the BD FACSAria I (BD Biosciences) cell sorter employed in DNA analysis. For each sample, at least 20,000 events were acquired and data were processed with FlowJo v7.6.5 software (Tree Star Inc., OR, USA).

Assessment of apoptosis induction by PTX and the MG132 proteasome inhibitor
Apoptosis was evaluated by means of the Annexin-V-FLUOS Staining Kit (Annexin-V-Fluos; Roche, Mannheim, Germany). Briefly, 1 × 10^6 U937 cells were treated for 24 hours with PTX, MG132, or PTX + MG132; the samples were then washed twice with PBS and resuspended in 100 μL of incubation buffer, and 2 μL of Annexin V-fluorescein isothiocyanate (FITC) and 2 μL of propidium iodide (PI) solution were added. The samples were mixed gently and incubated for 10 min at 20°C in the dark. Finally, 400 μL of incubation buffer was added to each suspension, which was analyzed by flow cytometry. Annexin V-FITC-negative and PI-negative cells were considered live cells. Cells positive for Annexin V-FITC but negative for PI were considered to be in early apoptosis. Cells positive for both Annexin V-FITC and PI were considered to be undergoing late apoptosis, and cells positive for PI alone were considered necrotic. At least 20,000 events were acquired with the FACSAria I cell sorter and analysis was performed using FACSDiva software (BD Biosciences).

Assessment of mitochondrial membrane potential by flow cytometry
U937 cells (1 × 10^6) were treated for 24 hours with the different drugs; the cells were then washed twice with PBS, resuspended in 500 μL of PBS containing 20 nM of 3,3′-dihexyloxacarbocyanine iodide (DiOC6, Sigma-Aldrich), and incubated at 37°C for 15 min, and the percentage of cells with ΔΨm loss was analyzed by flow cytometry. As an internal control of disrupted ΔΨm, cells treated for 4 hours with 150 μM of the protonophore carbonyl cyanide m-chlorophenylhydrazone (CCCP, Sigma-Aldrich) served as a positive control. Flow cytometry was performed using the FACSAria I (BD Biosciences). At least 20,000 events were analyzed with FACSDiva software (BD Biosciences) in each sample.

Detection of Bcl-2 and Bcl-XL antiapoptotic proteins and p65 phosphorylation by flow cytometry
For determination of Bcl-2, Bcl-XL, and phosphorylated p65, 1 × 10^6 U937 cells were treated or not treated for 1 hour with PTX, MG132, or PTX + MG132. We employed Alexa Fluor® 647 mouse anti-human Bcl-2 and Alexa Fluor® 647 mouse anti-human Bcl-XL (Santa Cruz Biotechnology) and Alexa Fluor® 647 mouse anti-human NF-κB p65 (pS529) (BD Biosciences) antibodies. The staining procedures were performed according to the protocols for detecting protein levels or phosphorylation state by flow cytometry. An appropriate isotype control was utilized in each test to adjust for background fluorescence, and the results are represented as the mean fluorescence intensity (MFI) of the Bcl-2 and Bcl-XL proteins and the phosphorylated p65 protein. For each sample, at least 20,000 events were acquired in a FACSAria I cell sorter (BD Biosciences) and data were processed with FACSDiva software (BD Biosciences).

Quantitative real-time PCR
Total RNA of the U937 cells (5 × 10^6) was obtained after 3 hours of incubation with the different treatments using the PureLink™ Micro-to-Midi purification system for total RNA (Invitrogen Co.).
cDNA was synthesized from 5 μg of total RNA using the SuperScript™ III First-Strand Synthesis SuperMix kit (Invitrogen Co.). Real-time PCR was carried out with the LightCycler® 2.0 system (Roche Applied Science, Mannheim, Germany), employing DNA Master PLUS SYBR Green I (Roche Applied Science). The PCR program consisted of an initial 10-min step at 95°C, followed by 40 cycles of 15 sec at 95°C, 5 sec at 60°C, and 15 sec at 72°C. Analysis of the PCR products was carried out with the LightCycler® software (Roche Applied Science). Data are presented as relative normalized quantities employing L32 ribosomal gene expression; the specificity of the amplification reaction was verified and was nearly 100%. The oligonucleotides (Invitrogen Co.) were designed from the GenBank nucleotide database of the National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov) using the Oligo v.6 program (Table 1).

Statistical analysis
All experiments were carried out in triplicate and were repeated three times. The values represent the mean ± standard deviation of the values obtained. Statistical analysis was performed with the non-parametric Mann-Whitney U test, considering p < 0.05 as significant. In some experiments, we calculated the Δ%, which represents the percentage of increase or diminution in relation to the corresponding untreated control group (UCG). For the different gene expressions, variations ≥30% compared with the constitutive gene were considered significant [8]. The Committee of Ethics, Biosafety and Research of CIBO approved the study (number 1305-2005-16).

PTX and the MG132 proteasome inhibitor induce a decrease in viability in U937 cells
We evaluated the effect on viability of U937 leukemic cells treated with both drugs. PTX, MG132, and PTX + MG132 induced inhibition of cell viability in a time-dependent manner (Figure 1). PTX- and PTX + MG132-treated cells behaved similarly at 18 hours, showing around a 60% diminution of cell viability (p < 0.05 vs. all groups); these values remained practically unchanged at the later time points. In contrast, at this same time the cellular viability was only slightly modified by MG132 treatment (p < 0.05 vs. the other treated groups) and reached values similar to those of the other two treated groups 48 hours after treatment (optical density: PTX 0.48 ± 0.06, MG132 0.54 ± 0.06, PTX + MG132 0.49 ± 0.11; p < 0.05 vs. untreated control group, 1.87 ± 0.9).

PTX and the MG132 proteasome inhibitor induce G1 cell cycle arrest in U937 cells
Our next interest was to elucidate whether the combination PTX + MG132 modulates the cell cycle. To address this point, U937 cells were treated under similar conditions with PTX, MG132, or PTX + MG132 for 24 hours and, subsequently, flow cytometry analysis of DNA content was performed to determine the cell populations in the different cell cycle phases. As depicted in Figure 2, the percentage of cells of the untreated control group in G1 phase was 52.7 ± 3.8%. This percentage increased in the PTX-treated group (Δ% = 25%), and the maximum increment was observed in the MG132 and PTX + MG132 treated groups, with nearly Δ% = 45% for both groups (p < 0.05). Opposite results were observed for the S phase: 34.5 ± 3.4% of untreated U937 tumor cells were in S phase, whereas the Δ% for PTX, MG132, and the combination of both drugs was −26.4%, −49.2%, and −54.3%, respectively (p < 0.05).
Finally, for the G2 phase, the percentage of cells in the untreated control group was 12.8 ± 3.6%; it diminished in the treated groups (Δ% = −15.2%, −24.5%, and −10.9% for the PTX, MG132, and PTX + MG132 groups, respectively). These observations suggest that PTX and MG132, alone or in combination, induce cell cycle arrest in the G1 phase.

Apoptosis induction by PTX + MG132
At 24 hours of culture, apoptosis induced by the different treatments was evaluated in the U937 human leukemia cells under the experimental conditions previously described. As observed in Figure 3, the untreated control group showed a low percentage of early and late apoptosis (2.1 ± 0.9% and 2.6 ± 1.1%, respectively) compared with the groups treated exclusively with either PTX (18.2 ± 2.1% and 28.5 ± 7.3% of early and late apoptosis, respectively; p < 0.05) or the MG132 proteasome inhibitor (28.1 ± 8.1% and 20.7 ± 6.6% of early and late apoptosis, respectively; p < 0.05 vs. untreated control group). It was also very interesting to observe that the cultures exposed to PTX + MG132 showed a greater percentage of late apoptosis, 44.1 ± 4.5%, in comparison with all other groups (p < 0.05).
Figure 3. Induction of apoptosis in U937 cells treated with PTX, MG132, and PTX + MG132. U937 cells were incubated exclusively in RPMI-S culture medium or were treated with PTX, MG132, or PTX + MG132 for 24 hours. After incubation, apoptosis was assessed using Annexin V-FITC/PI. The results represent the mean ± standard deviation of three independent experiments performed in triplicate. Mann-Whitney U test: *p < 0.05, all groups vs. untreated control group; •p < 0.05, PTX + MG132 vs. all groups.

PTX + MG132 induces mitochondrial membrane potential (ΔΨm) loss
Because mitochondria play an important role in apoptosis, we determined the ΔΨm in U937 leukemia cells treated with PTX, MG132, or PTX + MG132; the results are represented in Figure 4. The ΔΨm did not change in the untreated control group. However, when the cells were treated with either PTX or MG132, an important loss of the ΔΨm was noted (43.4 ± 4.7% and 46.8 ± 6.6%, respectively; p < 0.05 compared with the untreated control group), and, interestingly, PTX + MG132 induced an even greater ΔΨm loss in U937 cells (62.7 ± 3.7%) in comparison with the other groups (p < 0.05).

PTX + MG132 increases cleavage of caspases-3 and -9 and cytochrome c release
We determined caspases-3, -8, and -9 and cytochrome c by Western blot. The analysis reveals that the combination PTX + MG132 was more effective in the activation of caspases-9 and -3. The results in Figure 5 show that PTX increased cleavage of caspase-9 (2.8-fold) and caspase-3 (10.4-fold) and the release of cytochrome c (5.2-fold) compared with the untreated control group (p < 0.05). Similarly, the MG132 proteasome inhibitor increased cleavage of caspase-3 (5.4-fold), caspase-9 (1.7-fold), and caspase-8 (1.4-fold) and the release of cytochrome c (4.8-fold) compared with the untreated control group (p < 0.05). It is important to stress that with PTX + MG132 we observed considerable cleavage of caspase-9 (13.5-fold) and caspase-3 (13.4-fold) compared with PTX or MG132 alone and with the untreated control group (p < 0.05). In the same way, when we used both drugs simultaneously, we observed an increase in the release of cytochrome c (5.11-fold) and cleavage of caspase-8 (1.88-fold) in comparison with the untreated control group (p < 0.05).
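As a side note on the statistics reported throughout these results, the comparisons rely on the non-parametric Mann-Whitney U test and the Δ% metric defined in the Methods. A minimal sketch of both calculations in Python is shown below; the replicate values are hypothetical placeholders, not the paper's raw data.

```python
# Sketch of the summary statistics used in these results: the Mann-Whitney
# U test between a treated group and the untreated control, and the delta-%
# metric (percent increase or diminution relative to the untreated control).
# The replicate values below are hypothetical, not the paper's raw data.
from statistics import mean

from scipy.stats import mannwhitneyu


def delta_percent(treated: float, control: float) -> float:
    """Percent change of a treated group mean relative to the control mean."""
    return (treated - control) / control * 100.0


control = [2.0, 2.9, 2.9]        # e.g., % late apoptosis, untreated control
ptx_mg132 = [44.0, 40.1, 48.2]   # e.g., % late apoptosis, PTX + MG132

u_stat, p_value = mannwhitneyu(ptx_mg132, control, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # p < 0.05 taken as significant
print(f"delta% = {delta_percent(mean(ptx_mg132), mean(control)):+.0f}%")
```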
Determination by flow cytometry of phosphorylated p65 protein from NF-κB and of the Bcl-2 and Bcl-XL antiapoptotic proteins
The phosphorylated p65 protein was quantified by determining the mean fluorescence intensity by flow cytometry. As we expected, Figure 6 shows that, in comparison with the untreated control group, U937 human leukemia cells treated with PTX or the MG132 proteasome inhibitor decreased the phosphorylation of p65 (p < 0.05), and with the combination of both compounds this diminution was more pronounced. The antiapoptotic proteins Bcl-2 and Bcl-XL play a central role in chemoresistance in tumor cells, and these proteins can be regulated by the NF-κB transcription factor; for this reason, we studied the effect of PTX and MG132 on these proteins. We can observe in Figure 7A that U937 tumor cells treated with PTX, MG132, or PTX + MG132 showed a similar reduction in the expression of the Bcl-2 protein in comparison with the untreated control group (p < 0.05). In the same way, Figure 7B shows that when U937 cells were treated with the same schedule of treatments, a reduction in Bcl-XL was also observed in comparison with the untreated control group (p < 0.05), with a tendency to be most pronounced in the group treated with both drugs. Taken together, these results are in agreement with the apoptosis, caspase cleavage, cytochrome c release, and ΔΨm loss experiments, and strongly suggest that the assayed treatments inhibited the expression of important proteins related to resistance to apoptosis.

Changes in the expression of proapoptotic, antiapoptotic, and NF-κB-related genes
Real-time PCR was employed to determine the relative change in gene expression (Figure 8). Upregulation or downregulation was arbitrarily considered significant when the change was ≥30% in relation to the constitutive gene. In PTX-treated U937 cells, we found upregulation of the BAX, DIABLO, DR4, and FAS proapoptotic genes in comparison with the untreated control group, with the most important upregulation observed for BAX (2.17-fold). Similarly, PTX induced downregulation of the BCL-XL and MCL-1 antiapoptotic genes and of the IκB and p65 NF-κB-related genes. When U937 culture cells were treated with the MG132 proteasome inhibitor, we observed upregulation of the BAX, DIABLO, and FAS genes. In the case of antiapoptotic genes, MG132 induced downregulation of the survivin and p65 genes. When the cell cultures were treated with PTX + MG132, we observed upregulation of the proapoptotic genes BAX (the greatest upregulation, at 4.6-fold), FAS, and DIABLO. In PTX + MG132-treated U937 culture cells, the antiapoptotic genes BCL-XL, MCL-1, and survivin were downregulated, as were the NF-κB-related genes IκB and p65. In general, with these treatment schedules the data suggest a balance in favor of proapoptotic genes in U937 human leukemia cells treated with PTX + MG132.

Discussion
In the present work, we studied the viability of U937 human leukemia cells treated with PTX and/or MG132 using the spectrophotometric WST-1 assay, as well as apoptosis by flow cytometry. These results are mutually consistent and agree with prior experiments, clearly showing that PTX and MG132 possess important antitumor activity per se, as has been reported [24,36]. The marked increase in cytotoxicity when the drugs are added simultaneously to tumor cell cultures suggests an additive effect. In addition, the clear time- and dose-dependence of the effect speaks to the specificity of the treatments.
In this respect, the potential of PTX and MG132 is great, because there are reports of successful combinations of PTX with antitumoral drugs such as adriamycin [8] and cisplatin [23], and MG132 can synergize the antitumoral activity of a TRAIL receptor agonist [37] and propyl gallate [38]. In this sense, our study coincides with these reports, because we observed an important induction of late apoptosis (44.1%) when the combination PTX + MG132 was used in U937 leukemia cells. The growth arrest of tumor cells in G1 phase provides an opportunity for cells to either undergo apoptosis or induce cell repair mechanisms [39,40]. Interestingly, in our study we observed G1-phase arrest and apoptosis induction with the different treatments. On this point, the lower percentages of cells in S phase are apparently due to the MG132 effect, because the percentage of cells treated exclusively with the proteasome inhibitor showed the same values as cells treated with PTX + MG132, suggesting different mechanisms of action for the two drugs. Based on the correlation of our observations of ΔΨm loss, cytochrome c release, and the caspase assays, we think that the apoptosis observed is due principally to the mitochondrial pathway. In addition, these results are in agreement with previous reports [41,42]. It is known that PTX prevents the activation of NF-κB by avoiding the breakdown of its inhibitory molecule, IκB [43]; MG132 is an inhibitor of NF-κB as well as of the proteasome [44]. We used both drugs in our experiments in order to observe the modifications in p65 (NF-κB subunit) phosphorylation. In U937 leukemic cells, we found a decrease in p65 phosphorylation with PTX, MG132, or their combination compared with untreated cells (p < 0.05). The fact that the experimental treatment induces a decrease in NF-κB phosphorylation suggests the presence of important alterations in a mechanism that promotes resistance to antitumor therapy [45,46]. We decided to study the Bcl-2 and Bcl-XL proteins, which possess antiapoptotic activity that can be regulated by NF-κB activation [47,48]. Other tumor cells have been shown to overexpress these proteins, promoting resistance to radiotherapy or chemotherapy [49,50]. Likewise, some studies have reported that various commonly used chemotherapeutic agents upregulate Bcl-2 and Bcl-XL expression through the NF-κB-dependent pathway [51,52]. These proteins suppress apoptosis by preventing the activation of the caspases that carry out the process [53,54]. The susceptibility of U937 leukemia cells to apoptosis induced by PTX and MG132 can be explained by the decrease in the expression of the Bcl-2 and Bcl-XL proteins when the cells are exposed to both drugs. Moreover, the decrease in the levels of Bcl-2 leads to ΔΨm loss, which is a key event in the induction of apoptosis [55]. The data suggest that PTX + MG132 treatment engages the caspase-dependent mitochondrial (intrinsic) pathway, because we found disruption of the mitochondrial membrane potential, cytochrome c release, and important cleavage of caspase-9, which is well known to lead to caspase-3 cleavage and apoptosis induction [56]. Our results show that the proapoptotic genes exhibited upregulation with the different treatments, a tendency observed mainly for the BAX, DIABLO, and FAS genes. Contrarily, the antiapoptotic genes were downregulated, mainly BCL-XL, MCL-1, and survivin.
It is important to stress that, in the study of proapoptotic genes, we found the highest upregulation in the BAX gene, which agrees with our data on the participation of the mitochondrial pathway observed in this paper. The above suggests a gene balance that favors apoptosis induction. We found downregulation of IκB when leukemia cells were treated with PTX or PTX + MG132, and of the p65 gene when U937 leukemic cells were treated with PTX, MG132, or their combination, suggesting a diminution in the biological availability of these factors that facilitates cell death.

Conclusion
Our results show that, in this experimental model with U937 human leukemia cells, PTX and MG132 displayed antileukemic activity and together have an additive effect. These drugs disturb the NF-κB pathway, induce cell arrest in the G1 phase, decrease the antiapoptotic proteins Bcl-2 and Bcl-XL, and induce ΔΨm loss, cytochrome c release, and cleavage of caspases-3, -9, and -8, resulting in an increase in apoptosis. In addition, the different treatments gave rise to an equilibrium in favor of the expression of proapoptotic genes (in all cases, the standard deviation was not >0.08; upregulation or downregulation was arbitrarily considered significant when the change was ≥30% in relation to constitutive gene expression). For these previously mentioned reasons, in general, our results support the idea that chemotherapy must be administered on rational molecular bases.
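To make the gene-expression readout concrete, the sketch below applies the ≥30% rule described above to normalized expression ratios. Only the BAX value (4.6-fold under PTX + MG132) is taken from the text; the remaining ratios are hypothetical placeholders.

```python
# Sketch of the arbitrary >=30% significance rule for the real-time PCR data:
# expression is normalized to the constitutive L32 gene, and a gene is called
# up- or downregulated only if it changes by at least 30% versus that reference.
def classify(ratio: float, threshold: float = 0.30) -> str:
    """Classify a normalized expression ratio (treated vs. constitutive gene)."""
    if ratio >= 1.0 + threshold:
        return "upregulated"
    if ratio <= 1.0 - threshold:
        return "downregulated"
    return "no significant change"


# BAX (4.6-fold) is from the text; the other values are illustrative only.
ptx_mg132 = {"BAX": 4.6, "FAS": 1.8, "DIABLO": 1.5, "BCL-XL": 0.55, "p65": 0.6}
for gene, ratio in ptx_mg132.items():
    print(f"{gene}: {ratio:.2f}-fold -> {classify(ratio)}")
```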
MODELLING APPROACH FOR MULTI-CRITERIA DECISION-MAKING SELECTION PROCESS FOR ARTIFICIAL LIFT SYSTEMS IN CRUDE OIL PRODUCTION.
Artificial Lift System selection is a key factor in enhancing energy efficiency, increasing profit and expanding asset life in any oil-producing well. Theoretically, this selection has to consider an extensive number of variables, making it hard to select the optimal Artificial Lift System. However, in practice, a limited number of variables and empirical knowledge are used in this selection process. The latter increases system failure probability due to pump-well incompatibility. Multi-criteria decision-making methods provide mathematical modelling for selection processes with a finite set of alternatives and a high number of criteria. These methodologies make it feasible to reach a final decision considering all variables involved. In this paper, we present a software application based on a sequential mathematical analysis of hierarchies for variables, a numerical validation of input data and, finally, an implementation of Multi-Criteria Decision Making (MCDM) methods (SAW, ELECTRE and VIKOR) to select the most adequate artificial lift system for crude oil production in Colombia. Its novel algorithm is designed to rank seven Artificial Lift Systems, considering diverse variables in order to make the decision. The results are validated with field data in a case study relating to a Colombian oilfield, with the aim of reducing the Artificial Lift failure rate.

THEORETICAL FRAME

Artificial lift
In some cases, Artificial Lift System (ALS) selection for crude oil production is mostly based on operator experience, on analogy or comparison with similar cases, on required flow rates, or on well depths and bottomhole pressure, among other things. Although these variables can be good criteria in some cases, they do not have a strong analytical/mathematical basis, and they rely on a set of criteria with little or no application of the scientific method. Despite the fact that the application of many of these criteria results in acceptable ALS performance, it is worth noting that there is a big opportunity for developing an automated process based on algorithms that model the mathematical procedures used for decision-making. Software applications reduce time-consuming processes, standardize procedures, decrease the likelihood of errors in selection, optimize downhole pump performance, and increase asset life. This paper presents an algorithm and a software application developed to perform the selection of artificial lift systems for crude oil production in Colombia. The process is based on three MCDM methods with a prior Analytic Hierarchy Process (AHP) setup. Subsequently, its results are evaluated with a brief case study using a Colombian field's well sample, with the intention of selecting the most suitable ALS, hence reducing the current failure rate.
MULTI-CRITERIA DECISION ANALYSIS METHODOLOGY
Multi-Criteria Decision Making (MCDM) forms part of the advanced analytical methods developed to improve efficiency, reduce time-consuming processes, and make better decisions [1]. MCDM methodologies confront conflicting criteria or input variables (Iv) and generate matrix systems that consider possible solutions to a specific situation, sorting and ranking them quantitatively according to their relevance as possible solutions to the given situation [2]. Usually, MCDM methodologies are preferred when there is no single clear Iv that drives the output of a decision-making analysis. Instead, a change in any Iv leads to a variation of the matrix systems, hence leading to a different set of alternatives [2]. This behaviour is referred to as non-dominated. There are many different MCDM methods, and all of them differ in their quantitative result and usually in the ranking of alternatives. For this study, three different methods were chosen (SAW, VIKOR and ELECTRE) according to the differences in their mathematical treatment, in order to see which would provide the best accuracy in relation to empirical field data and on an engineering basis.

ANALYTIC HIERARCHY PROCESS (AHP)
Besides the three MCDM methods already mentioned, the AHP process was used to define a priority vector that contains normalized values of the n input variables and pre-determines the weight (Wi) of each Iv. The procedure used is shown below [1]:
1. Create an n × n matrix (pair-wise comparison matrix) comparing every Iv against each other Iv; this matrix will be referred to as M(n×n). The comparison is based on a scale predefined before AHP is used. For any pair of Ivs, a numerical value (nij) that represents how important one is with regard to the other is required.
2. Add up all resulting values for every column to obtain Nj (Equation 1): Nj = Σi nij. Then, divide every nij by Nj to normalize it (Equation 2): n̄ij = nij / Nj. The resulting matrix is M̄(n×n).
3. Add up the n̄ij values of every row of M̄(n×n) and divide by n (Equation 3) to determine the priority vector (W): Wi = (Σj n̄ij) / n.
A consistency verification of the Iv is recommended at this point. "To ensure that the judgments of decision makers are consistent" [3], a consistency ratio (CR) is introduced. If CR exceeds 0.1, this means that one or more of the scale values used before the AHP application need to be redefined [1].
For all three MCDM methods, a matrix X(m×n) is required (Equation 4): once the set of alternatives (possible solutions to the situation) is defined, construct a matrix X(m×n) of alternatives (Aj) against criteria (Ivi). The process shown above is performed before the SAW, ELECTRE or VIKOR methods are applied, in order to define one primary input for all of these methods.

SIMPLE ADDITIVE WEIGHTING (SAW) METHOD
SAW is a method for the linear combination of the weights that were given to all input variables according to their supposed influence on each possible alternative. It is the most often used method due to its relative simplicity [3]. The procedure to determine a set of alternatives for a specified situation with SAW consists of the following steps [1]:
1. Multiply X(m×n) by every factor of W and sum along the criteria; the result is the SAW vector, where every element Aj (j = 1,…,m) scores one alternative: Aj = Σi Wi·Xj,i, with i = 1,…,n and j = 1,…,m.
2. Ranking the SAW vector provides the most suitable solution for the given situation (the higher the Aj, the nearer to a unanimous decision).
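As an illustration of the two procedures just described, the sketch below derives the priority vector and consistency ratio from a toy 3 × 3 pairwise comparison matrix and then applies SAW. The matrices, and the random index used for the CR, are illustrative assumptions rather than values from this study.

```python
# Sketch of AHP (priority vector W plus consistency ratio) followed by SAW.
# The pairwise matrix M and decision matrix X are toy values, and the random
# index RI follows Saaty's commonly tabulated figures; both are assumptions.
import numpy as np

M = np.array([[1.0, 3.0, 5.0],        # pairwise comparison of 3 criteria
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = M.shape[0]

M_bar = M / M.sum(axis=0)             # Equations 1-2: column totals Nj, then nij/Nj
W = M_bar.sum(axis=1) / n             # Equation 3: row sums, normalized to a priority vector

lambda_max = (M @ W / W).mean()       # principal eigenvalue estimate
CI = (lambda_max - n) / (n - 1)       # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random index for n = 3
CR = CI / RI                          # must stay below 0.1
print(f"W = {W.round(3)}, CR = {CR:.3f}")

# SAW steps 1-2: Aj = sum_i Wi * Xj,i, then rank the alternatives by Aj.
X = np.array([[0.7, 0.4, 0.9],        # 2 alternatives scored on the 3 criteria
              [0.5, 0.8, 0.6]])
A = X @ W
print("SAW scores:", A.round(3), "-> best alternative:", int(A.argmax()))
```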
THE "VISEKRITERIJUMSKA OPTIMIZACIJA I KOMPROMISNO RESENJE" (VIKOR) METHOD
This is an MCDM method that gives a maximum point of "group utility" for the "majority" of decision makers, providing minimum regret to the "opponent", according to the authors [4][5]. The procedure to determine the maximum group utility in a given situation through VIKOR is as follows [6]:
1. Determine the best value (fi*) and the worst value (fi−) of every Iv in the X(m×n) matrix (see Equation 4), where i stands for the Iv (input variable) index and j for the A (alternative) index.
2. Obtain the distances to the ideal solutions: Sj = Σi Wi·(fi* − xj,i)/(fi* − fi−) and Rj = maxi [Wi·(fi* − xj,i)/(fi* − fi−)], where Sj is the distance of the jth alternative to the "positive ideal solution" (best combination) and Rj is its distance to the "negative ideal solution" (worst combination) [7].
3. Calculate Qj by the following equation: Qj = v·(Sj − S*)/(S− − S*) + (1 − v)·(Rj − R*)/(R− − R*), where S* is the minimum value of Sj, S− is the maximum value of Sj, R* is the minimum value of Rj, R− is the maximum value of Rj, and v ranges between 0 and 1. When v is 1, the alternative is selected by unanimity (with regard to which Iv affects the selection more), and 0 means that there is no consensus between the decision makers [8].
4. Rank Qj, Sj and Rj from the lowest value to the highest. The lowest Qj value is the best decision to be taken in the given situation. In addition, the selected alternative must satisfy two conditions:
a. Condition one ("Acceptable advantage") [6]: Q(A2) − Q(A1) ≥ DQ, where A1 and A2 are the first and the second best options in the Q ranking, respectively, and DQ is the advantage threshold.
b. Condition two ("Acceptable stability in decision making"): A1 must be ranked the best option (the lowest value) in the Sj rank, the Rj rank, or both of them at the same time [6].

THE ELECTRE METHOD (ELIMINATION ET CHOIX TRADUISANT LA RÉALITÉ)
Developed by French scientists [9], ELECTRE is based on the idea that it is better to accept a less accurate result than to overwhelm the decision makers with overly complex mathematical hypotheses [10]. Since the development of this method, more variants have been created (ELECTRE II, ELECTRE III, ELECTRE IV and ELECTRE TRI). For this paper, ELECTRE I was the method used, and it will be referred to as "ELECTRE". The steps for applying ELECTRE are as follows [11] (a numerical sketch of the procedure is given after the list of variables below):
1. From X(m×n) (see Equation 4), calculate the standard decision matrix X*(m×n), where x*j,i is the element located in row j (j = 1,2,3,…,m) and column i (i = 1,2,3,…,n) of the X*(m×n) matrix and k is an alternative row index in X*(m×n).
2. Generate the weighted standard decision matrix Y(m×n) from step 1 and the weight vector W (see Equation 3): yj,i = Wi·x*j,i, where yj,i is the element located in row j and column i of Y(m×n).
3. Determine the conformity (Ck,l) and nonconformity (Dk,l) sets: the conformity set results from comparing every element of Y(m×n) according to its j and i indexes and yj,i values, and the nonconformity set elements are the i indexes that are not present in Ck,l.
4. From these sets, build the conformity matrix C*(m×m) with elements ck,l and the nonconformity matrix D*(m×m) with elements dk,l (here max|·| denotes the maximum value in a set of numbers).
5. From the thresholds calculated with Equations 17 and 18, determine the conformity supremacy F(m×m) and nonconformity supremacy G(m×m) matrices. All elements of F(m×m) (fk,l) and G(m×m) (gk,l) take the value of c*k,l and dk,l, respectively, if a condition is fulfilled (see Equations 19 and 20), and the main diagonal is empty due to their derivation from C*(m×m) and D*(m×m).
6. Form the total dominance matrix E(m×m), whose elements (ek,l) are calculated based on fk,l and gk,l.
7. Add up all the elements in every E(m×m) row to calculate a total for every alternative (Aj), and rank all the Aj values from highest to lowest (the highest value is the best option for the given situation).

ARTIFICIAL LIFT SYSTEM SELECTION
There are five types of basic Artificial Lift System (ALS) used in oil wells, classified according to their mechanical and operational differences (some of these types are subdivided into other ALS). The major ALS for oil production are the Electro-submersible Pump (ESP), Sucker-Rod Pump (SRP), Gas Lift (GL), Hydraulic Piston Pump (HP), Hydraulic Jet Pump (HJP), and Progressing Cavity Pump (PCP) [12], plus one more that is a combination of the former systems and is worth mentioning due to the advantages it offers: the Electrical Submersible Progressing Cavity Pump (ESPCP).
In some fields, ALS selection is mostly based on operator experience [12], analogy with similar cases, required flow rates, well depths, bottomhole pressure, etc., which are good criteria but do not have a strong analytical/mathematical basis that considers other properties or characteristics, leaving out of the analysis variables such as:
• The field's stage of production (newly discovered, mature, etc.), due to fluid production pressure drops and new conditions arising in the wells.
• The implementation of future or current recovery methods.
• Supply chain constraints.
• Surface facility capacity and availability.
• Well service equipment availability.
• Energy availability/energy costs.
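The following sketch implements the ELECTRE procedure assembled above. Because the paper's threshold equations (Equations 17-20) are not reproduced in the text, the common mean-value thresholds are assumed here, together with the usual vector normalization for X*; the matrix values and weights are toy data.

```python
# Sketch of ELECTRE I. Vector normalization and mean-value thresholds are
# assumed standard-variant choices; X and W below are toy values.
import numpy as np

X = np.array([[0.7, 0.4, 0.9],              # m = 3 alternatives, n = 3 criteria
              [0.5, 0.8, 0.6],
              [0.9, 0.3, 0.4]])
W = np.array([0.648, 0.230, 0.122])          # AHP priority vector (toy)
m = X.shape[0]

X_star = X / np.sqrt((X ** 2).sum(axis=0))   # step 1: standard decision matrix
Y = X_star * W                               # step 2: weighted matrix

C = np.zeros((m, m))                         # conformity matrix (steps 3-4)
D = np.zeros((m, m))                         # nonconformity matrix
for k in range(m):
    for l in range(m):
        if k == l:
            continue
        conform = Y[k] >= Y[l]               # conformity set: criteria where k >= l
        C[k, l] = W[conform].sum()
        diffs = np.abs(Y[k] - Y[l])
        D[k, l] = diffs[~conform].max() / diffs.max() if (~conform).any() else 0.0

c_bar = C.sum() / (m * (m - 1))              # step 5: mean-value thresholds
d_bar = D.sum() / (m * (m - 1))
F = (C >= c_bar).astype(int)                 # conformity supremacy matrix
G = (D <= d_bar).astype(int)                 # nonconformity supremacy matrix
E = F * G                                    # step 6: total dominance matrix
np.fill_diagonal(E, 0)

totals = E.sum(axis=1)                       # step 7: row totals, rank high to low
print("dominance totals:", totals, "-> best alternative:", int(totals.argmax()))
```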
INPUT VARIABLES (CRITERIA) FOR ALS SELECTION
In order to define the scope of this study, an onshore scenario in the Colombian oil and gas industry was chosen to constrain the number of input variables for the MCDM methods. Based on Alemi M. et al. [12] (see Figure 1), a number of input variables were selected (see Table 1). All conventional ALS were included in the analysis, while twenty (20) Ivs were selected and reordered according to their relevance and data availability for the intended case study.
Table 1. Selected Ivs for the MCDM procedure:
• Flowing pressure at pump intake, expressed in psi.
• Gas-to-oil ratio: volume of gas per oil barrel, expressed in scf/stb.
• Water cut: volume of water per total liquid (oil + water) volume, expressed in percentage (%).
• Total fluid production (oil + water), in barrels per day (BPD).
• Measured depth (MD) to pump intake, expressed in feet (ft).
• Inner diameter of the smallest casing over the pump intake interval, expressed as nominal size in inches (in).
• Maximum well deviation from vertical, expressed in degrees.
• Emulsion dynamic viscosity at downhole conditions, expressed in centipoise.
• Sand content in produced fluids, expressed in ppm.
• Distance to the pump supplier's production centre; qualitative variable.
• Production completion type (simple or multiple completed well); qualitative variable.
• Recovery method applied in adjacent oilfield zones; qualitative variable.
• Turn, bend or change in the well's three-dimensional trajectory, expressed in degrees per 100 ft.
• Downhole fluid temperature, expressed in degrees Fahrenheit (F).
• Available well service equipment for ALS installation; qualitative variable.
• Number of potential wells where the selected ALS would be installed, expressed in units.
• Chemical substances considered contaminants in produced fluids; qualitative variable.
• Downhole chemical treatment injected in the well; qualitative variable.
• Electricity generation, in situ (portable electric power generator) or from the national electric grid; qualitative variable.
• Available surface space; qualitative variable.

ALGORITHM AND SOFTWARE APPLICATION DEVELOPMENT
The software application developed was based on an algorithm derived from the procedure, methods, ALS and Iv described in the previous sections. This software is a standalone Windows app with a monolithic architecture in Visual Basic (VB.NET®), with a local database. Figure 2 shows the flow diagram developed and used for this study. It has three main stages (from start to end): system/methodology setup, real variable weight definition, and MCDM application.

CASE STUDY IN A COLOMBIAN FIELD
The Casabe oilfield is located in the Middle Magdalena Valley basin. Currently, this field produces approximately 15,000 BOPD of 14.8 to 23.3 API oil (upper sands) and 15.4 to 24.8 API oil (lower sands) [13], with a low gas-to-oil ratio (lower than 100 scf/STB on average), an average oil viscosity of 40 cP, and diverse water cuts per well, with a water flooding process ongoing. Its lithology is unconsolidated [14], and for that reason high quantities of sand are produced.

WELL SAMPLE SELECTION
In order to evaluate the results of the methodology, a group of 30 wells using PCP as the ALS was chosen. This group represents 13% of the total wells that use PCP in the field, and it represents the Pareto group for failure rate (44% of all failures come from 15% of the total PCP wells). Every well failed between two and eight times in a period of one year.

VARIABLE VALUE ASSIGNMENT
The variable values (relative weights) for the AHP analysis were defined in accordance with engineering field experience and historical ALS application data in Colombian oil fields (see Table 2). For the distribution shown in Table 2, the CR obtained was 0.0981 and the defined VIKOR coefficient was 0.5.
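To show how the VIKOR coefficient v = 0.5 enters the calculation, the sketch below runs the VIKOR steps described earlier on a toy decision matrix. The matrix and weights are illustrative, and the acceptable-advantage threshold DQ = 1/(m − 1) is a commonly used convention assumed here, since the paper's exact equation is not reproduced.

```python
# Sketch of the VIKOR ranking with the case study's compromise coefficient
# v = 0.5. X, W and the DQ convention are assumptions for illustration.
import numpy as np

X = np.array([[0.7, 0.4, 0.9],     # alternatives (rows) x criteria (columns),
              [0.5, 0.8, 0.6],     # all criteria oriented so higher is better
              [0.9, 0.3, 0.4]])
W = np.array([0.648, 0.230, 0.122])
v = 0.5                            # weight of the "group utility" strategy

f_best = X.max(axis=0)             # step 1: best value of every criterion
f_worst = X.min(axis=0)            # and worst value of every criterion

gap = (f_best - X) / (f_best - f_worst)
S = (W * gap).sum(axis=1)          # distance to the positive ideal solution
R = (W * gap).max(axis=1)          # distance to the negative ideal solution
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

order = Q.argsort()                # step 4: lowest Q ranks first
DQ = 1.0 / (len(X) - 1)            # acceptable-advantage threshold (convention)
print("Q =", Q.round(3), "-> ranking:", order)
print("acceptable advantage:", Q[order[1]] - Q[order[0]] >= DQ)
```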
These values affect all the subsequent calculations and vary according to the particular conditions of each well/oilfield (e.g., for this application, flowing pressure is considered a critical criterion; in other applications this will most likely vary).

SIMILAR APPLICATIONS OF MCDM METHODS
Previous works that use MCDM methods for ALS selection [12,15,17,19] were used as a basis for the study presented in this paper. In those cases, Alemi et al. use five ALS with 25 variables, some of them applied to similar, but not equal, offshore scenarios for Iranian oilfields. This paper shows an application of MCDM methods to a Colombian onshore oilfield. For this study, 20 variables with seven ALS were considered. In addition, a novel sequential mathematical approach is made: first, an analytic hierarchy process (AHP) is used for the variable values, followed by a numerical validation of the input data and, finally, MCDM application to the sample of 30 wells to make the results of the three methods comparable. The first two steps of the mathematical process were not used in any of the referenced studies for MCDM in ALS selection.

VARIABLE WEIGHT AFTER AHP IMPLEMENTATION
After implementation of the AHP methodology with the initial relative variable values (see Table 2), a W vector of variable weights was calculated (see Figure 3). The five variables with the highest weights relate to hydraulic flow, well geometry and the fluids/solids produced. These results are in accordance with the most common causes of failure in downhole equipment in the selected well sample. They represent the most important parameters in ALS design in the studied field: downhole pressure for ALS integrity, rod and pipe failure due to well deviation (wearing of the rotating rod surface against the inner surface of the pipe), and peaks of sand production due to unconsolidated reservoir sandstones, which cause ALS failure.

MCDM METHODS RESULTS
After software implementation and ranking definition for every well, all of the numerical values were consolidated into a global distribution of all ALS for the three methods. According to field experience, the most important constraint for an ALS in the field studied is the high content of sand. It can cause consistent damage to the downhole pump, hence the necessity for a system capable of managing elevated solids concentrations. For this purpose, the most suitable ALS are PCP and ESPCP, while the others require a second system (e.g., gravel packs) to control the effects of sand production. In Figures 4 to 6, the distribution obtained for the three methods shows a trend towards HJP, ESP and ESPCP being the most suitable Artificial Lift Systems for the well sample. This is due to the following main reasons:
• ESPCP, along with PCP, is the best ALS for handling high sand production.
• For the remaining parameters, all ALS exhibit similar behaviour for this specific well sample of the field studied.
Despite the fact that the sample analysed is constituted only of wells with PCP installed, due to its good performance in handling fluids with a high solids content and good-to-acceptable performance in the other parameters, in the MCDM final distribution this ALS is not present among the top places in the three rankings. This highlights the need for exclusive variables or Max/Min constraints (if a specific ALS does not fulfil a requirement, it is discarded), as sketched below, and the fact that, instead of PCP, ESPCP is present (as one of the most suitable options) in two out of three distributions, given the same capacity for handling high volumes of sand.
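A minimal sketch of such an exclusive Max/Min constraint screen follows. The 500 ppm sand limit for HJP comes from the conclusions of this study; the other limit and the well record are hypothetical.

```python
# Sketch of exclusive Max/Min constraints: discard an ALS before MCDM scoring
# whenever a hard operating limit is violated. Only HJP's 500 ppm sand limit
# is taken from the text; the SRP limit and the well record are assumptions.
LIMITS = {
    "HJP": {"max_sand_ppm": 500},
    "SRP": {"max_deviation_deg": 60},        # illustrative limit only
}

def feasible(system: str, well: dict) -> bool:
    """Return False if the well violates any hard limit defined for the ALS."""
    rules = LIMITS.get(system, {})
    if well["sand_ppm"] > rules.get("max_sand_ppm", float("inf")):
        return False
    if well["deviation_deg"] > rules.get("max_deviation_deg", float("inf")):
        return False
    return True

well = {"sand_ppm": 800, "deviation_deg": 35}    # hypothetical well record
candidates = [s for s in ["ESP", "SRP", "GL", "HP", "HJP", "PCP", "ESPCP"]
              if feasible(s, well)]
print(candidates)    # HJP is discarded: sand content exceeds its 500 ppm limit
```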
Figures 4 to 6 show the percentage suitability (the number of wells, out of the 30 in the sample, for which that specific ALS should be used) of the alternatives, considering the particular values of the parameters for each well.
Figure 6. ALS distribution with the VIKOR method.

CONCLUSIONS
• Mathematical modelling for decision-making in artificial lift system selection is an excellent way of reducing time-consuming processes, standardizing procedures, decreasing the likelihood of errors, optimizing performance, and increasing asset life. However, the proposed algorithm and software are not a complete replacement for the engineering ALS selection process, due to the quantity and complexity of the parameters involved; the two methodologies complement one another.
• Every oilfield can be divided into sectors or individual wells, each of which has its own analysis model. Any of these models could differ radically from one another or, on the contrary, be very similar in their parameters. Those differences in the input variables could result in significantly different rankings in every MCDM method after the software's implementation. Consequently, every field, sector, group or individual well has to be assigned specific Iv weights separately, considering that every application is different.
• The interpretation of the results for the selected Colombian field shows an optimal selection trend towards the hydraulic jet pump (HJP) as an artificial lift system. Despite the fact that HJP does not perform well for sand production greater than 500 ppm, the rest of the variables considered make this system one of the best, with optimal theoretical performance. By implementing supplementary sand control technologies not included in the methodology described, hydraulic jet pumping could see its performance improved for most of the 30 wells analyzed.
• The order of priority for the artificial lift systems to be implemented was established for each of the mathematical models reviewed, obtaining the following potential solutions in order of priority:
• Hydraulic Jet Pump, with a sand control system included (bottomhole filters, unconventional pump designs, etc.).
• Electro-submersible Pump, with additional technology that can tolerate high contents of sand.
• Progressing Cavity Pump with a bottomhole motor, along with the additional advantages of combining two lifting systems; this is considered a good option for deviated wells.
• Hydraulic Jet Pump, Electrical Submersible Progressing Cavity Pump and Electro-submersible Pump are the best solutions for deviated wells, due to the absence of rotary or reciprocating parts from surface to downhole (these ALS transform electrical/hydraulic energy into movement in downhole systems).
• The Electrical Submersible Pump (ESP) is one of the best options for high water cuts, a characteristic parameter in mature fields with a long history of water injection projects.
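As a closing illustration, the sketch below shows how per-well winners from any of the three methods can be consolidated into the percentage-suitability distribution reported in Figures 4 to 6; the per-well results used here are hypothetical placeholders.

```python
# Sketch of consolidating per-well MCDM winners into a percentage-suitability
# distribution over the 30-well sample. The winners below are toy data.
from collections import Counter

ALS = ["ESP", "SRP", "GL", "HP", "HJP", "PCP", "ESPCP"]

# Best-ranked ALS per well, as produced by one MCDM method (hypothetical).
best_per_well = ["HJP"] * 13 + ["ESP"] * 9 + ["ESPCP"] * 6 + ["PCP"] * 2

counts = Counter(best_per_well)
for system in ALS:
    share = 100.0 * counts.get(system, 0) / len(best_per_well)
    print(f"{system:>5s}: {share:5.1f}% of wells")
```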
VOIP for Telerehabilitation: A Risk Analysis for Privacy, Security, and HIPAA Compliance
Voice over the Internet Protocol (VoIP) systems such as Adobe ConnectNow, Skype, ooVoo, etc. may include the use of software applications for telerehabilitation (TR) therapy that can provide voice and video teleconferencing between patients and therapists. The privacy and security practices of these systems, as well as their HIPAA compliance, have been questioned by information technologists, providers of care, and other health care entities. This paper develops a privacy and security checklist that can be used to determine whether a VoIP system meets privacy and security requirements and whether it is HIPAA compliant. Based on this analysis, specific HIPAA criteria that therapists and health care facilities should follow are outlined and discussed; therapists must weigh the risks and benefits when deciding to use VoIP software for TR.

Introduction
Voice over the Internet Protocol, or VoIP, technologies are used for more than just talking long distance to family members in another city or country. VoIP can take several different forms, including telephone handsets, conferencing units, and mobile units (Kuhn, Walsh, & Fries, 2005). Some of these software systems are used by health care providers to deliver telemedicine, telepsychiatry, and TR services to patients via voice and video teleconferencing. According to the National Institute of Standards and Technology (Special Publication 800-58), the main advantages of VoIP are its cost and its integration with other services, such as video across the Internet, which together provide a teleconferencing system. Most VoIP systems are cheaper to operate than an office telephone or teleconferencing system. The disadvantages of VoIP are the start-up costs and security. Since VoIP is connected to the data network and may share some of the same hardware and software, there are more ways for the data to be compromised, so increased security on a VoIP system may be necessary. Most VoIP technology systems provide a very reliable, high-quality, and competent teleconferencing session between therapists and their patients. However, to determine whether VoIP videoconferencing technologies are private, secure, and compliant with the Health Insurance Portability and Accountability Act (HIPAA), a risk analysis should be performed. This paper provides a description of risk analysis issues as well as a HIPAA compliance checklist that should be used for VoIP software systems that may be used in the TR setting.

Background of HIPAA Privacy and Security Regulations and HITECH
The Health Insurance Portability and Accountability Act (HIPAA), implemented in 1996, encompasses a number of different provisions related to health insurance coverage and electronic data exchange, along with provisions that address the security and privacy of health data. Recent revisions were enacted as part of the American Recovery and Reinvestment Act of 2009 and address the privacy and security concerns associated with the electronic transmission of health information under the HITECH Act (Health Information Technology for Economic and Clinical Health Act) (Lazzarotti, 2009). The Privacy Rule, which took effect in April 2003, established regulations for the use and disclosure of Protected Health Information (PHI) and set into play a number of sections that outline an individual's privacy rights with regard to PHI and the expectations set forth for health care organizations and providers in ensuring that those rights are upheld.
The Security Rule, also implemented in April 2003, outlines three types of security measures that must be taken in order to comply with the privacy rule regulations; it deals specifically with Electronic Protected Health Information (EPHI). These measures include the administrative, physical, and technical safeguards that should be administered as part of the Security Rule. The HITECH Act revisions require increases in civil penalties for different categories of violations, and penalties will apply even where the covered entity did not know (and with the exercise of reasonable diligence would not have known) of the violation.

Information Security Risks
There are three types of information security risks: confidentiality, integrity, and availability. Confidentiality refers to the need to keep information secure and private. Integrity refers to information remaining unaltered by unauthorized users. Availability means making information and services available for use when necessary. According to NIST SP 800-58, there are many places in a network for intruders to attack. Intrusions may also occur when a VoIP telephone is restarted or added to the network. The vulnerabilities described by NIST in their report on VoIP technologies are generic and may not apply to all systems, but investigations by NIST and other organizations have found these vulnerabilities in many VoIP systems (Kuhn, Walsh, & Fries, 2005).

HIPAA Compliance Checklist
A HIPAA compliance checklist, specific to VoIP videoconferencing used between patients and therapists to provide TR therapy, is included so that therapists and health care facilities can take any VoIP software system they are thinking of using and determine whether it meets basic privacy and security provisions. Every potential user (therapist or health care facility) should review the privacy and security policies found on the VoIP software system's website to determine whether they answer the questions listed in this checklist. If a question is not addressed in the policy, the user may want to contact the software company and ask how it will address that question. The user can then determine whether the unanswered questions outweigh the benefits of using a VoIP videoconferencing system to provide TR therapy to their patients. Each item below is answered Yes, No, or Not included in policy.

PRIVACY
Personal Information
• Will employees and other users of the VoIP software be able to listen in to video-therapy calls between patient and therapist?
• Will the video-therapy content of sessions between the therapist and patient be accessible to individuals within the software organization (employees) and outside of it (other users/consumers)?
• Will video-therapy content be shared further to protect the company's legal requirements or interests, enforce policies, or protect anyone's rights, property, or safety?
• Will video-therapy content be shared with distributors of the software or with analytical services, banking organizations, etc.?
• Will the VoIP software company provide the user 30-60 days to comply with a new privacy policy, if it has changed?
• Will the user be able to amend personal information within a reasonable period of time and upon verification of their identity?
• Can a user's contact see that they are online and choose to send them an email during a video conferencing session?

RETENTION OF PERSONAL INFORMATION
• Are video conferencing sessions for TR therapy services recorded?
• Will video conferencing TR therapy sessions be retained, and for how long?
• How long will other personal information be retained and what will this include?
• If a patient requests that past information be deleted, does the privacy policy state how this will occur?
• Is the level of access (management) of the TR videoconferencing recording up to the user?
• Does the user get the option of archiving their records offline on storage network devices?

Voicemail:
• Will voicemail for another VoIP user be transferred to a third party service provider?
• If a third party service provider is used to convert and analyze the voicemail, is the background and training of the third party provided?
• Does the background include training related to privacy and confidentiality issues related to HIPAA and other privacy statutes?

Requests for Information from Legal Authorities etc.
• Will personal information, communications content, and/or traffic data when requested by legal authorities be provided by the VoIP software company?
• Is information on the educational backgrounds and experience of employees working at the VoIP software company who will decipher these requests provided?
• Will a qualified individual who is a Registered Health Information Administrator (RHIA) with privacy, confidentiality, and HIPAA compliance experience analyze these requests?
• Will a complete and accurate consent to patient disclosure be made?
• Will appropriate processing of the personal data that is necessary to meet a valid request be made?
• Will a subpoena or court order be requested from law enforcement and government officials requesting personal information?
• Will an accounting of disclosures be made and provided to the user?
• Are patients able to request a restriction of uses and disclosures?

Sharing of Personal Information in Other Countries
• Will a transfer of personal information outside of your country to a third party be made by the VoIP software company?
• Will the use of any VoIP products automatically consent to the transfer of personal information outside of your country?
• Since privacy and confidentiality regulations change across different countries, how will different countries maintain personal health related data and video?
• Will other countries who may not abide by the HIPAA requirements have the opportunity to release personal information more easily and without regard for legal requirements?
• Should personal information that is acquired during video conferencing be transferred to a third party that the software company may buy or sell as part of its business agreements?
• Should the patient have the right to consent to this transfer of personal information?
• If the patient consents, with how many different countries will their personal information be shared when participating in TR video conferencing therapy?

Linkage to Other Websites:
• Will the VoIP software contain links to other websites that may have a different privacy policy than their policy?
• Does the VoIP software company accept responsibility or liability for these other websites?
• Is the VoIP considered a business associate with the tele-therapy site being the covered entity?
• Will the covered entity need to have business associate agreements with each of the other websites in which personal information may travel?
• Will the other websites need to comply with privacy and security (HIPAA) requirements on their own?
• How will the VoIP software company handle privacy and security protections under the HITECH amendment of HIPAA rules?
Encryption:
• Are voice, video, and instant message conversations encrypted with strong encryption algorithms that are secure and private during transmission?
• Does the encryption protect video TR therapy sessions from potential eavesdropping by third parties during transmission?
• Does the description of the encryption implementation contain specific information explaining what it entails?
• Could third parties decode a recorded VoIP video and voice conversation by accessing the encryption keys?

Anti-Spyware and Anti-Virus Protection:
• Is it the user's responsibility to make sure that appropriate anti-virus and anti-spyware protection is on their computer in order to prevent eavesdropping during videoconferencing TR sessions?
• How secure are videoconferencing TR sessions, and how much personal health information may be transmitted to other authorities?
• Are patients informed of the security issues, and is this included in their informed consent?

User's Public Profile:
• Is it optional for the user to enter information into their public profile?
• Is the user required to enter any information into the public profile?
• If the public profile information can be seen by other users, can the user determine which information can be seen by whom?
• Is the public profile separated into the following three categories?
1. Information that everyone can see.
2. Information for only the user's contacts to see.
3. Information for no one to see.
• Is the user's email address encrypted so that no one can see it when looking at the profile?
• Are there instructions on how users can update and change the profile information?

VoIP Risks and Recommendations:
The risks, threats, and vulnerabilities related to VoIP, as explained by NIST, are described below, with recommendations on how they can be reduced or eliminated. This list is not exhaustive, as some VoIP systems may have privacy and security risks that are not included below. However, it does provide information as to where a risk may occur, the level of risk, and a recommendation on how to prevent the risk from occurring (Kuhn, Walsh, & Fries, 2005).

Allowing, Removing, and Blocking Callers:
• Does the VoIP software system allow the user to determine whether they want to contact a person in their contact list?
• Are contacts easily removed by the user?
• Can the user remove or revoke authorization by blocking another user on each computer that is used?
• Does the VoIP software system provide instructions on how to block a user?

Audit System Activity:
• Are server logs generated to provide a record of the compliance settings that the user developed?
• Do the logs also provide an audit trail to track who had access to TR videoconferencing sessions and which functions were enabled or disabled for the session?

Security Evaluation:
• Has a security evaluation of the VoIP software system been performed by an independent group?
• Does the security evaluation include authentication, password management, data management, etc., and verify that the software system implements proper security measures?

Overall Recommendations:
Whatever software application is chosen for TR videoconferencing therapy, each therapist and health care entity should consider implementing the following recommendations before its use:
• Form a team of health and legal professionals that will examine VoIP software systems to determine whether they meet federal (HIPAA), state, local, and facility-wide privacy and security regulations.
Since VoIP software systems can change frequently, a team of professionals is needed to stay up to date on those changes. Federal and state policies also change frequently, so again the team must ensure that someone is on top of these changes. The team may consist of the health care facility attorney, risk management personnel, a health information administrator/privacy officer, a security officer (IT), and representative therapists (e.g., an occupational therapist, physical therapist, and speech-language pathologist).
• Educate and train therapists and other rehabilitation personnel who use TR software applications for video conferencing on all aspects of privacy and security issues related to video conferencing as well as to the exchange of other PHI. Awareness training on all aspects of the HIPAA security rules relevant to TR and software use, spyware, password security, and encryption should be emphasized in relation to video conferencing. Education and training should emphasize what therapists should look for when considering the use of certain software applications for video therapy, in relation to privacy and security as well as quality and reliability. Many times the privacy and security of a system are overlooked because of how well it can provide a TR service.
• Develop an informed consent form for patients to sign that explains the TR therapy that will be provided, how the VoIP technology software will be used and why, the benefits of TR and of video conferencing communication, and the risks related to privacy and security. Have the team attorney review the informed consent to make sure it meets all federal (HIPAA), state, and local regulations.
• Incident response is necessary and should include documentation regarding the incident, the response to the incident, any effects of the incident, and the policies and procedures that were followed in response to the incident. If policies and procedures are not in place for incident response, they should be developed with the security and privacy officers.
• Use the HIPAA compliance checklist and compare it to the VoIP technology software's privacy and security policies, or purchase HIPAA compliance software specific to VoIP that will walk you through each piece of the HIPAA legislation to make certain the software is private and secure.
• Consider the future of using VoIP technology software if the HIPAA regulations change to include such vendors as business associates, or if stronger recommendations are made for VoIP software technology, since the DHHS is also looking more closely at entities that are not covered by HIPAA rules to better understand how they handle PHI and to determine whether additional privacy and security protections are needed for these entities.
• Follow all applicable security safeguards when using VoIP, such as those recommended by NIST (Kuhn, Walsh, & Fries, 2005) and Garfinkel (2005). These include not using the username and password for anything else but video conferencing, changing it frequently, and not making it easy to identify; keeping the computer used for video conferencing free of viruses; never using VoIP for emergency services; and consistently authenticating who you are communicating with, especially during tele-therapy video sessions.
• Provide audit controls for software applications so that their use is secure and private.
Focus on the transmission of data through videoconferencing, how that data is kept private and secure during telecommunication, and how privately and securely it is stored and released to internal and outside entities.
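The encryption questions above concern protection of data in transit. As a minimal illustration of how a facility's IT staff might verify part of this (namely, which TLS protocol version and cipher suite a provider's endpoint negotiates, and who issued its certificate), the following Python sketch uses only the standard library. The host name is hypothetical, and a check of this kind covers only the web/signaling endpoint, not the VoIP media stream itself.

import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> None:
    """Connect to a server and report the negotiated TLS version,
    cipher suite, and certificate subject/issuer/expiry."""
    context = ssl.create_default_context()  # verifies certificates by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("TLS version:", tls.version())   # e.g., 'TLSv1.3'
            print("Cipher suite:", tls.cipher())   # (name, protocol, bits)
            cert = tls.getpeercert()
            print("Subject:", dict(x[0] for x in cert["subject"]))
            print("Issuer:", dict(x[0] for x in cert["issuer"]))
            print("Expires:", cert["notAfter"])

# Example (hypothetical host):
# inspect_tls("voip.example-provider.com", 443)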
Fitness Promotion in a Jump Rope-Based Homework Intervention for Middle School Students: A Randomized Controlled Trial

Physical activity (PA) homework offers a promising approach for students to be physically active after school. The current study aims to provide holistic insights into PA homework design and the effects of implementation in practice. In total, ninety-three middle school students were randomly assigned to a homework group (HG) or control group (CG). Participants in HG (n = 47) were requested to complete jump rope homework three times per week for 12 weeks, while their counterparts in CG attended one health education class every week. A homework sheet was used to provide instructions and record information on exercise behaviors during homework completion. Physical fitness tests were conducted to investigate the effects of the jump rope homework on the physical fitness of middle school students. After the intervention, participants in HG reported moderate to vigorous PA during jump rope exercise. The average duration of each practice was approximately 48 min. The returned homework sheets accounted for 86.88% of all homework assignments, indicating a good completion rate. Compared with their counterparts in CG, participants performing jump rope exercise showed greater improvement in speed, endurance, power, and core muscular endurance. Jump rope homework strengthened physical fitness for middle school students, providing a valuable addition to comprehensive school PA practice.

INTRODUCTION

Inadequate physical activity (PA) increases the risks of chronic diseases (i.e., obesity and cardiovascular disease) among children and adolescents, which has raised global public health concerns (Hills et al., 2011; Landry and Driscoll, 2012; Gupta et al., 2013). A recent study reported that over 80% of adolescents worldwide failed to meet the daily moderate-to-vigorous physical activity (MVPA) recommendations (Guthold et al., 2020; World Health Organization [WHO], 2020). Regular PA has been considered necessary in weight control and obesity-related disease prevention (Poitras et al., 2016; Brown et al., 2019). A physically active lifestyle in childhood and adolescence often continues into adulthood and improves overall wellbeing at an older age (Landry and Driscoll, 2012; McPhee et al., 2016).

In addition to the health-related considerations, the development of motor skills provides another reason for PA engagement during childhood and adolescence. The neuromuscular system is highly plastic during critical periods, which implies an optimal time for developing motor skills (Gabbard, 2012). Missing the critical period for a specific function makes it difficult to reach full potential in adult life (Salkind, 2002; Gale et al., 2004). Therefore, the developmental perspective also highlights the importance of achieving adequate PA levels at a young age.

School plays a critical role in influencing students' daily PA level because of accessible resources for PA participation, such as sports facilities and competent physical educators (Pate et al., 2000; Trost et al., 2008). In addition, a large number of children and adolescents spend most of their daytime at school (Ha et al., 2015; Baumgartner et al., 2020). Therefore, the school has been considered an ideal setting for PA intervention (Story et al., 2009).
However, due to the increasing demand for academic achievement, PA levels tend to decline as students age, which poses challenges in school health practice (Pate et al., 2002; Dumith et al., 2011; Rauner et al., 2015). As an extension of school-based PE programs, PA homework addresses the already limited and still-shrinking time for PA participation at school (Mitchell et al., 2000a). Time spent after school is characterized by a high level of sedentary behavior (Trost et al., 2008). Making good use of this time period has the potential to increase daily PA for children and adolescents (Naylor et al., 2008; Duncan et al., 2011, 2019). Indeed, after-school PA programs have proven effective in decreasing the risk of obesity (Martínez Vizcaíno et al., 2008), developing motor and cognitive functions (Kamijo et al., 2011), improving academic performance (Durlak et al., 2010), and leading to physically active lifestyles.

Jump rope is a whole-body movement that allows participants to engage in MVPA. The average metabolic equivalent (MET) was reported to reach 11.7 and 12.5 during 5 min of rope skipping at rates of 125 reps/min and 145 reps/min, respectively (Ainsworth et al., 2000). Consistent findings were identified in another study using the OMNI perceived exertion scale and heart rate as measures of exercise intensity. The OMNI scale is a category rating format that contains both pictorial and verbal descriptors positioned along a numerical response range of 0-10 (Robertson et al., 2000). Children (aged 10.6 ± 0.9 years) reported an average score of 6.4 and a corresponding heart rate of 180 bpm when skipping at a rate of 140 reps/min. The results suggest moderate to high physical exertion during jump rope exercise (Buchheit et al., 2014).

The specific advantages of jump rope make it a promising exercise modality within and beyond school settings. Jump rope is characterized by low requirements for physical space and equipment cost, which facilitate access to PA (Ha et al., 2014). Schools, particularly in Asian countries, are usually crowded with large numbers of students (Johns and Ha, 1999). A feasible solution to the space restrictions would be of great value in practice. Jump rope can be performed in limited space, which justifies its wide application in school-based PA (Hao et al., 2019; Baumgartner et al., 2020). Affordability is another advantage of jump rope, which addresses PA barriers related to low socioeconomic status (Kim et al., 2020; Yang et al., 2020). Students from lower socioeconomic areas face an increased risk of obesity (Stamatakis et al., 2010). Affordable equipment is regarded as a critical factor in encouraging PA and lowering relevant health risks in this population (Ha et al., 2015, 2017).

Motivation is a key factor in PA participation and adherence over time (Yang and Xu, 2014). Research has shown that participants are more motivated if exercise is perceived as fun (Kim et al., 2020). Jump rope is considered an enjoyable exercise modality for adolescents (Hernandez et al., 2009; Ha et al., 2014; Sung et al., 2019; Yang et al., 2020), which implies the feasible application of jump rope to PA homework. In the existing research, homework has mainly been delivered through non-active forms such as sports event attendance, written assignments (Mitchell et al., 2000b), and fitness concept learning (Jorgenson and George, 2001). The effects of active homework on physical fitness largely remain unknown.
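As a concrete illustration of the MET values cited above for rope skipping, the conventional approximation 1 MET ≈ 1 kcal per kg per hour can be used to estimate gross energy expenditure. The short Python sketch below applies this rule of thumb; the 50 kg body mass is a hypothetical figure chosen for illustration and does not come from the studies cited.

def kcal_burned(met: float, body_mass_kg: float, minutes: float) -> float:
    """Approximate gross energy expenditure: 1 MET ~= 1 kcal/kg/h."""
    return met * body_mass_kg * (minutes / 60.0)

# Rope skipping at ~125 reps/min (11.7 METs) for a hypothetical 50 kg adolescent:
print(kcal_burned(11.7, 50, 45))  # ~438.75 kcal in a 45-min session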
Claxton and Wells (2009) conducted a 12-week PA homework program among college students. Based on self-reported PA levels, the study indicated that homework could be an effective method of increasing the PA of college students. The study design can be further improved in two aspects. First, the subjective survey can be replaced by an objective assessment. Second, while homework has proven effective in PA promotion, further investigations can focus on the influence of homework on physical fitness. The current study aims to provide holistic insights into PA homework design and the effects of implementation in practice. It is our interest to investigate whether homework assigned in the form of jump rope could improve physical fitness for middle school students.

Study Design and Recruitment

A two-arm parallel-group RCT was conducted following the Consolidated Standards of Reporting Trials (CONSORT) (Boutron et al., 2008). The study consisted of the following four phases: recruitment, pre-test, intervention, and post-test. Recruitment was conducted in the first 2 weeks of the spring semester (March) in 2021. All the participants were recruited from a middle school in Qingdao, China. Research assistants answered questions from the students and parents to ensure that the research information was fully understood. Eligible participants had to meet the following criteria: (1) participants and their parents signed the informed consent forms; (2) participants were not student athletes; (3) participants had no recent injury that impaired motor performance; and (4) participants did not attend any other after-school PA programs during the study. The eligible participants were randomly assigned to either a homework group (HG) or a control group (CG). All the participants completed the pre-test in the 3rd week of the semester. The 12-week intervention was then conducted from weeks 4 to 15. Different tasks were assigned to HG and CG during the intervention. Participants in HG completed jump rope homework, while their counterparts in CG attended health education classes. The post-test was conducted in the 16th week of the semester (July).

The recruitment initially identified 116 students who expressed their willingness to participate in the study. The screening process excluded 23 students because of regular participation in after-school PA programs (N = 12), status as student athletes (N = 10), and recent injury (N = 1). Therefore, 93 eligible participants (men = 46, women = 47) were randomly assigned to HG (N = 47, women = 24) or CG (N = 46, women = 23). A one-way independent ANOVA was conducted to compare HG with CG at baseline. No significant between-group differences were identified in age, body mass index (BMI), or measures of physical fitness (Table 1). Figure 1 displays the enrollment and allocation processes. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Qingdao University.

Randomization and Blinding

Participants were assigned a computer-generated number before allocation. Another set of numbers was randomly generated by the computer to identify the participants allocated to HG. Allocation was carried out by an external researcher for the sake of concealment. Researchers were blinded because there was no direct interaction between researchers and participants. While the participants in HG completed the assigned homework, those in CG attended health classes led by a hired instructor who was not involved in any part of the research.
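The allocation procedure described above (computer-generated numbers with a random subset identifying HG) can be sketched in a few lines. This is a minimal illustration of the general approach, not the authors' actual code; the fixed seed is an added assumption included only to make the illustration reproducible.

import random

def allocate(participant_ids: list[int], n_homework: int, seed: int = 2021) -> dict[int, str]:
    """Randomly assign participants to the homework group (HG) or control group (CG)."""
    rng = random.Random(seed)                      # fixed seed for reproducibility
    hg = set(rng.sample(participant_ids, n_homework))
    return {pid: ("HG" if pid in hg else "CG") for pid in participant_ids}

# 93 eligible participants, 47 allocated to HG as in the study:
allocation = allocate(list(range(1, 94)), n_homework=47)
print(sum(1 for g in allocation.values() if g == "HG"))  # 47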
It is worth noting that participant blinding may have been somewhat compromised in the current study. Participants are blinded only if they are unable to distinguish between the treatments applied to different groups (Maher et al., 2003). Because all the participants were at the same school, the current design could not exclude the possibility that participants of HG and CG might become aware of the different treatments during daily communication.

Intervention

The participants in CG attended health classes every Friday from 5:10 to 5:50 p.m. The class was instructed by a specialist in health education. Knowledge on nutrition, exercise, and a healthy lifestyle was provided. On the other hand, participants in HG completed a 45-min jump rope exercise every Monday, Wednesday, and Friday during after-school hours (after 5 p.m.). In total, eight fundamental skipping drills were selected, including basic hop, alternate foot step, scissors, front-and-back jumps, side-to-side jumps, "jumping jack," single leg jumps, and double under. The drills were introduced and practiced in the first 15 min of each PE class as warm-up activities. Because all the students had experience of learning and practicing jump rope in elementary school, the skipping drills were familiar to the participants.

To facilitate homework completion, we designed a homework sheet (Figure 2) that listed the workout plan with detailed information on the drills for practice, skipping rate, number of sets, and break time. The sheet also had blanks for participants to report the date of exercise, start time, practice duration, and rating of perceived exertion (RPE). The participants were asked to turn in the completed homework sheet and received a new one from research assistants. The homework was recommended rather than required, as participants and their parents were informed that homework completion did not influence their final grade in physical education.

Trusting students to complete their PA homework honestly is an issue faced by PE teachers and researchers (Gabbei and Hamrick, 2001). The current study adopted strategies to hold students accountable for the completion of homework. Participants needed to turn in the completed homework sheet to their PE teachers in exchange for a new one for the next practice. To enhance parental involvement, we asked parents to sign the sheet as verification that the participants had completed the practice as suggested (Gabbei and Hamrick, 2001; Hart, 2001). The homework sheet was valid only if it was completely filled out and signed by parents.

Descriptive Statistics of Homework Completion

Descriptive statistics were collected to reflect homework completion. The percentage of valid homework sheets received from the participants indicated the rate of homework completion. Start time and duration provided information on homework implementation. RPE has been shown to be a valid and convenient instrument for quantifying the intensity of exercise and training (Foster et al., 2001; Impellizzeri et al., 2004). The Borg Category-Ratio (CR) scale was used to measure the perceived exertion of exercise (Shariat et al., 2018). Participants reported a score between 0 (no effort at all) and 10 (maximal effort) to reflect physical exertion during the jump rope homework.

Measures of Physical Fitness

Physical fitness was assessed by five tests on speed, flexibility, core muscular endurance, explosive power, and endurance. Speed was measured by a 50 m sprint.
Time was automatically recorded by an infrared system with a precision of 0.01 s (Model: CSTF-FH, Tongfang Co., Ltd., China). Students started in a standing position. The better performance of two trials was recorded. Flexibility was assessed by a sit-and-reach test. Participants slowly reached as far as possible with one hand on top of the other, keeping their knees straight on the floor. Performance was measured by an electronic device with a precision of 0.1 cm (Model: CSTF-YW, Tongfang Co., Ltd., China). Participants conducted two trials in the sit-and-reach test, and the longer distance was used as the measure of flexibility performance. Explosive power was assessed by the broad jump. The electronic device (Model: CSTF-TY, Tongfang Co., Ltd., China) automatically recorded the jump distance with a precision of 0.1 cm. Two attempts were allowed in the test, and the better performance was used for data analysis. Sit-ups have been shown to be effective for measuring core muscular endurance for both males and females (Bianco et al., 2015). Participants lay on a cushion with their knees bent at approximately right angles, placed one hand over the other on the chest, and raised their body toward their knees. Successful repetitions in 1 min were counted by trained research assistants. Endurance was assessed by an 800 m run on a standard 400 m track. Research assistants used stopwatches to record the times of participants completing the test. Participants performed only once in both the sit-up test and the 800 m run.

Statistical Analysis

Data analysis was performed in two steps. First, one-way repeated measures ANOVA was used for pre- and post-test comparisons within each individual group. The independent variable consisted of two time intervals: pre- and post-test. Dependent variables were the outcomes of the fitness tests on speed, flexibility, core muscular endurance, explosive power, and endurance. Second, a 2 × 2 repeated measures multivariate analysis of variance (MANOVA) was conducted to investigate the effects of jump rope homework on physical fitness, with group (HG and CG) as a between-group factor and time (pre-test and post-test) as a within-group factor. Outliers were defined as values more than 3 SDs from the mean. The Shapiro-Wilk test was performed to verify the normality assumption. The homogeneity of variance assumption was checked by Levene's test of equality of error variances. The effect size was calculated by partial eta squared (η²), with values of 0.01, 0.06, and 0.14 defining small, moderate, and large effects (Cohen, 1988). Statistical significance was defined by a cutoff point of 0.05. All statistical analyses were conducted with SPSS 25.

RESULTS

The 47 participants performing jump rope homework were expected to submit 1,692 homework sheets across 36 sessions throughout the 12-week intervention. A total of 1,470 valid homework sheets were received by the end of the intervention, which accounted for 86.88% of all homework assignments. The distribution of start times showed that the largest share of practices (19.09%) began between 8:00 p.m. and 8:30 p.m. Over 70% of practices began after 7:00 p.m., indicating that a majority of participants were available for PA during this time period (Figure 3). Jump rope homework took 47.98 (SD = 6.87) minutes on average, which was longer than the scheduled 45-min practice (t(46) = 2.97, p = 0.01). Participants reported a mean RPE score of 5.28 (SD = 1.95), suggesting a moderate to vigorous level of PA associated with jump rope homework.
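The completion rate and the duration comparison reported above can be reproduced from the summary statistics given in the text. The sketch below verifies both figures with plain-library Python; the one-sample t helper assumes raw per-participant values would be supplied in a real analysis.

from statistics import mean, stdev
import math

# Completion rate reported in the study: 1,470 valid sheets out of 1,692 assigned.
print(round(1470 / 1692 * 100, 2))  # 86.88

def one_sample_t(x: list, mu0: float) -> float:
    """t statistic for H0: mean(x) == mu0 (df = len(x) - 1)."""
    n = len(x)
    return (mean(x) - mu0) / (stdev(x) / math.sqrt(n))

# With n = 47 participant means, mean = 47.98 and SD = 6.87, the statistic is
# (47.98 - 45) / (6.87 / sqrt(47)) ~= 2.97, matching the reported t(46) = 2.97.
print(round((47.98 - 45) / (6.87 / math.sqrt(47)), 2))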
DISCUSSION

The current study implemented a 12-week PA homework intervention by means of jump rope exercise. Participants reported engaging in MVPA during homework completion. The participants may have been undergoing a growth spurt during adolescence, as evidenced by the improved physical fitness in CG. It is worth pointing out that participants in HG showed greater improvement than their counterparts in CG. The repeated measures MANOVA indicated significant interaction effects on speed, endurance, power, and core muscular endurance, suggesting benefits of jump rope exercise beyond pubertal growth. The findings also reflect the integration of multiple fitness elements in jump rope exercise (Pangrazi and Beighle, 2010; Trecroci et al., 2015). Concurrent development in agility, coordination, balance, and reaction can be achieved by means of variations in rope swing, skipping drills, movement directions, and stepping rhythms (Orhan, 2013; Partavi, 2013; Eler and Acar, 2018; Yang et al., 2020). In addition, the quick stretch-shortening cycle contractions during repetitive jumps reflect the characteristics of plyometric training, which has proven effective in improving speed and jump performance (Miyaguchi et al., 2014; García-Pinillos et al., 2020). It is worth noting the limited effect of jump rope exercise on flexibility. A practical implication is that stretching practice is needed after jump rope exercise for flexibility improvement.

Over the past decade, the Comprehensive School Physical Activity Program (CSPAP) has been widely accepted and applied in school health practice. The idea of CSPAP is to provide students with adequate PA opportunities by means of a multicomponent approach before, during, and after school hours (Pulling Kuhn et al., 2021). Quality PE is the foundation of the program, along with the other four components, namely, PA during school, PA before and after school, family and community engagement, and staff involvement (Erwin et al., 2013). However, research has shown a limited effect of school-based PA interventions on the physical fitness and health behaviors of students. Love et al. (2019) conducted a meta-analysis of 17 studies; the evidence showed that school-based PA programs did not positively impact students' physical activity. Therefore, implementing PA homework during after-school hours is an important addition to school PE.

Yang et al. (2020) conducted a jump rope-based intervention program during after-school hours (5:00 p.m. to 6:00 p.m.). Jump rope classes were guided by instructors and conducted in a school gym. Significant improvement in muscular strength, body composition, and bone mineral density was identified in the jump rope group compared with the control group. In the current study, participants conducted the jump rope homework in a self-regulated approach. Although the amount of PA can be warranted in a well-organized, instructor-led jump rope class, implementing such an intervention program imposes a high level of demand on resources within the school (i.e., quality instructors and well-equipped facilities). Jump rope homework, on the other hand, does not require particular facilities and resources for PA participation, thus indicating prominent potential for wide application.

The current study explored strategies to implement an effective PA homework program.
Mitchell et al. (2000b) summarized four essential factors for successful homework: relevance of homework to class content, understanding of homework, parent support, and student accountability. In fact, the jump rope homework was designed and implemented in compliance with the factors proposed in the previous study. To ensure all participants in HG could complete the homework, PE teachers used the first 15 min of each class to teach the skipping drills assigned as homework. Jump rope is considered ideal for warm-up activities because it is more active and dynamic than traditional routines of stretching and jogging (Chu, 1998). By integrating jump rope into the warm-up section of PE classes, we established a connection between homework and class content. Printed learning materials have proven to be a useful instrument in administering homework (Weston et al., 1997; Gabbei and Hamrick, 2001). In the current study, the homework sheet provided explicit instructions on the drills, skipping rate, number of sets, and break time. The importance of clear task descriptions for homework completion has been noted previously (Gabbei and Hamrick, 2001). The instructions provided in the homework sheet facilitated participants' understanding of the practice. Parental involvement is another determinant of homework completion (Smith and Claxton, 2003). We asked parents to verify children's performance and effort by signing the homework sheet. The signature involved parents with the responsibility of assistance and supervision, which has been considered a useful strategy to hold students accountable for homework completion (Hart, 2001).

It is also important to stress the positive role of jump rope in facilitating access to PA after school. Evidence shows that over 70% of practices began after 7 p.m. Because the students usually left school at 5 p.m., the commute, dinner, and homework for other subjects likely took up the intervening time. In fact, choosing jump rope as the homework content was based on a series of practical considerations. When designing the homework intervention, we realized the likelihood that students might do the homework at night. Compared with popular sports such as soccer and basketball, jump rope is suitable for practice at night. To organize a team sport, specific requirements regarding the court, lighting, and the number of participants need to be met. Homework based on a team sport might not be completed if any of these conditions was not satisfied. Jump rope addresses restrictions in time and space. Such advantages in organization and implementation enabled participants to exercise whenever they were available. The percentage of valid homework sheets suggests that jump rope is feasible for PA homework.

To our knowledge, this is the first study applying a randomized controlled design to investigate the effects of active homework on physical fitness for middle school students. The empirical evidence contributes to an in-depth understanding of PA homework design and implementation. However, the limitations of the current study must be clarified. The PA homework enhanced the physical fitness of the participants, but the long-term effects of the intervention remain unclear. A follow-up test would be helpful to examine the sustainability of the program, which should be addressed in future studies. Another concern with the current study design lies in the lack of control over participants' exercise behaviors throughout the study.
Attending the health education classes could have led the students of CG to increase their PA level, which might eventually affect the differences between groups in physical fitness. In addition, because all participants were recruited from the same school, it is possible that participants in CG might actually have taken part in the intervention with their friends assigned to HG. The parental role in homework completion should be considered as well. Although previous research assumed a positive role of parents in stimulating students' PA participation (Gabbei and Hamrick, 2001; Hart, 2001), it is necessary to raise awareness of the situation in which parents and participants in HG could easily fake good compliance. These limitations in study design could lead to contamination of the results. In the current study, statistical analyses indicated significant differences in the magnitude of improvement between groups. It is reasonable to assume that either participants in CG taking part in jump rope exercise or participants in HG faking good compliance with the PA homework would decrease the between-group differences in physical fitness. The significant difference between HG and CG therefore suggests a limited impact of those behaviors, if any, on the findings. The lack of effective methods for evaluating participants' exercise behaviors at home is a major challenge in both research and teaching practice, which needs to be addressed in subsequent research to ensure successful completion of PA homework.

CONCLUSION

The current study investigated the effects of a 12-week jump rope homework intervention on the physical fitness of middle school students. Participants reported MVPA in the jump rope exercise and showed a good completion rate for the homework. Jump rope exercise induced significant improvements in speed, endurance, power, and core muscular endurance. Further comparisons between groups indicated greater improvement in speed, endurance, power, and core muscular endurance for participants in HG, indicating additional jump rope benefits beyond pubertal growth. The promising findings on exercise behaviors and physical fitness lead to the conclusion that PA homework based on jump rope exercise is effective in enhancing physical fitness for middle school students.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Qingdao University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

FH and YH: conceptualization and writing-original draft preparation. FH and QF: methodology. YS and YZ: validation. FH and YS: formal analysis. QF, YZ, and YS: writing-review and editing. YZ: visualization. YH and QF: supervision. YH: project administration. All authors collaborated in preparing the manuscript.
Emerging Contaminants in Streams of Doce River Watershed, Minas Gerais, Brazil

This study investigated the occurrence and risk assessment of ten pharmaceutical products and two herbicides in the water of rivers from the Doce river watershed (Brazil). Of the 12 chemicals studied, ten (acyclovir, amoxicillin, ciprofloxacin, enrofloxacin, fluoxetine, erythromycin, sulfadiazine, sulfamethoxazole, glyphosate and aminomethylphosphonic acid) had a 100% detection rate. In general, total concentrations of all target drugs ranged from 4.6 to 14.5 μg L−1, with fluoroquinolones and sulfonamides being the most representative classes of pharmaceutical products. Herbicides were found at concentrations at least ten times higher than those of the individual pharmaceutical products and represented the major class of contaminants in the samples. Most of the contaminants studied were above concentrations that pose an ecotoxicological risk to aquatic biota. Urban wastewater is likely the main source of contaminants in the waterbodies. Our results show that, in addition to the study of metals in water (currently being conducted after the Fundão dam breach), there is an urgent need to monitor emerging contaminants in the waters of the Doce river watershed rivers, as some chemicals pose environmental risks to aquatic life and humans due to the use of surface water for drinking and domestic purposes by the local population. Special attention should be given to glyphosate, aminomethylphosphonic acid, and to ciprofloxacin and enrofloxacin (whose concentrations are above predicted levels that induce resistance selection).

INTRODUCTION

On the 5th of November 2015, one of the biggest environmental tragedies in the world occurred in the municipality of Mariana, MG (Brazil): the collapse of the Fundão dam, belonging to Samarco (a joint venture of Brazilian Vale and Anglo-Australian BHP Billiton), released about 50 million m³ of mining waste into the environment (Porto, 2016). The disaster, classified as very large and sudden (due to the severity of the negative impacts caused), directly affected about 663.2 km of one of the most important Brazilian rivers (the Doce river), which stretches between the states of Minas Gerais and Espírito Santo (IBAMA, 2015). Among the environmental impacts caused by the wave of tailings, the destruction of permanently protected areas and native vegetation of the Atlantic Forest and, above all, the impact on aquatic ecosystems should be highlighted. The spoil from the Fundão dam flooded the district of Bento Rodrigues; the tailings flow, however, was dammed by the Risoleta Neves hydroelectric power plant, and this was practically the only area of floodplain affected by the disaster. The material deposited in the area was considered an ecological time bomb due to its potential to release metals into the environment, including the water, although this remains controversial in the literature (Queiroz et al., 2018). After the disaster, monitoring the water quality of the Doce river became a priority in order to track the potential impact of the disaster on the aquatic environment. Such monitoring is also important because the Doce river water is used to supply several cities in the states of Minas Gerais and Espírito Santo. However, the main focus of these studies has been to evaluate the metal concentrations in the water.
As far as we know, no studies have evaluated organic contamination of the water by emerging contaminants such as pesticides and personal care and pharmaceutical products. In addition to mining activities, the Doce river watershed experiences continuous discharges of untreated wastewater, as well as contamination from agriculture (e.g., fertilizers and pesticides) and inadequate disposal of municipal waste (ANA, 2015), which are inevitably reflected in the presence of these emerging contaminants in the water. Once in the aquatic environment, drug and pesticide residues can pose environmental risks by affecting aquatic organisms and, in the case of antibiotics, by promoting the spread of antibiotic-resistance genes (Gomes et al., 2017; Gomes et al., 2019; Mendes et al., 2021). In addition, the use of contaminated water for crop irrigation can lead to the accumulation of pesticides and pharmaceuticals in crops and their uptake into the food web (Gomes et al., 2019; Gomes et al., 2020b). Here, we tracked concentrations of pharmaceuticals and pesticides in water from rivers in the Doce river watershed from 2018 to 2019. We wanted to draw attention to the need to focus water investigations on the presence of novel contaminants that may affect water and environmental safety, in addition to metals.

Study Area

The study area includes the Doce river drainage watershed in the state of Minas Gerais (Brazil) (Figure 1). The region has approximately 199,000 inhabitants, mainly located in the urban areas of the cities of Ouro Preto (74,558 inhabitants) and Mariana (61,228 inhabitants) (IBGE, 2020). Samples were collected from four sites with pronounced human activities in the vicinity of the collecting points (Supplementary Tables S1, S2). In the Carmo River, samples were collected in the Mariana downtown area, typically characterized by urban discharges (Figure 1, site 1), and near the small town of Acaiaca (3,994 inhabitants) (Figure 1, site 2), which is surrounded by agricultural fields devoted mainly to extensive livestock farming, along with eucalyptus plantations. In the Gualaxo do Norte River, the samples were collected in an area surrounded by agricultural fields (mainly arable); this stretch also receives effluents from the SAMARCO iron ore mine (Figure 1, site 3). Finally, samples from the Doce river were collected near the Risoleta Neves dam (Figure 1, site 4). The Doce river is formed by the confluence of the Piranga and Carmo rivers and also receives urban runoff from the city of Ponte Nova via the Piranga river (Figure 1; Supplementary Tables S1, S2).

Selection of the Studied Chemicals

Pharmaceuticals (acyclovir, amoxicillin, azithromycin, ciprofloxacin, doxycycline, enrofloxacin, fluoxetine, erythromycin, sulfadiazine and sulfamethoxazole) were selected based on their abundance in surface waters worldwide (Grill et al., 2016; Bertram et al., 2018; Beatriz et al., 2020; Gupta et al., 2021). Antibiotics such as amoxicillin (a β-lactam), azithromycin and erythromycin (macrolides), ciprofloxacin and enrofloxacin (fluoroquinolones), doxycycline (a tetracycline), and sulfamethoxazole and sulfadiazine (sulfonamides) are among those most commonly used in human and animal treatment, in aquaculture, and as feed additives (Giang et al., 2015). Acyclovir is one of the most effective and widely used anti-herpes agents (Mucsi et al., 1992; Gupta et al., 2021), and fluoxetine is one of the most commonly prescribed antidepressants (Bertram et al., 2018).
Glyphosate, on the other hand, is the most commonly used herbicide in the world (Gomes et al., 2014) and is frequently used in the fields surrounding the sampling sites. In turn, aminomethylphosphonic acid (AMPA) is the major metabolite of glyphosate, formed in the environment mainly through microbial degradation of the herbicide (Brock et al., 2019). In addition, organic phosphonates used in both industrial and domestic applications (detergents, flame retardants, corrosion inhibitors, anti-limescale agents, and the textile industry) are also sources of AMPA in aquatic ecosystems (Levine et al., 2015; Grandcoin et al., 2017).

Sampling Campaign and Preparation

Sampling was conducted in June 2018 (total precipitation from 0.2 to 0.6 mm/average flow rate 3.98 to 48.54 m³/s), November 2018 (total precipitation from 146.1 to 210.1 mm/average flow rate 9.73 to 118.26 m³/s), and April 2019 (total precipitation from 108.0 to 142.8 mm/average flow rate 5.95 to 102.75 m³/s) (Supplementary Tables S3, S4). All sampling equipment was thoroughly cleaned with 70% ethanol before fieldwork and then washed with deionized water. Three water samples (5,000 ml) were collected at 50 m intervals at each point. The surface water samples were collected in sterile amber glass bottles, stored on ice (4°C) until arrival at the laboratory, and then filtered through glass fiber membranes (0.45 µm, Millipore). Samples were then separated for evaluation of the drugs (acyclovir, amoxicillin, azithromycin, ciprofloxacin, doxycycline, enrofloxacin, fluoxetine, erythromycin, sulfamethoxazole, and sulfadiazine) and the herbicides [glyphosate and aminomethylphosphonic acid (AMPA)]. The pH of the samples was adjusted to 6.5 and 2.5 for the drug and herbicide analyses, respectively.

For the drug analyses, the filtered water samples (500 ml) were concentrated by solid-phase extraction (SPE) using a Visiprep™ SPE vacuum manifold (Sigma-Aldrich, Brazil) with 200 mg/3 ml Phenomenex Strata-X® cartridges (Torrance, California, United States). SPE conditions were the same as those described by Beatriz et al. (2020). Cartridges were conditioned with 4 ml methanol followed by 6 ml ultrapure water, and analytes were eluted in 4 ml methanol. For the herbicide evaluation, samples were concentrated using C18 cartridges (500 mg/6 ml; Applied Separations, United States) previously conditioned with 15 ml of acidified water (pH 2.5) and 5 ml of methanol (Mendes et al., 2021). The cartridges containing the samples were eluted with 3 ml of 50% methanol in water (v/v). For both the drugs and the herbicides, the eluate was dried in a SpeedVac device (RC1010, Thermo), and the residues were resuspended in the mobile phase (water and acetonitrile in a 50:50 v/v ratio with 0.1% formic acid and 5 µM ammonium formate for the drugs, and 5 mM ammonium acetate in water for the herbicides).

Chromatographic Analyses

Analyses were performed using an LC-MS/MS system consisting of a Xevo TQD triple quadrupole mass spectrometer (Waters) with an electrospray (ESI) ionisation source and an HPLC Varian SYS-LC-240-E with autosampler. Drugs were evaluated following Beatriz et al. (2020), while glyphosate and AMPA were evaluated following Gomes et al. (2015).
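Solid-phase extraction concentrates the analytes so that trace levels become measurable. As a minimal back-of-the-envelope sketch, the nominal enrichment implied by the volumes reported above for the drug analyses can be computed as follows; the instrument reading is hypothetical, and a real workflow would also correct for recovery (>87% here) against the calibration curves.

# Nominal SPE enrichment factor for the drug analyses: 500 ml of filtered
# water eluted into 4 ml of methanol, assuming full recovery.
enrichment = 500 / 4                      # = 125-fold
extract_conc_ng_per_ml = 50.0             # hypothetical instrument reading
water_conc_ng_per_l = extract_conc_ng_per_ml / enrichment * 1000
print(enrichment, water_conc_ng_per_l)    # 125.0, 400.0 ng per liter of river water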
For the drugs, chromatographic separations were performed using a 4.6 mm × 150 mm, 5 µm particle size Zorbax Eclipse XDB-C8 column (Agilent, Milford, United States) with water as phase A and acetonitrile/water (95:5 v/v) as phase B, both containing 0.1% formic acid and 5 mM ammonium formate. For the herbicides, chromatographic separations were performed using an Ascentis® C18 column (Sigma-Aldrich, Brazil) with a mobile phase consisting of 5 mM ammonium acetate in water (phase A) and 5 mM ammonium acetate in methanol (phase B), both at pH 7.0. Mass spectrometry analyses were performed in positive and negative ion modes for the drugs and herbicides, respectively. Acyclovir (ACY), amoxicillin (AMX), azithromycin (AZI), ciprofloxacin (CIP), doxycycline (DOX), enrofloxacin (ENR), fluoxetine (FLX), erythromycin (ERY), sulfadiazine (SDZ), sulfamethoxazole (SMX), glyphosate (GLY) and AMPA (Sigma-Aldrich, Canada) of analytical grade were used to construct the calibration curves. Standard stock solutions (1,000 μg ml−1) of these compounds were prepared using different compositions of methanol, water, and acetonitrile, with formic acid and ammonium formate, depending on solubility. The six-point calibration curves showed good linearity for the analytes (r² ≥ 0.95; p < 0.0001). Each sample batch included three blanks, three standards, and three fortified samples (for quality control). The recoveries for all compounds were greater than 87%. The limit of detection (LOD) and limit of quantification (LOQ) of each analyte are listed in Table 1.

Ecological Risk Assessment

The predicted no-effect concentration (PNEC) was estimated using the ecological structure-activity relationships (ECOSAR) model (Moore et al., 2003) and was calculated by dividing the no-observed-effect concentration (NOEC) found in the literature by an assessment factor (AF) of 1,000, which represents chronic toxicity (Ikem et al., 2017). The hazard quotient (HQ) was used to assess the environmental risk of the chemicals and their potential to cause adverse effects in the environment (Carlsson et al., 2006) and was calculated as follows:

HQ = MEC/PNEC

where PNEC is the predicted no-effect concentration (from the literature) and MEC is the measured environmental concentration. For MEC, the mean of the concentrations found over time (n = 9) for each collection site was used.

Statistical Analyses

Results were expressed as the average of three replicates. Statistical analyses were performed using JMP 10.0 software (SAS Institute Inc.). Results were subjected to normality (Shapiro-Wilk) and homogeneity (Bartlett) tests and then statistically analyzed. Univariate repeated measures ANOVA, with time as a within-subject factor and site as the main effect, was used to analyze differences in chemical concentrations during the sampling period. The sphericity of the data was tested using Mauchly's criteria to determine whether the univariate F-tests were valid for within-subject effects. If the F-tests were invalid, the Greenhouse-Geisser test was used to estimate epsilon (ε). Contrast analysis was used when there were significant differences in the variables examined.
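Since the risk-assessment arithmetic is simple, it can be expressed directly in code. The sketch below implements PNEC = NOEC/AF and HQ = MEC/PNEC, together with the four risk bands applied later in the Discussion (Rodriguez-Mozaz et al., 2020); the NOEC and MEC values in the example are hypothetical and are not measurements from this study.

def pnec(noec_ug_per_l: float, assessment_factor: float = 1000.0) -> float:
    """PNEC = NOEC / AF, with AF = 1,000 for chronic toxicity as in the text."""
    return noec_ug_per_l / assessment_factor

def hazard_quotient(mec: float, pnec_value: float) -> float:
    """HQ = MEC / PNEC; MEC is the mean measured concentration per site (n = 9)."""
    return mec / pnec_value

def risk_band(hq: float) -> str:
    """Risk bands as applied in the Discussion (Rodriguez-Mozaz et al., 2020)."""
    if hq < 0.01:
        return "no risk"
    if hq <= 0.1:
        return "low risk"
    if hq <= 1:
        return "medium risk"
    return "high risk"

# Hypothetical example: NOEC of 10 ug/L and a mean measured level of 0.5 ug/L.
p = pnec(10.0)                             # 0.01 ug/L
print(risk_band(hazard_quotient(0.5, p)))  # HQ = 50 -> 'high risk'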
Occurrence of Pharmaceutical Products in Surface Waters

With the exception of DOX, all surface water samples were contaminated with the tested drugs (Figure 2). Among the antibiotics, the highest concentrations were found for CIP (up to 4,854.6 ng L−1) and SMX (up to 9,640 ng L−1). The concentrations of DOX (up to 2.25 ng L−1) were lower than those of the other drugs, or DOX was not detected (Table 2). With the exception of ERY and FLX (p > 0.05), a significant interaction (p < 0.05) between time and site of sampling was observed for the drugs (Table 2). Regardless of the sampling date, the concentrations of ACY, AMX, CIP, SDZ, and SMX (except for the last sampling date) were higher, and the ENR concentration was lower, at site 1 than at the other sampling sites (Figure 2). High concentrations of ACY, AZI, CIP, SDZ and SMX were detected in the water samples from site 1 on the first sampling date, and lower concentrations of ACY, AMX, ENR, ERY, and SDZ were detected on the second sampling date compared with the other sampling dates (Figure 2). CIP concentrations decreased and ENR concentrations increased over time in the site 2 water samples (Figure 2). In addition, ERY, SDZ, and SMX concentrations were lower in the site 2 water samples at the second time point compared with the other sampling time points (Figure 2). With the exception of ENR, whose concentration decreased at the second sampling time point, and ACY, whose concentration increased at the last sampling time point, the concentrations of the other drugs in the water samples from site 3 did not differ significantly (p > 0.05) over time (Figure 2). When compared over collection times, the concentrations of ACY, AZI, and SDZ were higher in the water samples from site 4 on the first sampling date, and the concentrations of AMX and ERY were lower on the second sampling date (Figure 2).

Abbreviations: acyclovir (ACY), amoxicillin (AMX), aminomethylphosphonic acid (AMPA), azithromycin (AZI), ciprofloxacin (CIP), doxycycline (DOX), enrofloxacin (ENR), erythromycin (ERY), fluoxetine (FLX), glyphosate (GLY), sulfadiazine (SDZ) and sulfamethoxazole (SMX).

FIGURE 2 | Concentrations of the target pharmaceutical products in surface water samples from rivers of the Doce river watershed, Minas Gerais, Brazil. Values are means ± SD of three replicates. Values marked with * differ significantly (p < 0.05) within the same sampling site by the contrast test. The predicted no-effect concentration (PNEC) is shown as a dotted red line. NOEC: no-observed-effect concentration.

Occurrence of Glyphosate and AMPA in Surface Waters

All surface water samples were contaminated with GLY and AMPA (Figure 3). For GLY and AMPA, a significant interaction (p < 0.05) was observed between time and site of sampling (Table 2). Higher concentrations of these chemicals were observed in samples from sites 2 and 3 than in samples from sites 1 and 2 on the first day of sampling. Glyphosate and AMPA concentrations increased over time in samples from sites 1 and 2, respectively (Figure 3). At site 3, AMPA concentrations were lower on the second sampling date (Figure 3). Glyphosate and AMPA concentrations in samples from site 4 did not differ over time (Figure 3).

Ecological Risk Assessment

With the exception of ACY, DOX, and ERY, the observed concentrations of the chemicals were higher than their calculated PNEC(ecotox) (Figures 2, 3 and Table 3). For ACY, all concentrations observed at site 1 were greater than the calculated PNEC(ecotox); for site 2, only the concentrations found on the first and last sampling dates were greater than the calculated PNEC(ecotox). At all sites, the observed concentrations of AZI, DOX, ERY, and SMX were lower than the PNEC(resi.sel) (Table 3).
Only the HQs of DOX and ERY were lower than one at all sites. At sites 3 and 4, HQ < 1 was also observed for ACY (Table 3).

DISCUSSION

Of the ten pharmaceutical products studied, all had a 100% detection rate except AZI (91.6%) and DOX (41.6%) (Table 3). In general, the total concentration of all target pharmaceuticals ranged from 4,595.40 to 14,478.59 ng L−1, with fluoroquinolones (CIP + ENR) and sulfonamides (SDZ + SMX) accounting for 28.08-49.42% and 17.57-53.28%, respectively. Based on the average proportions at all sites, the ranking of the different pharmaceutical classes was as follows: fluoroquinolones ≥ sulfonamides > macrolides (AZI + ERY) (5.01-14.05%) > β-lactams (AMX) (4.05-18.65%) > antivirals (ACY) (1.5-1.9%) > antidepressants (FLX) (0-4.6%) > tetracyclines (DOX, <0.1%).

Among the fluoroquinolones, CIP was the most frequently detected antibiotic, regardless of the site and time of sampling (Figure 2). This is not surprising, since CIP is the most commonly prescribed fluoroquinolone worldwide (Andreu et al., 2007); its bactericidal effect is based on the inhibition of DNA replication through inhibition of bacterial DNA topoisomerase and DNA gyrase. CIP has been detected in milligram amounts in sewage sludge (Golet et al., 2003; Martínez-Carballo et al., 2007). In water samples, however, the detected concentrations are lower. In untreated hospital wastewater, CIP concentrations ranged from 1,100 to 44,000 ng L−1 in Vietnam and from 388 to 578 ng L−1 in Malaysia (Duong et al., 2008; Thai et al., 2018). In urban wastewater, CIP was previously detected at concentrations ranging from 242 to 415 ng L−1 in China (Low et al., 2021), and in municipal landfills, concentrations ranged from 60.2 to 4,482 ng L−1 (Wu et al., 2015). In Brazilian surface waters, CIP concentrations were below 0.41 ng L−1 in the Atibaia River (São Paulo) (Locatelli et al., 2011) and ranged from 180 to 340 ng L−1 in rivers of the four largest hydrographic catchments of the city of Curitiba (Paraná) (Beatriz et al., 2020). The higher CIP concentrations in surface waters observed in our study are likely related to the lack of wastewater treatment (which is present in the other Brazilian cities cited) and the direct discharge of urban wastewater into the waters of the Doce river watershed rivers. Similarly, a CIP concentration of 15,000 ng L−1 was observed in surface water in South Africa (Agunbiade and Moodley, 2014). Indeed, samples from the sites under the influence of direct urban discharges (sites 1, 2 and 4) had high concentrations of the antibiotics compared with the site without urban proximity (site 3). Interestingly, ENR concentrations were lower in samples from site 1 than at the other sites (Figure 2). This antibiotic is used in veterinary medicine (Rusu et al., 2015), and indeed high ENR concentrations were found at the sites near livestock (sites 2 and 3). It is important to note that ENR can be degraded to CIP through biotransformation (Walters et al., 2010), which could contribute to the CIP concentrations in the water at sites 2 and 3.

In the group of sulfonamides, SMX was detected at higher concentrations (from 332.78 to 7,112.44 ng L−1) than SDZ (from 3.55 to 61.44 ng L−1) (Figure 2; Table 3). In rivers near the city of Curitiba (Brazil), SMX was found at a concentration of 1,859 ng L−1, while SDZ was reported at a concentration of 27 ng L−1 (Beatriz et al., 2020). In South Africa, SMX was detected at concentrations of 7,300 (Matongo et al., 2015) and 14,000 ng L−1 in surface waters (Ngumba et al., 2016).
In Kenya, concentrations of up to 40,000 ng L−1 have been observed in river waters (K'oreje et al., 2016), while SDZ has been detected at concentrations of up to 40 ng L−1 in rivers in Nigeria (Oluwatosin et al., 2016). In China, up to 764.6 ng SMX L−1 has been detected in rivers (Chen and Zhou, 2014). Sulfonamides are bacteriostatic antibiotics that interfere with folic acid synthesis and are mainly used for acne and urinary tract infections, which justifies their high concentrations in the rivers near cities (sites 1 and 2). In China, sulfonamides were the major class of antibiotics found in rivers: SDZ and SMX were detected at 100% frequency and had mean concentrations of 259.6 and 7.6 ng L−1, respectively (Chen and Zhou, 2014).

Macrolides such as AZI and ERY inhibit bacterial protein biosynthesis, while the β-lactam AMX acts by binding to penicillin-binding proteins, resulting in the activation of autolytic enzymes in the bacterial cell wall. These antibiotics are used for both human and veterinary purposes. This may explain why macrolide and β-lactam concentrations were lowest at site 4, where there are no direct urban discharges and where crop cultivation is the main activity in the environment (Figure 2). Among the macrolides, ERY was observed at high concentrations in our study, regardless of the time and site of collection (Table 3; Figure 2). ERY concentrations up to 20,000 ng L−1 and 1,000 ng L−1 have been observed in surface waters in South Africa (Matongo et al., 2015) and Nigeria (Oluwatosin et al., 2016), respectively. AZI concentrations ranged up to 650 ng L−1 in Brazilian rivers (Beatriz et al., 2020) and up to 30 ng L−1 in South African rivers (Módenes et al., 2017), while AZI was not detected in Nigerian rivers (Oluwatosin et al., 2016). As for the β-lactams, AMX has been detected at concentrations up to 99.4 mg L−1 in wastewater in Egypt (Abou-Elela and El-Khateeb, 2015). In Brazil, this antibiotic has been detected at concentrations up to 1,570 ng L−1 in rivers of the Curitiba region (Beatriz et al., 2020) and up to 1,284 ng L−1 in rivers of the state of São Paulo (Locatelli et al., 2011).

Data on the concentrations of ACY and FLX are scarce in the literature. These drugs are used in human medicine, which justifies their high concentrations in areas with urban runoff (Figure 2). ACY is generally the first choice in the treatment of viral infections such as herpes simplex, Varicella zoster, herpes zoster, herpes labialis and acute herpetic keratitis (O'Brien and Campoli-Richards, 1989). In Brazilian rivers (Curitiba, Paraná), ACY concentrations ranged up to 990 ng L−1 (Beatriz et al., 2020). In Germany, ACY concentrations in river water ranged up to 190 ng L−1 (Prasse et al., 2010). Average FLX concentrations in surface waters ranged from 12 to 1,400 ng L−1 worldwide (Kolpin et al., 2002; Christensen et al., 2009). In Brazil, FLX concentrations in streams of Curitiba were as high as 620 ng L−1 (Beatriz et al., 2020). FLX is primarily used to treat depression, but it also helps treat other mental disorders such as obsessive-compulsive disorder, bulimia nervosa, and panic syndrome, and it is one of the most commonly prescribed psychotropic drugs in Brazil (Quintana et al., 2015).

As noted for some of the other drugs studied, we found relatively high concentrations of pharmaceuticals in the waters of the Doce river watershed (Figure 2). Seasonal aspects could have influenced the results obtained.
For example, the concentrations of ACY, AMX, AZI, CIP, ERY, SDZ and SMX were high at some sites on the first sampling date, which corresponds to the dry season. During a low-precipitation period, water flow decreases and, assuming that pollution sources are constant, dilution effects must play a central role in the occurrence and concentrations of pharmaceuticals in the samples collected (Locatelli et al., 2011). The flow of a river is the result of complex natural processes that occur at the catchment scale and are largely influenced by precipitation (Yunus and Nakagoshi, 2004). Changes in streamflow affect water quality (Caruso, 2001), and pollution of rivers increases when streamflow is low, due to low dilution capacity (Yunus and Nakagoshi, 2004). We clearly observed the influence of rainfall on flow (Supplementary Table S4) and on the concentrations of the studied drugs (Figure 1). At least for two of the sampling sites, the concentrations of the analyzed drugs (except for DOX and ENR) were highest when precipitation was lowest (June 2018). The higher concentrations of DOX and ENR at the higher rainfall levels (294 mm in November 2018 and 108 mm in April 2019) indicate that the source of these drugs increased during the rainy season. This could be due to increased seepage and runoff (Yunus and Nakagoshi, 2004) or to increased use of these drugs during the rainy season.

Although high concentrations of pharmaceuticals were found in the water samples, the most worrying results are associated with the observed GLY and AMPA concentrations (Figure 3). GLY and AMPA contamination levels were several times higher than those observed for the pharmaceutical products. These contaminants were observed at concentrations ranging from 51.88 to 117.07 and from 23.46 to 41.25 μg L−1, respectively, indicating that herbicides are the main source of contamination in the rivers studied. In the Paraná River watershed (Brazil), GLY concentrations ranged from 0.4 to 91.91 μg L−1 (Mendonça, 2018), while AMPA was detected at concentrations up to 14.78 μg L−1 (Da Silva et al., 2003). In another study, glyphosate concentrations up to 100 μg L−1 and AMPA concentrations up to 50 μg L−1 were detected in the Arroio Passo do Pilão watershed (Brazil). GLY is not only used in crops and eucalyptus plantations but is also widely used for weed control in Brazilian cities, and its use is often unregulated. This may explain the GLY concentrations >80 μg L−1 at sites under urban influence (Figure 3). In addition to GLY degradation, the AMPA observed in the water samples may also derive from its use in industrial and household products (such as detergents) (Levine et al., 2015; Grandcoin et al., 2017). Unlike the pharmaceuticals, concentrations of GLY and AMPA were not affected by pluviosity (except at sites 1 and 2 for glyphosate and AMPA, respectively). Considering the dilution effect of high pluviosity and river flow on river pollutants (Yunus and Nakagoshi, 2004), we hypothesize that herbicide use increased during the rainy season. In fact, glyphosate use in Brazil declines from April to September, as the herbicide is mainly applied during the rainy season, when crops are growing (Dias et al., 2021).
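The dilution argument above can be made concrete with simple mass-balance arithmetic: if the pollutant load entering a reach is roughly constant, the in-stream concentration varies inversely with discharge. The sketch below uses the dry- and wet-season minimum flows reported in the Methods (3.98 and 9.73 m³/s); the 100 g/day load is a hypothetical value chosen only for illustration.

# For a constant pollutant load L (mass/time), concentration C = L / Q.
def concentration_ug_per_l(load_g_per_day: float, flow_m3_per_s: float) -> float:
    liters_per_day = flow_m3_per_s * 1000 * 86400
    return load_g_per_day * 1e6 / liters_per_day  # g -> ug

# Hypothetical constant load of 100 g/day at dry- vs wet-season flow:
print(concentration_ug_per_l(100, 3.98))  # ~0.29 ug/L in the dry season
print(concentration_ug_per_l(100, 9.73))  # ~0.12 ug/L, i.e., ~2.4x dilution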
At concentrations as low as 5 μg L−1, the herbicide glyphosate reduced algal diversity in phytoplankton communities of freshwater streams (Smedbol et al., 2018), and the EC10 values for GLY and AMPA in the macrophyte Salvinia molesta were 16 and 6.1 μg L−1, respectively (Mendes et al., 2021). Therefore, it is reasonable to assume that the concentrations of these herbicides found in the Doce river watershed could trigger alterations of aquatic life. To assess the potential risk of these chemicals (along with the pharmaceutical products evaluated), we conducted a risk assessment. PNEC values are based on toxicological data from the literature. In this study, we selected the NOECs of species representative of those found in Brazil to calculate the PNEC(ecotox), using an assessment factor of 1,000, which represents chronic toxicity (Ikem et al., 2017). If the reported concentrations in the environment are higher than the PNEC, there is a toxicological risk to the environment. With the exception of ACY, DOX, and ERY, the concentrations of all the other chemicals studied pose a potential toxicological risk. In the case of ACY, the concentrations observed at site 1 are also of toxicological concern.

The risk level is generally classified into four groups: no risk (HQ < 0.01), low risk (0.01 ≤ HQ ≤ 0.1), medium risk (0.1 ≤ HQ ≤ 1), and high risk (HQ > 1) (Rodriguez-Mozaz et al., 2020). Only DOX had an HQ < 0.01 and thus did not pose an ecotoxicological risk to the aquatic environment. For site 3, ACY poses a low risk (HQ = 0.09), and for site 4, ACY poses a medium risk (HQ = 0.52). Similarly, ERY poses a medium risk (HQ < 1) to aquatic life, regardless of the sampling location. For all the other chemicals sampled, however, the HQ was greater than 1 (reaching up to 7,317.36), representing a high ecotoxicological risk to the aquatic environment. The mean HQ values for Cd, Pb, Cr, Zn, Cu and As in the Doce river ranged from 226.30 to 841.60 (Gabriel et al., 2020). Although the HQ indices for these metals and metalloids represent a high ecotoxicological risk, they are lower than the HQs calculated here for some chemicals (i.e., AMX, CIP, GLY, and AMPA) (Table 3). These results demonstrate the urgent need to consider emerging contaminants (and not just metals) in risk assessment, given their potential impacts on aquatic ecosystems.

It is also important to note that the concentrations of AMX, CIP, and ENR are higher than the proposed PNEC for resistance selection. Antimicrobial resistance is an emerging concern, as the spread of resistance genes is a global problem with direct detrimental effects on the economy and public health (Kent et al., 2020). Moreover, very few studies have investigated the toxicity of drug mixtures in natural water samples. For example, Gomes et al. (2020a) observed interactive effects of AMX, ENR, and oxytetracycline on Lemna minor plants, which demonstrates the importance of evaluating both the isolated and the combined toxic effects of chemicals. Clearly, toxicological testing involving exposure to a cocktail of multiple drugs is needed, especially for highly contaminated surface waters (Anh et al., 2021), as noted here.

The main objective of this study was to draw attention to the presence of considerable amounts of emerging contaminants in the waters of the Rio Doce basin which, together with other contaminants such as trace elements, can limit aquatic life. Our data also suggest that environmental factors, especially pluviosity (and its effect on water flow), play an important role in the concentrations of chemicals found in the water. The fate of organic contaminants is influenced by the physicochemical and biological properties of the water and sediments.
The fate of organic contaminants is influenced by the physicochemical and biological properties of the water and sediments. Indeed, temperature, pH, microbial activity and light conditions may affect the availability of the contaminants (Moncmanová, 2007) and alter their rates of degradation, sorption, and bioaccumulation. Therefore, we cannot comment on the exact contribution of an upstream source to the concentrations of chemicals along the river (downstream sites). To this end, studies with isotopically labeled chemicals would make it possible to elucidate the fate of these compounds, as well as the specific role of anthropogenic activities in the concentrations of emerging contaminants in the rivers of the Rio Doce basin. In a climate change scenario, however, we point out that the toxicological impacts of contaminants may increase. As a result of rising temperatures, increased drought, El Niño-Southern Oscillation events, and reduced pluviosity (Caruso, 2001), water flow may decrease, raising the concentrations, and hence the harmful effects, of chemicals in aquatic ecosystems.

CONCLUSION

Through sampling and analysis, the concentrations and distribution of 12 contaminants (pharmaceutical products and herbicides) were determined at four different sites in rivers of the Doce river watershed. Although the concentrations detected were within the range of those observed in other emerging countries, the sampled waters were highly contaminated, especially by the herbicide GLY and its metabolite AMPA. The risk assessment conducted here shows that most of the chemicals assessed are present at concentrations above the PNEC(ecotox) level, posing a potential threat to the aquatic environment. In addition, several antibiotic concentrations are higher than those known to select for antibiotic resistance, particularly those in the fluoroquinolone class. The concentrations of the chemicals studied were related to human activities in the vicinity of the sampling sites, but the lack of water treatment in urban areas could be the main cause of river contamination. Based on the HQ index, the risk assessment approach provided useful guidance on which chemicals need priority attention for future control and remediation. In this context, particular attention needs to be given to GLY, AMPA, fluoroquinolones and sulfonamides. The results show that there is an urgent need to monitor the presence of emerging contaminants in water, which, in addition to metals (the main target of water quality studies in the rivers of the Doce river watershed), may pose a risk to the environment and to humans, given the frequent use of surface water for drinking and domestic purposes by the local population.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS

MG, FV, and PJ conceived and designed the experiments and gave technical support and conceptual advice. MG, JB, RK, and PJ performed the chemical analysis. MG, JB, and PJ wrote the manuscript. FV provided technical and editorial assistance. All authors read and approved the manuscript.
Association between parenting practices and children's dietary intake, activity behavior and development of body mass index: the KOALA Birth Cohort Study

Background

Insights into the effects of energy balance-related parenting practices on children's diet and activity behavior at an early age are warranted to determine which practices should be recommended, and to whom. The purpose of this study was to examine child and parent background correlates of energy balance-related parenting practices at age 5, as well as the associations of these practices with children's diet, activity behavior, and body mass index (BMI) development.

Methods

Questionnaire data originated from the KOALA Birth Cohort Study for ages 5 (N = 2026) and 7 (N = 1819). Linear regression analyses were used to examine the association of child and parent background characteristics with parenting practices (i.e., diet- and activity-related restriction, monitoring and stimulation), and to examine the associations between these parenting practices and children's diet (in terms of energy intake, dietary fiber intake, and added sugar intake) and activity behavior (i.e., physical activity and sedentary time) at age 5, as well as BMI development from age 5 to age 7. Moderation analyses were used to examine whether the associations between the parenting practices and child behavior depended on child characteristics.

Results

Several child and parent background characteristics were associated with the parenting practices. Dietary monitoring, stimulation of healthy intake and stimulation of physical activity were associated with desirable energy balance-related behaviors (i.e., dietary intake and/or activity behavior) and desirable BMI development, whereas restriction of sedentary time showed associations with undesirable behaviors and BMI development. Child eating style and weight status, but not child gender or activity style, moderated the associations between parenting practices and behavior. Dietary restriction and monitoring showed weaker, or even undesirable, associations for children with a deviant eating style, whereas these practices showed associations with desirable behavior for normal eaters. By contrast, stimulation to eat healthy worked particularly well for children with a deviant eating style or a high BMI.

Conclusion

Although most energy balance-related parenting practices were associated with desirable behaviors, some practices showed associations with undesirable child behavior and weight outcomes. Only parental stimulation showed desirable associations with regard to both diet and activity behavior. The interaction between parenting and child characteristics in the association with behavior calls for parenting that is tailored to the individual child.

Background

Eating and physical activity (PA) habits originate in early childhood [1,2] and track into later life [3,4]. Parents can have a strong influence on their children's dietary intake and activity behavior: they control the availability of and exposure to food and activity opportunities, act as role models, provide their children with support and structure, and use specific parenting practices [5]. In contrast to the overall parenting style, which refers to general patterns of parenting and the emotional climate in which parents' behaviors are expressed, parenting practices are content-specific acts of parenting [6], such as rules about dietary intake or activity behavior.
The current study focused on the latter, i.e., behavior-specific parenting practices.

Many studies have examined the influence of food-related parenting practices and feeding styles on children's dietary intake and weight. Restricting the intake of unhealthy food items, for instance, has been found to be associated with a higher intake of those items and with a higher body mass index (BMI; see, e.g., the reviews [7-9]). Other studies, however, have found associations between restriction and desirable dietary intake behavior [e.g., 10,11]. Studies examining diet-related parenting practices other than restriction have also reported inconsistent results [7]. Promotion, stimulation or pressure to eat certain foods has been reported to have both favorable [e.g., 12-14] and unfavorable [e.g., 15] effects on children's diet, and was found to be associated with a lower child BMI [e.g., 16]. Conflicting findings have also been reported with regard to monitoring children's dietary intake, which has been associated with childhood overweight [e.g., 17], but also with a lower child BMI [e.g., 18] and a healthier diet [19]. Many of these studies used a cross-sectional design: for example, 19 of the 22 studies included in the review by Faith and colleagues [7] on the effects of feeding strategies were cross-sectional rather than longitudinal. The results of cross-sectional studies are difficult to interpret, which might explain their conflicting findings. These conflicting findings also led us not to formulate specific hypotheses for the current study regarding the directions of the associations between diet-related parenting practices and children's dietary intake and BMI. The diet-related parenting practices examined in the current study were restriction of unhealthy intake, monitoring a child's diet and stimulation of healthy intake.

As regards activity behaviors, many studies have examined the association between children's PA and parental support and encouragement to be physically active, which appear to be important positive predictors of children's PA [20]. Many other studies, however, did not find an effect on PA [21]. Explicit rules restricting television watching have been found to be associated with less sedentary behavior [e.g., 22-24], but also with lower levels of PA in boys and higher levels of PA in girls [e.g., 22]. Monitoring a child's activity has been found to be associated with increased PA [19]. The activity-related parenting practices examined in the current study were restriction of sedentary time, monitoring a child's activity behavior and stimulation of PA. As with the diet-related practices, we did not formulate specific hypotheses regarding the directions of the associations between activity-related parenting practices and children's activity behavior and BMI.

Examining the effect of different parenting practices on energy balance-related behaviors is important to ascertain which practices should be recommended to parents to prevent childhood obesity. In addition, it is important to assess to whom these practices should be recommended, which implies research into the association between background characteristics and parenting practices.
For example, the use of more controlling diet-related parenting practices, including more restriction, has been shown to be associated with several parental characteristics, including lower BMI, higher educational level and social class, both older and younger age, white ethnicity, and employment [10,19,25-28]. Pressure to eat has been found to be positively related to parental non-white ethnicity, female gender, and lower socioeconomic status, and diet monitoring to older maternal age, higher BMI and higher educational level [26,29,30]. It is not only parental characteristics that appear to be related to specific parenting practices: child characteristics have also been found to evoke different parenting practices. For example, controlling practices, encouragement or pressure, and monitoring have all been found to correlate with either higher or lower child weight [e.g., 30-33], and controlling practices were used more for girls than for boys [34]. It is less clear which background factors predict the use of activity-related parenting practices, although a study in a Latino sample showed that parental employment is associated with more control over the child's PA [19].

Based on these previous findings, we hypothesized that the following child characteristics would be associated with the use of various energy balance-related parenting practices: weight status-related variables (i.e., the child's birth weight and BMI [e.g., 30-33]) and gender [34]. We also hypothesized that children's eating style and activity style would be associated with parenting practices. As regards parental background characteristics, we expected the following variables to be associated with the parenting practices: parental BMI [26-29], educational level [10,26,30], employment [19,26], ethnicity (i.e., country of birth) [26], and age [10,19,26,27,29].

It has been claimed that there is an urgent need to know whether the effects of parenting practices are similar across different groups of children [e.g., 10,18]. Answering this question requires moderation research [35]. Several child characteristics may moderate the effects of diet-related parenting practices. Dietary restriction and control have been reported to have stronger undesirable effects on the dietary intake of girls than of boys [e.g., 7,19], but studies on the moderating role of gender in the relationship between restriction and weight status have reported mixed results [7]. Recently, we reported that restriction showed an association with desirable dietary intake for normal weight children, but not for overweight children [10]. Also, the association between restriction and desirable dietary intake behavior of 2-year-olds was found to be weaker or even absent in children with a problematic eating style (i.e., those who do not like many foods, eat reluctantly, or are slow eaters) [10]. Empirical evidence regarding moderators of parental influences on child activity behavior is, again, generally lacking, but in line with the findings in the dietary intake domain, we hypothesized that similar interactions between parenting practices and child factors would exist for the activity domain. Based on the studies referred to above, we hypothesized that the following child characteristics would moderate the associations between parenting practices and child energy balance-related behavior and BMI: weight-related variables (i.e., birth weight and BMI) [10], gender [e.g., 7,19], and eating style and activity style [10].
Figure 1 shows a summary of our main hypotheses. The present study examined parental and child associates of energy balance-related (i.e., diet-related and activity-related) parenting practices (blue arrows in Figure 1). We also examined the association between energy balance-related parenting and activity behavior and dietary intake in 5-year-old children, as well as the prospective influence of these practices on children's BMI development up to age 7 (green arrows). Finally, based on previous studies [e.g., 10,18], we examined whether child background characteristics moderated the impact of the parenting practices (red arrow).

Respondents and procedure

The KOALA Birth Cohort Study (the Netherlands) is a prospective cohort study which started in the year 2000. Healthy pregnant women were recruited from an existing cohort for a study of the etiology of pregnancy-related pelvic girdle pain, as well as through recruitment channels in 'alternative lifestyle' circles (e.g., through anthroposophist midwives and general practitioners, and organic food shops [36]). The latter group of women (17.9%) were likely to have an alternative lifestyle in terms of dietary habits (e.g., preferring organic foods), child rearing, vaccination programs, antibiotics use, etc. All participants signed informed consent, and approval was obtained from the Maastricht University/University Hospital Maastricht medical ethics committee. In total, the mothers of 2834 children participated and completed mail-based questionnaires during pregnancy and regularly after birth. Ten children were excluded because of congenital defects (e.g., Down syndrome).

Questionnaires

When the children were around 5 years old, parents completed a questionnaire regarding their energy balance-related parenting practices, their child's dietary intake, activity behavior, weight and height, and several other child and parent characteristics. A total of 2026 questionnaires (71.7%) were returned. Children for whom the 5-year questionnaire was returned had a slightly higher birth weight than children for whom this questionnaire was not returned (3521 vs. 3468 grams, p < 0.05). At age 7, a follow-up questionnaire was sent, assessing only child weight and height; questionnaires regarding 1819 (89.8%) children were returned. There was no selective attrition between age 5 and age 7 with regard to BMI z-score at age 5 (p > 0.05).

Child background characteristics

The child's eating style was assessed on two dimensions: picky eating [37] and appetite. We also assessed whether the child had an active activity style. For more information about these concepts, see Table 1. In addition, the child's birth weight (in grams) and gender were assessed.

Parental background characteristics

The questionnaire assessed the number of working hours per week of the father and mother, their educational level and their country of birth. Educational level was recoded into three levels (low, medium and high), in line with international classification systems [38]. Country of birth was recoded into 'Netherlands' versus 'other'. Maternal age at the time of the child's birth was also assessed.

Parenting practices

We assessed parenting practices regarding children's dietary intake and activity behavior. The items used to assess these parenting practices and the corresponding Cronbach's α values are listed in Table 1.
The parenting practices 'restriction' of unhealthy intake and 'monitoring' were assessed using the validated scales of the Child Feeding Questionnaire (CFQ [37]), translated into Dutch. Since our study focused on parenting practices in relation to weight gain prevention, we considered 'stimulation of healthy intake' to be more suitable than the original 'pressure to eat' scale of the CFQ; pressure to eat is a practice that is often used to increase children's weight [39]. In addition, we 'converted' the diet-related items of the CFQ to the activity context in order to create an 'Activity-related Parenting Questionnaire', consisting of three scales similar to the diet-related CFQ scales: 'restriction of sedentary behavior', 'monitoring activity' and 'stimulation to be physically active'. The similarity between the diet-related and activity-related scales enabled cross-behavioral comparison of the correlates and effects of the energy balance-related parenting practices. The 'Activity-related Parenting Questionnaire' has, however, not been previously validated.

Child dietary intake, activity behavior and BMI

Children's dietary intake was assessed using a Food Frequency Questionnaire (FFQ) covering intake during the 4 weeks preceding the questionnaire. This FFQ was specifically developed to assess children's energy intake, and was validated against the doubly labeled water method [40]. The FFQ consisted of 71 items. Additional questions were asked for 27 foods, covering the specific types or brands consumed and the preparation methods. Parents indicated their child's habitual consumption frequency of each of the food items by checking 1 of 6 frequency categories ranging from 'never' to '6-7 days a week'. Respondents were asked to report portion sizes in natural units (e.g., pieces, slices), household units (e.g., glasses, spoons) or grams (e.g., grams of meat). Parents were asked to measure the volume of the cups and glasses they used for the children. The average energy intake (kJ) and fiber intake (in grams per MJ) per day were calculated using the 2001 Netherlands Food Composition (NEVO) table [41]. Nutritional values of products that were not (or not yet) included in the 2001 NEVO table were provided by a dietician. Added sugar intake (expressed as a percentage of total energy intake), which is not included in the NEVO table, was calculated using values from an earlier study [42]. Added sugar was defined as the amount of saccharose, glucose and/or fructose added to a food or meal by the consumer or the manufacturer.
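To illustrate the FFQ-to-nutrient calculation described above, the sketch below converts reported frequencies and portion sizes into daily energy, fiber density, and added sugar contribution. The food items, composition values, and the roughly 17 kJ per gram energy factor for sugar are illustrative assumptions; the study's actual calculations used the 2001 NEVO table.

```python
# Sketch of the FFQ-derived intake variables: habitual daily amount =
# (days per week / 7) x portions x portion size (g), combined with per-100 g
# composition values (as from a table like NEVO). All numbers are illustrative.

FOOD_TABLE = {  # per 100 g: (energy kJ, dietary fiber g, added sugar g)
    "wholemeal bread": (990.0, 6.0, 0.0),
    "fruit squash":    (180.0, 0.1, 8.0),
}

# (food, days per week, portions per day, portion size in g) -- hypothetical
FFQ_ANSWERS = [("wholemeal bread", 7, 2, 35.0), ("fruit squash", 4, 1, 150.0)]

energy_kj = fiber_g = sugar_g = 0.0
for food, days, portions, size_g in FFQ_ANSWERS:
    grams_per_day = days / 7 * portions * size_g
    kj, fib, sug = FOOD_TABLE[food]
    energy_kj += grams_per_day * kj / 100
    fiber_g   += grams_per_day * fib / 100
    sugar_g   += grams_per_day * sug / 100

print(f"energy intake: {energy_kj:.0f} kJ/day")
print(f"fiber density: {fiber_g / (energy_kj / 1000):.2f} g/MJ")
print(f"added sugar:   {sugar_g * 17 / energy_kj * 100:.1f} En%")  # ~17 kJ/g
```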
Children's activity behavior was assessed using questions based on a standard questionnaire for measuring activity behavior used in Dutch Youth Health Care [43]. Parents were asked on how many days in a normal week during the last 4 weeks their child had gone to school on foot or by bicycle, had played sports at school (e.g., during physical education lessons), had played sports outside school at a sports club, and had played outside (outside school hours). A second item assessed the average duration of each of these activities. The duration and number of days were multiplied to calculate the number of minutes spent on a particular activity per week. The number of minutes spent on the various activities was then summed to calculate the total number of minutes of physical activity per week, which was divided by 7 to obtain the average time (in minutes) the children were physically active per day. Sedentary screen-based behavior was assessed in a similar manner, asking parents about their child's television watching (including videos and DVDs) and computer playing.

[Table 1: Descriptive and scale information of child characteristics and parenting practices (N = 2021)]

In the 5-year questionnaire, parents were asked to report their child's weight and height (measured without shoes and clothes, specified to one decimal) in order to calculate the child's body mass index (BMI, i.e., weight (kg)/height (m)²). BMI was then recoded into BMI z-scores relative to the 1997 national reference population (i.e., the Fourth Dutch National Growth Study [44]). At age 7, parents were asked to report their child's height and weight again, and these values were recoded into BMI z-scores for the age of 7 years.
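The two derivations above are simple enough to show in full. The sketch below computes the average daily activity time and a BMI z-score via the LMS method commonly used for growth references; the activity entries and the L, M, S values are invented placeholders, not the 1997 Dutch reference values.

```python
# (1) Average daily physical activity: sum over activities of
#     days/week x minutes/occasion, divided by 7.
# (2) BMI z-score via the LMS method often used for growth references:
#     z = ((BMI / M)**L - 1) / (L * S). L, M, S below are invented
#     placeholders, not values from the Fourth Dutch National Growth Study.

activities = [  # (activity, days per week, minutes per occasion) -- hypothetical
    ("walking/cycling to school", 5, 15),
    ("sports club", 1, 60),
    ("outdoor play", 6, 45),
]
weekly_minutes = sum(days * minutes for _, days, minutes in activities)
print(f"physical activity: {weekly_minutes / 7:.0f} min/day")

def bmi_z(bmi: float, L: float, M: float, S: float) -> float:
    """LMS transformation of a BMI value to a z-score."""
    return ((bmi / M) ** L - 1) / (L * S)

weight_kg, height_m = 19.0, 1.12
bmi = weight_kg / height_m**2
print(f"BMI = {bmi:.1f} kg/m2, z = {bmi_z(bmi, L=-1.6, M=15.5, S=0.08):.2f}")
```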
Data analyses

The analyses were conducted using SPSS 15.0. Cronbach's α values were calculated as an estimate of the lower bound of the reliability of the scales used (e.g., the CFQ scales). All analyses described below were adjusted for recruitment group (alternative versus conventional lifestyle), and p-values < 0.05 were considered statistically significant.

First, linear regression analyses were used to examine child and parent background correlates of the use of the six energy balance-related parenting practices (i.e., restriction of intake, monitoring intake, stimulation of healthy intake, restriction of sedentary time, monitoring activity, and stimulation to be active), as illustrated by the blue arrows in Figure 1. The correlates examined were child background characteristics (child gender, birth weight, BMI z-score at age 5, activity style, hungry eating style and picky eating style) and parental background characteristics (parental BMI, employment, educational level, and country of birth; maternal age). All correlates were entered simultaneously, correcting for potential confounding by the other variables.

Second, linear regression models were fitted to assess the associations between the parenting practices and each of the six outcome variables: energy intake, fiber intake, added sugar intake, PA, and sedentary behavior (all at age 5), and BMI z-score at age 7 (see Figure 1, green arrows). The diet-related parenting practices were included in the analyses using the dietary intake variables as outcomes, while the activity-related parenting practices were included in the analyses using PA and sedentary time as outcomes. Both diet-related and activity-related parenting practices were included in the analyses examining the influence on BMI at age 7. These analyses were adjusted for the child and parental background characteristics described above, including child BMI z-score at age 5; the analyses with BMI z-score at age 7 as a dependent variable thus reflected BMI z-score development between ages 5 and 7. These analyses were repeated after excluding children who were underweight (BMI z-score < 5th percentile) at age 5, to examine whether this affected the findings.

Third, in order to examine whether child characteristics moderated the association between parenting practices and children's energy balance-related behavior and BMI development, we calculated interaction terms between the parenting practices and the various child characteristics (i.e., child gender, birth weight, BMI z-score at age 5, activity style, hungry eating style and picky eating style; see Figure 1, red arrow). The interaction terms were added to the regression analyses described above (i.e., the model including the parenting practices, adjusted for the parent and child background characteristics) in a separate step using a stepwise forward entering procedure [45]. This forward procedure involved adding the interaction term that had the highest correlation with the unexplained variance of the outcome variable to the model, on condition that it significantly improved the predictive value of the model; the procedure was repeated until the predictive value of the model could no longer be significantly improved by any of the interaction terms not yet included [45]. Subsequently, stratified linear regression analyses were performed for the interaction terms included in this separate step, in order to examine the association with the parenting practice in the different strata of the moderator variable (i.e., the child characteristic). Continuous variables (e.g., child birth weight) were dichotomized for this purpose, using a median split. We only report the interactions for which the association between the parenting practice and the outcome was statistically significant in either or both of the strata of the moderator variable.
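As a rough illustration of this moderation step, the sketch below fits an interaction model and then re-estimates the association within median-split strata. It uses Python with pandas and statsmodels; the file name and column names are hypothetical stand-ins for the study's variables, not the KOALA data dictionary.

```python
# Moderation sketch: test a parenting practice x child characteristic
# interaction, then follow up with stratified models (median split).
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("koala_age5.csv")

# Does a hungry eating style moderate the restriction -> energy intake link?
interaction = smf.ols(
    "energy_intake ~ restriction * hungry_score + gender + birth_weight + bmi_z5",
    data=df,
).fit()
print(interaction.summary().tables[1])  # inspect the restriction:hungry_score term

# Stratified follow-up within each half of the median split.
hungry = df["hungry_score"] > df["hungry_score"].median()
for label, stratum in df.groupby(hungry):
    fit = smf.ols(
        "energy_intake ~ restriction + gender + birth_weight + bmi_z5",
        data=stratum,
    ).fit()
    print(f"hungry={label}: beta(restriction) = {fit.params['restriction']:.3f}")
```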
Results

Of the 2026 children participating in the questionnaire survey around age 5, 51.2% were male. The children's mean daily energy intake was 6176 kJ (1467 kcal), with a standard deviation (SD) of 1286 kJ (306 kcal). The children consumed an average of 2.5 grams of dietary fiber per MJ of energy intake (SD = 0.6), while added sugar intake contributed 15.8% (SD = 6.6%) to their total energy intake. Children were physically active for an average of 116 minutes per day (SD = 55), and spent 59 minutes per day on sedentary screen-based activities (SD = 42). Mean BMI z-score was -0.27 (SD = 0.99) at age 5 and -0.29 (SD = 0.94) at age 7. Descriptive information on the children's eating styles and activity style is listed in Table 1. Mean maternal BMI was 24.0 kg/m² (SD = 3.8) and mean paternal BMI was 25.0 kg/m² (SD = 3.1). Mothers worked an average of 18.0 hours (SD = 11.1) and fathers 37.8 hours (SD = 10.1) per week. A total of 3.0% of the mothers and 3.7% of the fathers had not been born in the Netherlands. Educational level was high for 54.1% of the mothers and 53.2% of the fathers, medium for 38.0% and 33.9%, and low for 7.9% and 12.9%, respectively. Average maternal age at the time of the birth of the child was 32.2 years (SD = 3.7).

Correlates of parenting practices

Several child characteristics were significantly related to the parenting practices used at age 5 (Table 2). Parents imposed more dietary restriction on girls than on boys; the reverse was true for restriction of sedentary time, for which boys were more restricted than girls. A higher child BMI z-score (at age 5) was associated with more dietary restriction and more stimulation of healthy intake. Dietary intake was more restricted by parents whose child had a hungry or picky eating style. Children who were picky eaters were less likely to be stimulated to eat healthy. Children with an active activity style were less likely to be restricted in their sedentary time, but also more likely to be stimulated to be physically active than their peers.

Various maternal characteristics were associated with the parenting practices (Table 2). A higher maternal BMI was associated with less restriction and less stimulation with regard to dietary intake. Maternal educational level was positively associated with stimulation of healthy intake, stimulation to be physically active and restriction of sedentary time, while mothers' working hours were negatively related to the monitoring of both dietary intake and activity behavior, and to stimulation to be physically active. Paternal educational level was inversely associated with monitoring of dietary intake.

[Table 2: Child and parent correlates of parenting practices at the child's age of 5 years (N = 2021). Parenting practices (diet-related: restriction of unhealthy intake, monitoring, stimulation of healthy intake; activity-related: restriction of sedentary time, monitoring, stimulation of PA) were assessed on a scale from 1-5. All independent variables were entered simultaneously, and all analyses were adjusted for recruitment group. Eating style was only included as a predictor for the diet-related practices, and activity style only for the activity-related practices. *p < 0.05; **p < 0.01; ***p < 0.001. BMI = body mass index, PA = physical activity, NL = the Netherlands.]

Associations between parenting practices and diet and activity behavior at age 5

Stimulation of healthy intake and monitoring of a child's diet were associated not only with higher dietary fiber intake, but also with lower added sugar intake (Table 3). Children's energy intake was not associated with the parenting practices. Restriction of sedentary behavior was related to more sedentary time and less PA. Stimulation to be active was positively associated with PA, and negatively with sedentary behavior.

Associations between parenting practices and BMI development from age 5 to age 7

All analyses regarding BMI at age 7 were corrected for BMI at age 5, so that the BMI results at age 7 reflect BMI development from 5 to 7 years of age. Stimulation of healthy intake was negatively associated with BMI development up to age 7, while restriction of sedentary time was positively related to BMI development (Table 4). Repeating the analyses while excluding children who were underweight at age 5 (N = 187) did not substantially change the results.

Child background characteristics as moderators of parenting practice impact

There were several significant interactions between parenting practices and child background characteristics, an overview of which is provided in Figure 2. The green bars in Figure 2 represent associations between parenting practices and desirable behavior (i.e., increased healthy intake/decreased unhealthy intake for diet-related practices; increased PA/decreased sedentary time for activity-related practices), while the red bars represent associations with undesirable behavior (i.e., decreased healthy intake/increased unhealthy intake; decreased PA/increased sedentary time). The associations between the diet-related practices and children's dietary intake were found to be moderated by the children's eating style and weight status.
Restriction was associated with increased energy intake for children who were characterized as relatively hungry (standardized β = 0.18, p < 0.05), but not for their peers with a normal appetite (β = -0.05, non-significant (n.s.); see Figure 2a). The desirable associations between diet monitoring and both dietary fiber and added sugar intake were not found for hungry children or picky eaters (absolute β values < 0.05, n.s.). These desirable associations were, however, found for children who were not reported to be relatively hungry (β = 0.11, p < 0.001 for fiber intake; β = -0.09, p < 0.01 for sugar intake; see Figures 2b and 2c) and for children who were not reported to be picky eaters (β = -0.13, p < 0.01 for sugar intake; Figure 2d). By contrast, the desirable association between stimulation of healthy intake and added sugar intake was found for picky eaters (β = -0.17, p < 0.001), but not for normal, non-picky eaters (β < 0.01, n.s.; Figure 2e). Stimulation of healthy intake showed a slightly stronger desirable association with fiber intake for children with a BMI above the median at age 5 (β = 0.22, p < 0.001) than for children with a lower BMI at age 5 (β = 0.15, p < 0.001; Figure 2f). There were no interactions between diet-related practices and the child's gender.

[Table 3: Association between parenting practices at the child's age of 5 and the child's dietary intake and activity behavior at age 5 (N = 2021). Analyses were adjusted for child BMI z-score at age 5, recruitment group (alternative vs. conventional), child background characteristics (gender, birth weight, activity style and eating style) and parent background characteristics (parental BMI, educational level, employment and country of birth; maternal age). Diet-related parenting practices were only included in the analyses with dietary behaviors as dependent variables, and activity-related practices only in the analyses with activity behaviors as dependent variables. **p < 0.01; ***p < 0.001. kJ = kilojoules; g/MJ = grams of nutrient per MJ of total energy intake; En% = energy intake of nutrient as a percentage of total energy intake; PA = physical activity.]

Regarding activity behavior, the desirable association between stimulation to be physically active and the child's PA was only found for children with a birth weight below the median (β = 0.19, p < 0.001), and not for children with a higher birth weight (β = 0.04, n.s.; see Figure 2g). There were no interactions between activity-related practices and the child's gender or activity style.

Discussion

The current study examined child and parent correlates of energy balance-related parenting practices, as well as the association between these practices and diet and activity behavior at age 5, and BMI development from age 5 to 7 years. Parents were found to be more restrictive regarding their daughters' diet than their sons', which is in line with previous research [34]. However, the current study also showed that girls were less restricted than boys when it came to sedentary time. Parents may have different priorities for boys and girls when it comes to restricting unhealthy behaviors; perhaps inactivity is of greater concern to parents where their sons are concerned, while overconsumption is of greater concern where their daughters are concerned.
[Table 4: Associations between parenting practices at age 5 and BMI z-score development from age 5 to age 7. Analyses were adjusted for child BMI z-score at age 5, recruitment group (alternative vs. conventional), child background characteristics (gender, birth weight, activity style and eating style) and parent background characteristics (parental BMI, educational level, employment and country of birth; maternal age). Underweight at age 5 was defined as a BMI z-score < 5th percentile. †p < 0.10; *p < 0.05. BMI = body mass index, PA = physical activity.]

Parental restriction of unhealthy intake was also positively associated with child BMI, in agreement with previous studies [31]. Child BMI was also positively associated with parental stimulation of healthy intake. Both the increased restriction of unhealthy intake and the increased stimulation of healthy intake in heavier children might reflect reactions of parents to their child's weight, trying to get heavier children to eat a healthier diet so as to decrease their weight. A similar mechanism might be operative for children with a hungry or picky eating style, who were shown to be more restricted by their parents: parents might feel that these children need more external control over their eating to compensate for their deviant eating style. In view of the cross-sectional nature of our data, however, we cannot exclude the possibility that these children's eating style actually became more deviant in reaction to the strict control their parents exercised over their diet. In line with the latter explanation, various studies have reported that high parental control over child eating interferes with children's self-control over their intake [e.g., 9], thus leading to a deviant eating style.

There were also several parental characteristics that predicted which practices parents would apply. Maternal BMI was found to be inversely associated with dietary restriction and stimulation, which confirms previous findings [26-28]. Maternal educational level was positively associated with stimulation of both healthy intake and PA. This adds to previous research showing that parental education is positively associated with restriction and other controlling practices [10,25,26]. The number of hours that mothers worked was negatively associated with monitoring their children's diet and activity behavior and with stimulation to be physically active. A similar association was previously reported by Brown and colleagues [25], who showed that parents who stayed at home to take care of their children exercised stricter control over their children's diet. As working parents leave part of the child rearing to others, such as child-care staff [e.g., 46], they may be inclined to be less strict during the limited time they can spend with their children.

With regard to the associations between parenting practices and children's behavior and BMI development, we found that monitoring a child's diet and stimulating healthy intake were both associated with the child having a healthy diet. Stimulation of healthy intake even had a desirable effect on the child's BMI development up to the follow-up at age 7. By contrast, dietary restriction was not associated with any of the dietary outcomes, nor was it associated with BMI development.
Previous studies have shown conflicting results with regard to all three of the above parenting practices (monitoring [e.g., 17-19]; stimulation [e.g., 12-15]; restriction [e.g., 7-11]), with some studies supporting our findings and some contradicting them. We believe that the key to resolving these conflicting findings might lie in the interaction between children and parents. In line with our hypotheses based on previous studies [e.g., 10,18], the current study showed that the associations between parenting practices and child behavior and weight development depended on the children's characteristics. Dietary restriction was associated with undesirable dietary intake behaviors in children with a deviant eating style (i.e., children who were relatively hungry compared to peers). In line with this, previous research showed that the associations between restriction and desirable dietary intake behavior at a very young age (2 years) were partly lacking in children with deviant eating styles [10]. Analogous to our findings with regard to restriction, the associations between monitoring and a desirable child diet were not found for relatively hungry children or picky eaters. By contrast, stimulation to eat healthy was found to be specifically beneficial for picky eaters, as well as for children with a high BMI. In line with previously raised hypotheses [10], this indicates that although restriction and monitoring might be less suitable for children with certain unfavorable characteristics (e.g., a deviant eating style or high BMI), stimulating these children to eat a healthy diet seems all the more effective. It is worrying, however, that picky eating also correlated with less parental stimulation to eat healthy. Educating parents might therefore be an important step toward improving children's diet, perhaps especially for children with a deviant eating style.

The effects of rules about television viewing on activity behaviors have previously been found to depend on the child's gender, with desirable effects for girls but undesirable effects for boys [22]. We did not find indications of such a difference in the current study, but we did find undesirable correlations between restriction of sedentary time and behavior and BMI development for both boys and girls: restriction was associated with increased sedentary time, decreased PA, and increased BMI development up to age 7. This contradicts previous studies showing that explicit rules restricting children's television watching were associated with less viewing time [22-24]. An explanation for these contradictory findings might lie in the assessment of restriction of sedentary time in the current study, which not only included explicit rules limiting television and computer use, as in the previous studies, but took a broader view of restrictive parenting. For example, the measure of restriction of sedentary behavior in the current study included items assessing what parents thought would happen if they did not restrict their child's sedentary behavior (see Table 1). The inclusion of such broader items was based on the diet-related restriction scale of the CFQ [37]. Stimulation to be active was positively associated with children's PA and negatively with sedentary time in our study, which is in line with a review showing that encouragement and support are important predictors of increased PA [20].

The findings of the current study have implications for both research and practice.
With regard to research, studies into the effects of parenting practices that do not incorporate the possibility of moderation by child characteristics will tend to produce conclusions that strongly depend on their study population. In addition to the moderators identified in the current study, previous research has revealed several additional child factors that moderate the effects of parenting practices, including the child's personality, temperament [10,11] and gender [7]. These interactions might also contribute to the many contradictions in the current evidence base on diet-related parenting practices. Therefore, we believe that research into the effects of parenting practices cannot be limited to the direct association between practices and outcomes, but should always incorporate a theory-based examination of possible moderation effects [35,47]. The practical implication of the current findings is that overweight prevention interventions targeted at parenting practices should be tailored to individual child characteristics, since a specific parenting practice might be beneficial for one child, but useless (or even potentially disadvantageous) for another.

The current study had several strengths and limitations. One of the strengths is that it included a longitudinal follow-up to assess the effects on BMI development. However, the behavioral outcomes were only assessed cross-sectionally. Thus, we cannot establish whether these behaviors are the consequence of certain parenting practices, or whether they perhaps evoke these parenting practices. The same goes for eating style and activity style, which we regarded as relatively stable child characteristics and therefore included as predictors of parenting practices; they could, however, also be influenced by parenting practices. Many of the previous studies in this research area have limited themselves to cross-sectional explorations, and there is a need for prospective research to establish causality [e.g., 7,21]. It is reassuring, though, that the associations between two of the parenting practices (i.e., stimulation of healthy intake and restriction of sedentary time) and behavior were supported by the associations with later BMI development, pointing in the same direction. An additional strength is that the data in the current study were assessed prospectively, limiting the risk of recall bias and other problems inherent in retrospective research.

A major limitation of the current study is that all data, including dietary intake, activity behavior and anthropometrics, were reported by the parents, which may have led to bias. However, previous research has shown that parental reports of weight and height differ little from measured data [48]. An additional limitation is that the Cronbach's α values of some of our scales were relatively low; although a Cronbach's α ≥ 0.6 is generally considered acceptable [49], some authors advocate different cut-off points. Furthermore, caution is warranted when generalizing our results to the broader population of young Dutch children. Parents with an 'alternative', relatively healthy lifestyle were overrepresented in our sample due to the choice of recruitment methods, i.e., recruiting some of the women from 'alternative lifestyle' circles [36]. The relatively healthy average lifestyle of our study sample is reflected in the children's relatively low mean BMI z-scores.
However, secondary analyses showed that excluding the children who were underweight at age 5 did not change our findings. Moreover, all analyses were adjusted for recruitment channel. Finally, it may be noted that the reported effect sizes are small, indicating that the amount of variance in behavior and weight status explained by the parenting practices is limited. This may be partly attributable to the fact that parenting behavior is a concept that is hard to assess, and there is no consensus about the proper way to measure it; there are dozens of questionnaires assessing diet-related parenting practices, activity-related parenting practices, or both [e.g., 37,50-53]. We feel quite confident, though, about the instruments adapted from the CFQ [37] for the current study, although the diet-related 'stimulation of healthy intake' scale and the 'Activity-related Parenting Questionnaire' had not been previously validated. The fact that our adapted scales for 'stimulation of healthy intake' and 'restriction of sedentary time' predicted BMI change from age 5 to age 7 may be considered reassuring in this respect. Future research would benefit from a consensus about feasible and valid measurement methods.

Conclusions

The current study showed that although most energy balance-related parenting practices were associated with desirable behaviors, there are also practices (e.g., restriction of sedentary time) that are associated with 5-year-old children's behavior and subsequent weight outcomes in an undesirable sense. Stimulating a child seems to be an effective practice for achieving both a healthy diet and a healthy activity pattern. However, the associations between several of the parenting practices and child behavior were found to depend on child characteristics, which calls for parenting that is tailored to each individual child.
Recent Development of Inorganic Nanoparticles for Biomedical Imaging

Inorganic nanoparticle-based biomedical imaging probes have been studied extensively as a potential alternative to conventional molecular imaging probes. Not only can they provide better imaging performance, but they can also offer greater versatility for multimodal, stimuli-responsive, and targeted imaging. However, inorganic nanoparticle-based probes are still far from practical use in clinics due to safety concerns and less-than-optimal efficiency. In this context, it is valuable to look over the underlying issues. This outlook highlights the recent advances in the development of inorganic nanoparticle-based probes for MRI, CT, and anti-Stokes shift-based optical imaging. Various issues and possibilities regarding the construction of imaging probes are discussed, and future research directions are suggested.

■ INTRODUCTION

Bioimaging refers to the visualization of biological structures and processes. A variety of techniques, each with its own advantages, have been developed for that purpose to meet the needs of various clinical and laboratory settings.1−4 In many cases, imaging probes that can label target molecules or organs are used to provide enhanced visibility and to enable the acquisition of more detailed structural and functional information.5−7 Consequently, the use of imaging probes is becoming indispensable for biological research and disease diagnosis. Recent advances in the development of imaging probes have enabled bioimaging at the subcellular or molecular level.8−10 That said, the majority of the imaging probes currently used in clinics are organic molecules or metal−organic compounds,11−13 whose utility is limited by their intrinsic physical and physiological properties. To list a few examples, fluorescent dyes used for optical imaging suffer from photobleaching,14 and magnetic resonance imaging (MRI) contrast agents made of Gd³⁺-chelates exhibit a weak contrast effect due to their low magnetic moment.15 These small molecule-based probes also have a short circulation time in vivo, resulting in poor targeting efficiency and insufficient imaging enhancement.16

Nanotechnology has facilitated the development of unprecedented imaging probes with outstanding performance.17−19 Inorganic nanoparticles are among the most widely studied materials in this regard due to their unique physical and chemical properties, which originate from their nanoscale dimensions.20 Various nanoparticle probes for bioimaging have been developed that exploit magnetic, X-ray attenuation, and optical properties (Figure 1). For example, magnetic nanoparticles (e.g., superparamagnetic iron oxide nanoparticles) have been applied as strong T2 MRI contrast agents, showing much improved detection sensitivity over conventional Gd³⁺-based MRI contrast agents.21−23 Nanoparticles of high-Z elements (e.g., gold,24,25 bismuth,26−28 and tantalum29,30) have been studied as enhanced computed tomography (CT) contrast agents owing to their high X-ray attenuation. The better optical and chemical stability of quantum dots (QDs), and their more easily tunable emission wavelength compared with fluorescent dyes, enable the use of QDs as robust fluorescent tags in optical imaging.31−33
Despite these advantages, inorganic nanoparticle-based imaging probes still have many drawbacks that prevent their extensive use in clinical settings, including the magnetic susceptibility artifacts of T2 MRI contrast agents,34 photoinduced tissue damage from the ultraviolet (UV) excitation sources used for QDs,35 and the potential toxicity of heavy metal-containing nanoparticles.36,37 As a result, very few nanoparticle probes are approved for clinical use.

Many efforts have been made in recent years to address the limitations of typical inorganic nanoparticle imaging probes. To overcome the intrinsic limitations of T2 MRI, extremely small iron oxide nanoparticles have been utilized as T1 MRI contrast agents.38,39 The shallow tissue penetration depth of UV excitation can be circumvented by using near-infrared (NIR) light for the excitation of nanoparticle probes.40−43 Moreover, several nanoparticle surface modification methods have been developed to provide enhanced biocompatibility and functionalities such as stimuli-responsiveness, targeted imaging, and therapy.44 Here, we focus on the recent progress in inorganic nanoparticle probes for MRI, CT, and anti-Stokes shift-based optical imaging, whose characteristics are summarized in Table 1. We discuss various issues that need to be considered when developing nanoparticle probes. Finally, we propose future research directions for the next generation of imaging probes.

■ MRI CONTRAST AGENTS

MRI is a noninvasive medical imaging technique based on the principle of nuclear magnetic resonance (NMR).45 In a strong magnetic field, hydrogen nuclei absorb resonant radiofrequency pulses, and the excited nuclei subsequently return to the initial state by emitting the absorbed radiofrequency energy. MRI contrast is generated by the different relaxation characteristics of the hydrogen atoms in tissues, which are affected by the presence of nearby magnetic materials. For example, paramagnetic materials enhance the longitudinal (T1) relaxation processes, producing brighter MR signal, while superparamagnetic and ferromagnetic materials accelerate the transverse (T2) relaxation processes, resulting in hypointense MR signal. Using these properties, complexes of paramagnetic gadolinium ions (Gd³⁺) and superparamagnetic iron oxide nanoparticles (SPIONs) have been used as T1 and T2 contrast agents, respectively.46
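This relationship between agent concentration and relaxation is commonly quantified by the relaxivities r1 and r2, via 1/Ti = 1/Ti,0 + ri·C (i = 1, 2). The sketch below illustrates the idea with rough, order-of-magnitude numbers; they are illustrative assumptions, not values for any specific agent discussed in this outlook.

```python
# Relaxation shortening by a contrast agent: 1/T_i = 1/T_i0 + r_i * C,
# where r_1, r_2 are relaxivities (mM^-1 s^-1) and C is concentration (mM).
# All parameter values are illustrative order-of-magnitude numbers.

def relaxation_time(T0_s: float, r_per_mM_s: float, conc_mM: float) -> float:
    return 1.0 / (1.0 / T0_s + r_per_mM_s * conc_mM)

T1_0, T2_0 = 1.2, 0.10            # baseline tissue T1, T2 in seconds
agents = {                        # name: (r1, r2), illustrative values
    "paramagnetic T1 agent":      (4.0, 5.0),
    "superparamagnetic T2 agent": (10.0, 150.0),
}

conc = 0.1  # mM
for name, (r1, r2) in agents.items():
    t1 = relaxation_time(T1_0, r1, conc)
    t2 = relaxation_time(T2_0, r2, conc)
    print(f"{name}: T1 {T1_0:.2f} -> {t1:.2f} s, T2 {T2_0:.3f} -> {t2:.3f} s")
# A low r2/r1 ratio favors bright T1 contrast; a high ratio darkens T2 images.
```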
Recently, though, most nanoparticle-based MRI contrast agents have been withdrawn from the market, leaving Gd(III) complexes to dominate the current market for MRI contrast agents.47 This situation brings up a question: is it still worth pursuing nanoparticle-based MRI contrast agents? To deal with this question, it is necessary to consider various factors including safety, efficacy, and market share. First-generation magnetic nanoparticle-based T2 MRI contrast agents such as Feridex and Resovist were used to detect liver lesions, and second-generation agents such as Combidex were developed for the diagnosis of lymph node metastases.34 They were withdrawn from the market not because of safety concerns, but rather due to their small market shares: T1 contrast agents are preferred in clinics because they give bright MR images and, more importantly, Gd(III)-based T1 contrast agents are able to cover most organs, including the liver.48 Furthermore, the contrast effects of the early generation magnetic nanoparticle-based contrast agents were not sufficiently strong owing to their small core size and low crystallinity.49

Newly developed magnetic nanoparticles have a strong chance to compete with the Gd(III)-based contrast agents.17 First, while serious side effects of Gd(III) complexes, such as nephrogenic systemic fibrosis, are an issue of major concern,50 iron oxides are generally regarded as benign and biologically tolerable.51 When intravenously injected, iron oxide nanoparticles are typically degraded in the liver and spleen, and subsequently incorporated into iron metabolic pathways.52 Indeed, although the early generation SPION-based contrast agents for intravenous injection are no longer available in clinics, iron oxide nanoparticles are still used for the treatment of iron deficiency anemia53 and for MRI of the gastrointestinal tract via oral administration.54

Second, nanoparticle syntheses based on the thermal decomposition of metal complexes yield high-quality nanoparticles with tunable size and superior crystallinity.23,55 As a result, the magnetic property of the nanoparticles can be controlled from nearly paramagnetic to ferromagnetic by tuning their size from a few to ∼100 nm (Figure 2a,b). Such modulation of nanoparticle size allows magnetic nanoparticles to be used either as a nontoxic alternative to Gd(III)-based T1 contrast agents or as highly sensitive T2 contrast agents. For example, extremely small-sized iron oxide nanoparticles (ESIONs) less than 3 nm in core size exhibit a large T1 contrast effect in high-resolution MR angiography.38 On the other hand, ferrimagnetic iron oxide nanoparticles (FIONs) with a diameter larger than 30 nm enable highly sensitive T2-weighted MRI of individual cells due to their strong magnetic properties and facile cellular uptake.56 In addition, FIONs with an average core size of 22 nm exhibit ∼7 times stronger T2 contrast effects than those of the first generation SPION-based agents predicted by outer-sphere relaxation theory.21 Such a strong contrast effect can be attributed to the balance between the magnetization and the diffusion rate of the 22 nm-sized FIONs, which are directly and inversely proportional to the nanoparticle size, respectively.57 Moreover, it is also possible to control the MR contrast effect by changing the magnetic composition of the nanoparticles. For instance, the addition of paramagnetic Gd³⁺ ions into iron oxide nanoparticles improves the T1 contrast effect due to the increased interactions between the Gd³⁺ ions and water molecules.58 Likewise, manganese ferrite and zinc-doped ferrite nanoparticles show increased net magnetization, resulting in a much stronger T2 contrast effect.22,59

Third, while the modification of Gd(III) complexes usually requires complicated multistep organic reactions, the surface of nanoparticles can be modified relatively easily with various functional molecules using conventional bioconjugate chemistry.60 Since the interactions between biological tissues and nanoparticles are mainly determined by the surface characteristics of the nanoparticles, the biodistribution and cellular uptake of the nanoparticles can be readily controlled by surface modification.44 Furthermore, the conjugation of targeting ligands allows more accurate diagnosis by providing information on the biological processes of interest.61
61 To date, various targeting ligands including antibodies, 62 aptamers, 63 folic acid, 64 and Arg-Gly-Asp (RGD) peptide 65 have been studied for tumor diagnosis, leading to enhanced binding affinity and specificity. In addition to the targeting ligands, various functional molecules such as fluorescence dyes, radioisotopes, and drugs can also be attached to the nanoparticles, which allows multimodal imaging or simultaneous imaging and therapy (referred to as theragnosis). 66 As described above, there is still enormous potential in the nanoparticle-based MRI contrast agents, and new trials for more sensitive MR imaging are in progress. One of the challenging issues in the development of MR contrast agents lies in overcoming the intrinsic limitations of MRI such as low sensitivity and artifact signals. For example, either hyperintense or hypointense signal can be generated from endogenous factors such as fat, air, bleeding, calcification, or metal deposition, and they are sometimes confused with MR signals generated by contrast agents. 34 To address this issue, T 1 −T 2 dual-mode MRI contrast agents have been introduced by combining superparamagnetic nanoparticles with paramagnetic metal ions (Figure 2c). 67 The dual-mode contrast agents generate bright and dark signals in T 1 -and T 2 -weighted MRI, respectively, enabling the intrinsic ambiguities to be overcome. In addition, sensitivity and accuracy of MRI can be improved by obtaining complementary information using multimodal imaging. 68 Therefore, various methods of preparing multimodal imaging probes have been proposed, including the direct conjugation of fluorescent molecules or radioisotopes, 69 the assembly of magnetic nanoparticles with quantum dots (QDs) or upconversion nanoparticles (UCNPs), 70 and the doping of radioisotopes into magnetic nanoparticles. 71 Another challenging issue is designing the way that MR contrast agents respond to the stimuli of surrounding environments such as pH, temperature, and specific enzymes. For the case of Gd(III)-complex MR contrast agents, conformational changes of their chelate structures in response to various stimuli have been proposed. 72 In contrast, there have been scarce reports on stimuli-responsive nanoparticle-based MR contrast agents because a magnetic field generated by superparamagnetic nanoparticles is not affected by conformational change of ligands, making the contrast effect "always on". On the other hand, clustering of the magnetic nanoparticles can change the T 2 relaxation rate, which is referred to as magnetic relaxation switch (MRS). 73 Because the aggregation of nanoparticles can be induced by specific interactions with target molecules, various small molecules including oligonucleotides, enzymes, and drugs are detected by MRS using MRI scanners and NMR spectrometers. 61 However, in vivo MRS remains very challenging as signal attenuation depends on the nanoparticle concentration as well as the degree of clustering. Recently, it was shown that extremely small iron oxide nanoparticles assembled within pH-responsive polymers can activate the MR signals in acidic conditions (Figure 2d). 74 When the nanoparticles are aggregated, the strong T 2 contrast effect suppresses the T 1 contrast effect.
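How shifts in r 1 and r 2 translate into bright or dark pixels can be made concrete with the textbook spin-echo signal equation. The sketch below is illustrative only: the baseline rates and relaxivities are assumed round numbers, not values from the cited studies.

```python
import numpy as np

# Assumed, order-of-magnitude parameters (illustrative, not from the text)
R1_0, R2_0 = 1.0, 10.0   # baseline tissue relaxation rates (s^-1)
r1, r2 = 5.0, 100.0      # assumed agent relaxivities (mM^-1 s^-1)
C = 0.1                  # local agent concentration (mM)

def spin_echo_signal(R1, R2, TR, TE, M0=1.0):
    """Textbook spin-echo signal: S = M0 * (1 - exp(-TR*R1)) * exp(-TE*R2)."""
    return M0 * (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

# Short TR/TE weights the image by T1; long TR/TE weights it by T2.
for name, TR, TE in [("T1-weighted", 0.5, 0.015), ("T2-weighted", 3.0, 0.100)]:
    baseline = spin_echo_signal(R1_0, R2_0, TR, TE)
    with_agent = spin_echo_signal(R1_0 + r1 * C, R2_0 + r2 * C, TR, TE)
    print(f"{name}: baseline={baseline:.3f}, with agent={with_agent:.3f}")
```

Raising r 1 increases the (1 − exp(−TR·R 1 )) recovery term and brightens T 1 -weighted images, while raising r 2 deepens the exp(−TE·R 2 ) decay and darkens T 2 -weighted images. The aggregated state described above corresponds to the r 2 -dominated regime; the reversal of this balance upon disassembly is described next.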
However, the disassembly of the nanoparticles in acidic conditions leads to an increase in r 1 and a decrease in r 2 , which results in signal enhancement in T 1 -weighted MRI. Although magnetic nanoparticles are not currently available as MR contrast agents for systemic delivery, much attention and effort have been devoted to developing superior nanoparticle-based contrast agents that hold great promise to provide enhanced sensitivity and more accurate diagnosis. Besides MR contrast agents based on iron oxide nanoparticles, lanthanide ion-doped nanoparticles are also strong candidates for novel MRI contrast agents. For example, NaGdF 4 nanoparticles have been developed as multimodal imaging agents for T 1 -weighted MRI, CT, and upconversion imaging. 75 In addition, Dy 3+ and Ho 3+ ions exhibit unique magnetic characteristics such as short electronic relaxation time and large magnetic moment, which are suitable for high-field MRI. 76 Although high-field MRI improves the resolution and sensitivity, the contrast effect of iron oxide nanoparticles is marginally increased because their magnetization is already saturated. On the other hand, the magnetization of Dy 3+ and Ho 3+ is not saturated at the high magnetic field, making NaHoF 4 and NaDyF 4 good candidates as T 2 contrast agents for high-field MRI. Furthermore, it is expected that optimized contrast effect can be obtained by modulating particle size, surface coating, and magnetic field. ■ CT CONTRAST AGENTS Computed tomography (CT) is a medical imaging procedure based on the interaction of X-rays with a body or a contrast agent. 18 While rotating an X-ray tube and a detector, the X-ray intensity is measured from different angles, and cross-sectional (tomographic) images are generated with the aid of a computer using the X-ray intensity profiles. CT is one of the most widely used whole-body imaging techniques owing to its high spatial resolution and rapid image acquisition. As such, it is frequently employed to visualize various anatomical structures and diseases of the brain, lung, cardiovascular system, and abdomen. The innate sensitivity of CT is not sufficiently high for most applications, and thus contrast agents are often required to detect subtle changes in soft tissues. Approximately half of the CT scans in clinics are aided by contrast agents. 77 Since the X-ray attenuation effect of a material generally increases with its atomic number, high-Z elements are preferred as CT contrast agents. 78 To date, barium- and iodine-based contrast agents have been used in clinical situations. Because CT can detect approximately 10 −2 M concentration of a contrast agent, 79 a high dose should be administered, which raises a concern about the toxicity of the contrast agents. For example, although barium sulfate suspension has been administered via the oral route for gastrointestinal imaging for decades, it cannot be used as an intravascular contrast agent due to its renal and cardiovascular toxicity. 80 Iodine-based small molecules such as iopamidol and iodixanol were approved as intravenous CT contrast agents by the Food and Drug Administration of the United States. There are still several concerns regarding the safety of the iodinated contrast agents such as allergic reaction and renal toxicity. 81 In addition, the blood circulation time of the iodinated contrast agents is very short, preventing their preferential accumulation in a lesion.
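For reference, CT image values derive from the exponential attenuation of X-rays and are reported on the Hounsfield scale; these are standard relations added for context, not quoted from the text:

$$ I = I_0\, e^{-\mu x}, \qquad \text{HU} = 1000 \times \frac{\mu - \mu_{\text{water}}}{\mu_{\text{water}}}, $$

where μ is the linear attenuation coefficient of the material and x the path length. A contrast agent works by raising μ in the perfused or targeted tissue enough to shift its HU value detectably above the surrounding background, which is why the modest innate sensitivity of CT demands such high agent concentrations.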
Besides toxicity and pharmacokinetics, barium- and iodine-based CT contrast agents do not exhibit sufficient CT contrast effect at higher X-ray tube voltages. 82 This is because the X-ray attenuation effect of an element sharply increases at its K-edge energy level, and subsequently decreases at higher energy levels (Figure 3a). 83 Many of the current CT scanners are operated at tube voltages ranging from 80 to 140 kV, and high voltages are usually used for large or obese patients. Given that the K-edge energy levels of iodine and barium are 33.2 and 37.4 keV, 83 respectively, there is a large mismatch between the energy required for the peak attenuation and the average energy of X-ray photons emitted from the high-voltage tubes. For elements that have too high K-edge energy levels such as gold (80.7 keV) or bismuth (90.5 keV), their contrast effects are not very strong either, because the majority of the emitted X-ray photons generated by current tubes have lower energy than the K-edge levels of those elements. 82 It is noteworthy that polychromatic X-rays are generated in an X-ray tube, and the tube voltage represents the maximum energy of the generated X-ray photons. Therefore, the contrast effect of an element should be evaluated over a wide range of X-ray energies rather than by an attenuation coefficient at a single energy level (Figure 3b). Recent reports show that materials with intermediate K-edge levels such as ytterbium (K-edge at 61.3 keV) and tantalum (K-edge at 67.4 keV) exhibit higher CT contrast effect compared with iodine. 82 Last but not least, the market price of the CT contrast agents is also a critical factor for the regular use of CT in clinics because a large dose is typically required for each scanning session. For example, although the gold-based CT contrast agents are attractive as an alternative to iodinated contrast agents owing to their good biocompatibility and facile synthesis, 24,25 roughly 50 g of gold is consumed for each whole-body scanning session, which makes the clinical use of gold-based CT contrast agents almost unrealistic in terms of cost. Lanthanides such as ytterbium can be cheaper alternatives, but the industrial production scale of lanthanides is not large enough to provide a sufficient amount of CT contrast agents. 84 Although the radiation dose of CT is a great concern, 85 this does not lower the importance of the contrast agents. By virtue of its fast scan speed, wide availability, and low cost, CT is still the most popular imaging tool. Various CT scanning methods and image reconstruction techniques have been actively developed to overcome current limitations. 86−88 Since contrast agents allow higher conspicuity of images, it is anticipated that optimized contrast agents will reduce both the radiation exposure and the administered dose, leading to safer imaging. In conjunction with novel imaging techniques, the optimized contrast agents also would enable new diagnostic capabilities of CT by providing molecular and cellular information in addition to simple anatomical details. 9 For example, nanoparticles of high-Z elements have been used for imaging of blood vessels, 25 tumors, 27 transplanted cells, 89 and atherosclerosis. 88 Furthermore, development of lanthanide-based imaging 75 (e.g., upconversion optical imaging and T 1 -weighted MRI) and conjugation of fluorescence dyes allow multimodal imaging (Figure 3c).
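The argument above, that contrast should be judged across the whole emitted spectrum rather than at a single energy, can be sketched numerically. Everything below is a toy model: the spectrum shape and attenuation curves are invented for illustration, and only the K-edge positions come from the text; a real comparison would substitute tabulated attenuation data (e.g., from NIST XCOM) for toy_attenuation.

```python
import numpy as np

energies = np.linspace(20.0, 140.0, 1000)   # keV grid spanning typical tube output

def toy_attenuation(E, k_edge):
    """Toy attenuation curve: ~E^-3 photoelectric decay, reduced below the K-edge.
    Purely illustrative; use tabulated data for real element comparisons."""
    return (30.0 / E) ** 3 * np.where(E >= k_edge, 1.0, 0.2)

def toy_spectrum(E, kvp):
    """Kramers-like bremsstrahlung shape; zero above the tube voltage (kVp)."""
    w = np.clip(kvp - E, 0.0, None) / E
    return w / w.sum()

# K-edge energies quoted in the text (keV)
elements = [("iodine", 33.2), ("ytterbium", 61.3), ("gold", 80.7)]
for kvp in (80.0, 140.0):
    w = toy_spectrum(energies, kvp)
    for name, k_edge in elements:
        mean_mu = np.dot(w, toy_attenuation(energies, k_edge))
        print(f"{kvp:5.0f} kVp  {name:9s}  spectrum-weighted attenuation = {mean_mu:.4f}")
```

The point of the exercise is the weighting itself: integrating μ(E) against the polychromatic tube spectrum reveals how much of an element's K-edge boost the emitted photons can actually exploit at a given tube voltage, which is the kind of computation that underlies the reported advantage of intermediate-K-edge elements such as ytterbium and tantalum.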
29 These multifunctional nanoparticles are expected to lead to more accurate diagnosis and facile treatment in combination with image-guided procedures (Figure 3d). Unlike other imaging modalities, CT imaging typically requires a large amount of contrast agent owing to its low sensitivity, which may cause serious side effects. Although most reports on CT contrast agents based on high-Z elements have stated that the nanoparticles are safe, their long-term toxicity has yet to be elucidated. For successful translation into clinical use, it is desirable to develop optimized nanoparticles with a favorable biodistribution profile while maintaining rapid excretion. ■ MULTIPHOTON FLUORESCENCE IMAGING PROBES While whole-body imaging techniques such as MRI and CT play an important role in medical imaging owing to their high resolution and superior penetration depth, their long acquisition time prevents their practical use in real-time monitoring. Fluorescence imaging can be used to overcome such limitations. In general, fluorescence imaging is capable of obtaining high temporal and spatial resolution with good sensitivity. 90,91 The utility of in vivo fluorescence imaging for live animals, however, has been hampered by the shallow penetration depth of light in tissues and decreased spatial resolution that comes from light scattering. For this reason, there have been demands for the development of innovative fluorescence imaging probes and techniques. One of the recent examples of progress in this field is utilizing the anti-Stokes emission process that generates emission light with a shorter wavelength than that of the excitation light. 92,93 If combined with near-infrared (NIR) excitation sources, increased tissue penetration depth, as well as reduced background autofluorescence or light scattering, can be achieved. 94 Multiphoton absorption is a well-known anti-Stokes emission process that has the potential to reduce both the photoinduced damage of samples and photobleaching of fluorophores. 92 Unfortunately, most small molecule-based multiphoton fluorescent dyes still suffer from low photostability, which prevents repeated excitation and prolonged imaging. Therefore, inorganic nanoparticle-based multiphoton fluorescence probes are studied as an alternative due to their improved resistance to photobleaching and relatively facile surface modification with functional molecules. Especially, semiconducting QDs are very attractive in that their emission spectra are tunable and their multiphoton absorption cross sections are much larger than those of traditional fluorescent dyes. 95 There are several other issues that need to be considered to fully make use of the potential of the QD-based multiphoton fluorescence probes in bioimaging, most notably safety and imaging efficiency. While cadmium-containing QDs such as CdSe/CdS/ZnS core−shell nanoparticles have been demonstrated as a two-photon imaging probe, potential toxicity from cadmium is a major concern. To address this issue, cadmium-free QDs have been studied. 96−98 For example, manganese-doped ZnS (ZnS:Mn) nanoparticles have been used in multiphoton imaging. 97 Besides their low toxicity, the manganese dopants change the emission wavelength from ∼430 nm to ∼580 nm, allowing more light to escape from the tissues.
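The resolution and depth advantages described in this section rest on a standard scaling law, stated here for context rather than quoted from the text: the fluorescence generated by simultaneous absorption of n photons grows with the nth power of the excitation intensity,

$$ F_n \propto \sigma_n I^{\,n}, $$

where σ n is the n-photon absorption cross section. Because the intensity I falls off steeply away from the focal point, the I^n dependence confines excitation to a small focal volume, which is what suppresses the out-of-focus excitation and background fluorescence in the two- and three-photon results discussed next.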
The large three-photon absorption cross section of ZnS:Mn nanoparticles, which is 4 orders of magnitude larger than those of ultraviolet (UV) fluorescent dyes, enables the three-photon excitation by a 920 nm NIR laser, allowing deeper tissue penetration compared with two-photon imaging (Figure 4a). Spatial resolution is also much improved due to the reduced out-of-focus excitation and diminished background fluorescence (Figure 4b). Other than the ZnS:Mn nanoparticles, InP/ZnS 99 or CuInS 2 /ZnS QDs 100 are also promising candidates for less-toxic probes. As described earlier, the multiphoton excitation of QDs using NIR laser presents a number of advantages over UV excitation of fluorescent dye, although light scattering from biological tissues remains a problem that limits the fluorescence imaging of deep tissues. To further minimize the light scattering, the second near-infrared (NIR-II) range (1000 to 1700 nm) has been suggested as a better optical window (Figure 4c). 41,42 For instance, ZnS:Mn QDs, which were previously described as a three-photon imaging probe, were shown to have a superior two-photon imaging characteristic under NIR-II excitation (Figure 4d). 101 Compared with the two-photon excitation of ZnS host using a 600 nm laser, direct two-photon excitation of the manganese dopants by a 1050−1310 nm light source can benefit from the large two-photon absorption cross section of the manganese ions and deeper light penetration depth of the NIR-II window. However, the quantum efficiencies of QDs by NIR or NIR-II multiphoton excitation are still very low. The emission light from the QDs is also subjected to the absorption and scattering by the tissues, which further reduces the imaging quality. Finally, multiphoton imaging requires a microscope equipped with an expensive high-power femtosecond pulsed laser as an excitation source, and the laser beam should be focused for scanning, which delays the data acquisition. Therefore, the development of QDs with high multiphoton quantum efficiency as well as the development of imaging techniques for rapid acquisition of high-resolution images is urgent for the wide application of multiphoton imaging. ■ LUMINESCENCE UPCONVERSION IMAGING PROBES Upconversion is another mechanism of the anti-Stokes emission processes, and it has received much attention in recent years to develop novel luminescent probes. 93 Similar to the multiphoton absorption, a NIR excitation source can be used for deeper light penetration and minimal background autofluorescence. Compared with the multiphoton absorption, however, the upconversion mechanism involves the photon absorption through real electronic intermediate states, resulting in a much higher emission efficiency and a longer luminescence lifetime up to several hundred microseconds. 102,103 Therefore, UCNPs can be excited at a low-power density using a continuous-wave laser diode. As such, image scanning by the focused pulsed laser is not necessary, and data acquisition can be performed much faster using wide-field microscopy. 104−107 Unlike the QDs, the emission wavelength of lanthanide-doped inorganic UCNPs is not related to the quantum confinement effect but dependent on the energy levels of individual lanthanide elements. 108,109 Therefore, emission color tuning is achieved by controlling the elemental composition of the UCNPs.
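Alongside emission color, the long upconversion lifetime introduced above provides a second handle, exploited by the time-gated detection discussed next. A small simulation illustrates why; the lifetimes and amplitudes below are assumed order-of-magnitude values (UCNP emission in the hundreds of microseconds, autofluorescence and scatter in the nanoseconds), not measurements from the cited studies.

```python
import numpy as np

tau_ucnp = 200e-6        # assumed UCNP luminescence lifetime (s)
tau_bg = 5e-9            # assumed autofluorescence/scatter lifetime (s)
A_ucnp, A_bg = 1.0, 1e6  # background assumed far brighter at t = 0

t = np.linspace(0.0, 1e-3, 1_000_000)   # 1 ms window at ~1 ns resolution
dt = t[1] - t[0]
ucnp = A_ucnp * np.exp(-t / tau_ucnp)
background = A_bg * np.exp(-t / tau_bg)

def gated_counts(trace, delay):
    """Integrate a decay trace from `delay` onward (a simple detector gate)."""
    return trace[t >= delay].sum() * dt

for delay in (0.0, 1e-6, 10e-6):
    s = gated_counts(ucnp, delay)
    b = gated_counts(background, delay)   # underflows toward 0 for long delays
    print(f"gate delay {delay * 1e6:5.1f} us: signal/background = {s / max(b, 1e-300):.2e}")
```

Because the microsecond-scale UCNP emission persists long after the nanosecond-scale background has decayed, even a one-microsecond gate delay removes essentially all of the background while sacrificing little signal, which is the mechanism behind the contrast gain of time-gated imaging.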
110,111 Luminescence lifetime is also tunable from several to thousands of microseconds by changing the type or the percentage of dopants, 103 which allows multiplex imaging not only by different emission colors but also by different lifetimes. The long luminescence lifetime of the UCNPs is also beneficial to the time-gated fluorescence imaging, where increased image contrast is obtained by separating the UCNP emission from light scattering. 112 Although the upconversion efficiency of the lanthanide-doped inorganic UCNPs is exceeded by that of the organic dye-based UCNPs that use triplet−triplet annihilation upconversion, 113,114 it has been shown that the photon collection efficiency can be enhanced by functionalizing the lanthanide-doped inorganic UCNPs with antenna materials such as NIR dyes (Figure 5a), 115−118 gold nanoshells, 70 or QDs. 119 These antennas can also expand the range of absorption wavelength, allowing the flexible choice of excitation sources. In addition, the more robust chemical stability and photostability of the inorganic UCNPs than those of the upconverting organic dyes in aqueous or ambient air conditions render them better suited for bioimaging applications. Further efforts to enhance the emission efficiency of the lanthanide-doped inorganic UCNPs include the controlled doping of core−shell structured UCNPs with different lanthanide dopants to facilitate the energy transfer, 102 and the high-irradiance excitation of UCNPs, where the luminescence quenching of the dopant ions is alleviated by the strong excitation power. 120 While UCNPs are promising as a new luminescence imaging probe, it is important to point out the current limitations of the UCNPs. First of all, limited tunability of the emission wavelength can restrict the multiplex imaging. Because of the ladder-like energy levels of the lanthanide emitters, there are always multiple emission peaks. For example, erbium ions generate green and red emission, and thulium ions are known to exhibit UV, blue, and NIR emission. 121 Although there have been several reports on obtaining pure emission colors from UCNPs, especially for red, emission wavelength tuning is still a challenging issue that requires further exploration. 122,123 Another issue is the heating effect of the 980 nm NIR laser, which is usually used for the excitation of many types of UCNPs. Since water molecules can absorb the photons of the incident 980 nm laser, the temperature of the surrounding medium may increase. This heating effect may not be noticeable in most cellular imaging situations. 104 However, the high-power laser used for in vivo imaging may induce a thermal change large enough to affect the upconversion luminescence properties of the UCNPs and possibly cause damage to tissues. To minimize the heating effect, alternative excitation wavelengths whose energies are less absorbed by water molecules have been sought. 124 In a recent study, Nd 3+ ions were introduced to UCNPs as a new sensitizer dopant that can be excited at 800 nm with marginal heating effect (Figure 5b). 125−127 Further research on various combinations of sensitizer and host materials is clearly needed to develop an optimized system for clinical use. UCNPs are not free from biosafety issues. Since an increasing number of UCNPs are now studied for in vivo applications, careful evaluation of their potential toxic effects is of great importance.
To date, several systematic studies have been conducted to investigate the in vivo toxicity of UCNPs in mice, 128−132 zebrafish embryos, 133,134 and Caenorhabditis elegans worms, 135−137 and many results suggest little to no toxicity with small doses (e.g., <1 mg/kg). UCNPs indeed can be regarded as safer than the cadmium-containing QDs, but they are not completely safe as an overdose of UCNPs can still induce severe toxicity. Consequently, the administration of UCNPs for bioimaging should be kept as minimal as possible, which again emphasizes the significance of developing efficient probes. ■ CONCLUSION Various inorganic nanoparticles have been developed and used as probes for in vivo biomedical imaging. For MRI and CT, several nanoparticle-based contrast agents have been shown to outperform conventional small molecule-based contrast agents in terms of imaging quality. Moreover, they are less toxic and easier to functionalize with targeting or stimuli-responsive ligands for effective treatment. The use of QDs or UCNPs for optical imaging also works as a good alternative to optical imaging with organic dyes. The higher photostability and the larger absorption cross section of the QDs and UCNPs endow in vivo imaging with high resolution and a good signal-to-noise ratio. Especially, imaging techniques based on the multiphoton or upconversion process can make use of NIR light to obtain the images of deeper tissues. Recently, photoacoustic imaging has also emerged as a promising imaging technique to provide centimeter penetration depth with micrometer resolution. 138 Even though photoacoustic imaging exhibits better tissue penetration capability than anti-Stokes shift-based luminescence imaging, simultaneous imaging of multiple targets is only allowed by luminescence multicolor imaging, which is an advantage of luminescence-based imaging. 111 Despite all these benefits, nanoparticle imaging probes are not yet ready to completely replace the conventional contrast agents or fluorescence/luminescence dyes. Future research should be aimed at improving the efficiencies of the imaging modalities as well as the nanoparticle probes. It is also worth mentioning the possibilities of the nanoparticle probes for multifunctional capabilities. Since the nanoparticle probes are usually composed of various inorganic elements and organic molecules for core material and surface coating, respectively, individual components can be tailored for specific applications (Figure 5c). 139 For example, Gd 3+ ions which exhibit T 1 MRI contrast effects can be doped into UCNPs to produce a luminescence/MRI dual-modal imaging probe. In addition, Lu 3+ (or Yb 3+ ) ions in UCNPs, which have high atomic numbers, can provide higher CT contrast enhancement than iodinated agents. This kind of multimodal imaging strategy is quite useful because the advantages of each imaging modality, such as high sensitivity or high penetration depth, can be combined. Moreover, multimodal imaging probes not only provide the means for complementary imaging of the same region of interest but also can enable the imaging of different regions by individual imaging techniques.
As a result, more comprehensive and reliable diagnosis is possible with smaller quantities of nanoparticle probes than those for separate imaging. Judicious design is necessary, though, as simple integration without any specific purpose may not bring any synergistic effect. In addition to the combination of different imaging modalities, drug molecules can be incorporated onto the surface of the nanoparticle probes using bioconjugation chemistry, producing theranostic agents. There are still plenty of untapped possibilities for such combinations that remain to be realized. As always, biosafety of inorganic nanoparticle probes is critical, and it should be assessed carefully to fully draw out the potential of the nanoparticle probes in bioimaging. While several issues regarding the toxicity, biodistribution, and clearance of nanoparticles in living animals have been investigated for the past decade, our current understanding is still far from complete. A bottom line would be synthesizing nanoparticle probes using less toxic elements and green chemistry if possible. Surface functionality and the overall size of the nanoparticle probes, which are closely related to the physical properties of the nanoparticle probes, are also known to affect the circulation, uptake, distribution, and clearance properties in vivo. Therefore, it is necessary to optimize these factors for the best in vivo results, since the nanoparticle probes with the highest physical performance do not always exhibit the greatest biological efficacy.
2018-04-26T22:59:23.337Z
2018-01-23T00:00:00.000
{ "year": 2018, "sha1": "4a85d433e7e9008ea599cd22be68c19a29fd824c", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acscentsci.7b00574", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a85d433e7e9008ea599cd22be68c19a29fd824c", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
256750851
pes2o/s2orc
v3-fos-license
Prevalence, Seroprevalence and Risk Factors of Avian Influenza in Wild Bird Populations in Korea: A Systematic Review and Meta-Analysis Since the first recorded outbreak of the highly pathogenic avian influenza (HPAI) virus (H5N1) in South Korea in 2003, numerous sporadic outbreaks have occurred in South Korean duck and chicken farms, all of which have been attributed to avian influenza transmission from migratory wild birds. A thorough investigation of the prevalence and seroprevalence of avian influenza viruses (AIVs) in wild birds is critical for assessing the exposure risk and for directing strong and effective regulatory measures to counteract the spread of AIVs among wild birds, poultry, and humans. In this study, we performed a systematic review and meta-analysis, following the PRISMA guidelines, to generate a quantitative estimate of the prevalence and seroprevalence of AIVs in wild birds in South Korea. An extensive search of eligible studies was performed through electronic databases and 853 records were identified, of which, 49 fulfilled the inclusion criteria. The pooled prevalence and seroprevalence were estimated to be 1.57% (95% CI: 0.98, 2.51) and 15.91% (95% CI: 5.89, 36.38), respectively. The highest prevalence and seroprevalence rates were detected in the Anseriformes species, highlighting the critical role of this bird species in the dissemination of AIVs in South Korea. Furthermore, the results of the subgroup analysis also revealed that the AIV seroprevalence in wild birds varies depending on the detection rate, sample size, and sampling season. The findings of this study demonstrate the necessity of strengthening the surveillance for AIV in wild birds and implementing strong measures to curb the spread of AIV from wild birds to the poultry population. Introduction Avian influenza (AI), also known as the "bird flu," a disease caused by influenza type A viruses, affects a wide variety of domestic and wild birds. Based on their pathogenicity in birds, influenza A viruses are classified as either highly pathogenic or low pathogenic avian influenza viruses, known as HPAI and LPAI viruses, respectively [1,2]. Wild birds, particularly migratory aquatic birds of the order Anseriformes (ducks, geese, and swans) and Charadriiformes (shorebirds and gulls) are natural reservoirs of LPAI viruses [3][4][5]. As LPAI viruses primarily replicate in duck intestinal tracts, their transmission among wild birds occurs primarily through the fecal-oral route [1]. LPAI viruses are excreted in feces and have been demonstrated to survive in water for an extended period of time [6]. Thus, waterborne transmission could play a significant role in the spread of LPAI viruses among migratory waterbirds. Generally, AI viruses do not cause disease in wild birds, although subtypes of HPAI viruses can invade and replicate in different organs and may cause severe infections [4,7]. HPAI viruses evolve by mutation when the virus, carried in its mild form by a wild bird, is introduced into poultry [5,8]. These viruses are capable of infecting a wide range of animal species, such as swine, birds, companion animals, marine animals, and humans [7,9]. The transmission of avian influenza viruses (AIVs) from infected wild birds to domestic birds is perceived to occur through the sharing of water sources or the contamination of feed [10,11]. In humans, zoonotic subtypes of AIVs are transmitted mainly through direct contact with infected domestic poultry [11,12]. 
To date, eight AIVs have been reported to infect humans, of which the H5N1 and H7N9 subtypes are associated with high morbidity and mortality in a large number of humans [11,13,14]. In South Korea, the HPAI subtype, H5N1, was first detected in duck meat imported from mainland China in 2000, which resulted in the loss of 4588 tons of meat [15]. From 2003 to 2004, an HPAI outbreak affected 392 chicken and duck farms in South Korea, causing a total discard of 5,285,000 birds, which was equivalent to $458 million [15,16]. In wild birds, the first cases of the H5N1 HPAI virus infection were primarily observed in Hong Kong in late 2002 [17,18]. Since then, multiple AI outbreaks associated with the H5N1 subtype have been reported in Asia, Africa, and Europe, all of which have been ascribed to wild migratory birds [19,20]. These documented cases imply that wild aquatic birds may play a major role in carrying AIVs over long distances via migration. Waterfowl are the most observed migratory birds, and winter birds are predominantly associated with the occurrence of AI in South Korea [2,16]. Although many countries have been able to halt the spread of H5N1 in animal and human populations by conducting regular surveillance and enforcing strict animal health regulations, the virus remains endemic to poultry populations, primarily in low-income countries with inadequate animal health and surveillance facilities. Owing to the rapid evolution of HPAI viruses, their devastating impact on the global poultry industry, and the threat they pose to public health, it is critical to understand the prevalence of AIVs in wild birds for risk assessment and preparedness against future outbreaks. The prevalence and seroprevalence of AIVs in the wild bird populations of South Korea have been reported in various individual studies; however, no attempt has been made to consolidate these studies to derive a robust prevalence estimate of AIVs using a meta-analytical approach. The crucial benefit of meta-analysis is that it combines evidence to achieve a more robust point estimate with a higher statistical power as compared with that obtained from any single study from which the data originated [21,22]. Currently, systematic reviews and meta-analyses are perceived as the best available knowledge sources to make decisions regarding treatment choices [23], and meta-analyses are broadly used to calculate precise estimates of disease frequency, such as disease incidence rates and prevalence proportions [21,24]. In various studies, meta-analysis and regression analysis techniques have been used to generate overall prevalence estimates of infectious agents in animal populations and provide empirical evidence on associated risk factors [11,25,26]. In this study, we performed a systematic review and meta-analysis to estimate the overall prevalence and seroprevalence of AI in wild birds, using data from available studies conducted in South Korea. We hypothesized that the detection rate of AI in wild birds would depend on the sampling period, detection method, sample size, and sample type. Thus, subgroup analysis was adopted to investigate the sources of heterogeneity among the prevalence estimates reported by individual studies using the above-mentioned variables.
Study Design and Systematic Review Protocol A systematic review and meta-analysis were performed in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines [27] to determine the prevalence and seroprevalence of AI in wild birds in South Korea (Table S1). The review question was structured in accordance with the "population, exposure, comparator, and outcome" (PECO) format. In this systematic review, the "population of interest" refers to the wild birds, and "exposure" refers to the AIVs. As this study is a systematic review and meta-analysis of prevalence, the category of "comparator" was not relevant to this study. The "outcomes of interest" included the detected prevalence and seroprevalence of AIVs in wild birds in South Korea. Literature Search Strategy An extensive literature search was conducted with no language restriction using MEDLINE (via PubMed), Scopus, Web of Science, and South Korean databases, such as RISS and KISS, to identify studies published between 1980 and 2021. The last literature search was conducted on 23 December 2021. The following keywords: (wild bird* OR migratory bird* OR waterfowl OR Galliformes OR Charadriiformes OR Anseriformes) AND (avian influenza* OR AI OR bird flu OR avian flu OR influenza A virus OR AIV) AND (Korea OR South Korea) AND (prevalence OR inciden* OR proportion OR cases OR surveillance OR seroprevalence) were used to find eligible studies on the prevalence and seroprevalence of AIVs in wild birds in South Korea. An asterisk was used to extend a search term to related words with the same meaning (e.g., inciden* for incidence and incident). Eligibility and Exclusion Criteria In this systematic review and meta-analysis, the inclusion criteria were as follows: cross-sectional studies, primary studies conducted in South Korea, studies that assessed the prevalence and/or seroprevalence of AI in wild birds, studies that reported the sample size and the number of positive samples or the prevalence/seroprevalence rate, and studies with virus-isolation data. Studies were excluded if they were not conducted in South Korea, if samples were collected from animals other than wild birds, and if they did not report the total number of samples alongside the number of positive samples detected or the exact calculated prevalence rate. The titles and abstracts were screened for suitability using predetermined criteria. The full texts of potentially relevant articles were obtained and evaluated. Data Extraction Data on the prevalence and seroprevalence of AI in wild birds in South Korea were extracted by two independent reviewers, and any disagreements were resolved through discussion and consensus. From all eligible studies, information regarding the first author, year of publication, publication status (i.e., published or non-published), sample type (i.e., feces, cloacal swabs, carcass, or blood), detection method, sampling season, sampling location, bird species, detected AI subtype, sample size, and the number of positive samples was extracted. Data were extracted and organized into a pre-developed Microsoft Excel spreadsheet. Risk of Bias Assessment The eligible studies were assessed for internal and external validity by two independent reviewers using the Joanna Briggs Institute (JBI) critical appraisal tools for prevalence studies [28,29]. Each study was classified as having a low, high, or unclear risk of bias. 
The checklist contained nine questions, but only eight were evaluated because one question (regarding the response rate) was irrelevant to this study. Data Synthesis Data analysis was conducted using R version 4.1.2 (R Studio version 1.4) software [30,31]. The meta-analysis was performed and the forest plots were generated using the "meta" and "metafor" packages [32][33][34]. The total number of samples collected and the number of positive samples detected in each study were used to calculate overall prevalence estimates. To fulfill the assumption of a normal distribution, the logit transformation method was applied to the data [24,26,35] using the following formula: logit(p) = ln(p/(1 − p)), with variance var(logit(p)) = 1/(np) + 1/(n(1 − p)), where "n" is the total sample size and "p" is the prevalence of the pathogen under study. A generalized linear mixed model, together with a logit transformation, demonstrates better performance; different studies recommend the use of this approach, which was adopted in this study to pool the data [35,36]. A random effects model was used to generate the pooled prevalence and seroprevalence of AIV in wild birds in South Korea. To combine the study estimates, the between-study variance (τ 2 ) was estimated using the maximum likelihood method. The overall effect size of the logit model and its corresponding 95% confidence interval (CI) were calculated and back-transformed to prevalence rates for ease of interpretation. The between-study heterogeneity was assessed using the Q test and I 2 statistic, which accounts for the amount of the observed variance that reflects the variance in true effects rather than sampling error [37]. The heterogeneity between studies was considered substantially high if the Q test yielded a statistically significant p-value (p < 0.05) and I 2 was greater than 50%. To investigate the reason for heterogeneity, a subgroup analysis was undertaken using four pre-specified variables, including sampling season (i.e., fall/winter and spring/summer), sample size (i.e., more than 1000 or less than 1000), sample type (i.e., feces, cloacal swabs, carcass, and blood), and detection method (i.e., ELISA, reverse transcription-polymerase chain reaction (RT-PCR), rRT-PCR, hemagglutination (HA) test, virus isolation, hemagglutination inhibition (HI) test, and agar gel precipitation test (AGPT)) that could potentially affect the reported prevalence in the literature. Publication bias was assessed through visual inspection of the symmetry of the contour-enhanced funnel plots, and a quantitative estimate of publication bias was performed using Egger's regression test [38,39]. Where publication bias was confirmed, the Duval and Tweedie trim-and-fill method was used to estimate an unbiased effect by imputing missing studies in the funnel plot [40]. Search Results Initially, 853 records were obtained by conducting an electronic database search. After duplicates were removed, 434 studies remained, and their titles and abstracts were reviewed for eligibility. After the title and abstract screening, 337 of the 434 records were removed. The remaining 97 studies were subjected to full-text screening, of which 48 were deemed irrelevant to this study, and the remaining 49 were finally included in the quantitative synthesis (meta-analysis). The study selection process is illustrated in Figure 1. Study Characteristics Among the 49 studies eligible for the meta-analysis, 39 assessed the prevalence and 10 assessed the seroprevalence of AIVs in wild birds in South Korea.
Of the prevalence studies, 24 studies were published in peer-reviewed journals and 15 were non-published records (e.g., government reports, research institute reports, and student dissertations). Regarding the sampling season, 16 studies collected samples in fall and winter, whereas the other 23 studies did not report the sampling season. The sample types included feces (30 trials), carcasses (10 trials), cloacal swabs (10 trials), and combinations of samples (2 trials); two trials did not specify the type of samples used. Samples were collected from the Anseriformes (10 trials), Charadriiformes (5 trials), other species (9 trials), and non-reported bird species (36 trials). Regarding seroprevalence studies, three studies were published in South Korean or international academic journals, and the other seven were non-published records. Regarding the sampling season, three studies collected samples in the fall and winter, whereas the other seven studies did not report the sampling season. Blood samples were collected from the Anseriformes species (8 trials), Charadriiformes (5 trials), other species (8 trials), and non-reported species (3 trials). The characteristics of studies included in this systematic review and meta-analysis are summarized in Tables S2 and S3. Prevalence Estimates Thirty-nine studies investigated the prevalence of AIVs in wild birds in South Korea (Figure 3). Overall, the pooled prevalence was estimated to be 1.57% (95% CI: 0.98, 2.51) with high between-study heterogeneity (I 2 = 100%). Subgroup analyses were performed to investigate the sources of this heterogeneity. Seroprevalence Estimates Ten studies assessed the seroprevalence of AIVs in wild birds in South Korea. The pooled seroprevalence estimate was 15.91% with a 95% CI of 5.89-36.38 (Figure 4). Between-study heterogeneity was significantly high (I 2 = 100%). To identify the reasons for heterogeneity, we conducted subgroup analysis using bird species, detection method, sample size, publication status, and sampling season as potential effect modifiers.
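Before turning to the individual subgroup results, the pooling machinery behind these estimates can be sketched in code. Note the deliberate simplifications: the analysis in this paper fit a generalized linear mixed model with maximum-likelihood estimation of τ 2 using R's meta and metafor packages, whereas the sketch below uses the classical inverse-variance approach with the DerSimonian–Laird estimator on logit-transformed proportions, and the study counts are invented for illustration (they are not the studies of Tables S2 and S3).

```python
import numpy as np

# Hypothetical (positives, sample size) pairs -- illustrative data only
studies = [(12, 800), (3, 450), (40, 2600), (7, 300), (21, 1500)]
x = np.array([s[0] for s in studies], dtype=float)
n = np.array([s[1] for s in studies], dtype=float)
p = x / n

# Logit transform and its approximate within-study variance
y = np.log(p / (1.0 - p))
v = 1.0 / x + 1.0 / (n - x)      # equals 1/(n*p) + 1/(n*(1 - p))

# DerSimonian-Laird estimate of between-study variance tau^2, plus Q and I^2
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)
I2 = max(0.0, (Q - df) / Q) * 100.0

# Random-effects pooled logit, then back-transform to a prevalence with 95% CI
w_re = 1.0 / (v + tau2)
mu = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
expit = lambda z: 1.0 / (1.0 + np.exp(-z))
print(f"pooled prevalence = {expit(mu):.3%} "
      f"(95% CI {expit(mu - 1.96 * se):.3%}, {expit(mu + 1.96 * se):.3%}); "
      f"I^2 = {I2:.1f}%, tau^2 = {tau2:.3f}")
```

The back-transformation expit(·) converts the pooled logit and its confidence bounds to the prevalence scale, and I 2 = (Q − df)/Q expresses the share of observed variance attributable to true between-study differences rather than sampling error.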
All variables had a significant influence on seroprevalence rates, except for publication status (Table 2). Regarding bird species, the highest seroprevalence was detected in the Anseriformes species (30.45% (95% CI: 18.97, 45.03)), followed by the Charadriiformes and non-reported species with seroprevalence estimates of 2.95% (95% CI: 0.24, 27.43) and 2.85% (95% CI: 1.17, 6.76), respectively. The lowest seroprevalence was detected among other bird species (i.e., species other than Anseriformes and Charadriiformes), with an estimate of 2.83% (95% CI: 0.40, 17.26). Heterogeneity was still high within all subgroups (I 2 > 90%), except for the Charadriiformes species (I 2 = 49%). Based on the detection method, the highest seroprevalence, 31.47% (95% CI: 20.47, 45.02), was detected by ELISA, whereas the lowest seroprevalence of 2.46% (95% CI: 1.12, 5.31) was indicated by the HI test. The sample size also demonstrated a significant association with the seroprevalence rate, with the highest seroprevalence (30.93% (18.48, 46.93)) observed in studies with less than 1000 samples compared with 5.03% (1.25, 18.20) observed in those with more than 1000 samples (p < 0.01). Subgroup analyses also revealed that seroprevalence was higher among studies that collected samples from fall to winter (36.48% (24.05, 51.01)) than in studies that did not report the sampling season (10.48% (3.46, 27.66)) (p < 0.02). Regarding the publication status, no significant difference in seroprevalence was observed between the published (9.07% (1.91, 33.78)) and non-published studies (19.85% (7.52, 43.01)) (p < 0.37). The results of the subgroup analysis are presented in Table 2. Publication Bias Publication bias occurs when the likelihood of a study being published is influenced by its findings. In contrast to smaller studies with low effects, larger studies with relatively high effects are more likely to be published because they are statistically significant. This results in publication bias. To assess the presence of publication bias, contour-enhanced funnel plots were generated with the effect sizes on the x-axis and their standard errors on the y-axis (Figure 5). On visual inspection, the studies were symmetrically distributed on both sides of the mean effect and demonstrated significant results (p < 0.05). This symmetrical pattern suggests that a publication bias is unlikely. To avoid subjective inferences from funnel plot visualizations, Egger's regression test was applied to quantify the presence of funnel plot asymmetry. Egger's regression test yielded p-values of 0.094 and 0.506 for prevalence and seroprevalence outcomes, respectively, indicating no funnel plot asymmetry; hence, publication bias was not confirmed.
Discussion AIVs in wild birds pose a pandemic threat to humans and the poultry industry worldwide. Previous studies have confirmed the relationship between the wild bird migratory route and AIV prevalence in South Korea by evaluating the geographical distributions of HPAI outbreaks and cases of mortality in wild birds [2,79]. Therefore, it is of critical importance to understand the current status of AI prevalence and seroprevalence in wild birds for use as an early warning system. In this study, we performed a systematic review and meta-analysis to consolidate the data from individual primary studies that evaluated the prevalence and seroprevalence of AIVs in wild birds in South Korea. The overall prevalence was estimated to be 1.568% (0.976; 2.510), indicating that approximately 2% of the wild bird population in South Korea are carriers of AIVs. According to the census of winter migratory birds conducted in South Korea, approximately 1.63 million winter birds visited South Korea in 2020 [80]. Of these, 850,000 birds belonged to the order Anseriformes and accounted for 52% of the total. Based on the results of this meta-analysis, it can be estimated that approximately 32,600 migratory birds in South Korea carry AIVs. Chen et al. (2019) discovered that the prevalence and seroprevalence of AI were 2.5% and 26.5%, respectively, in wild birds in China [81]. The relatively low prevalence of AIVs in wild birds in South Korea is consistent with the knowledge that South Korea is not a breeding site, but rather a wintering area for adult wild birds, particularly waterfowl, such as ducks and geese [17]. On the other hand, the seroprevalence estimate was 15.911% (5.891; 36.383), suggesting that approximately 16% of the wild bird population in South Korea has been exposed to AIVs. As the antibody-positive cases included individuals that had recovered from AIV, the seroprevalence would tend to be relatively high compared with the prevalence. Another possible explanation for the high seroprevalence of AIVs in wild birds is that during the migration route, the migratory birds aggregate at nesting and feeding sites, which results in high rates of contact between birds, facilitating AIV transmission and a high prevalence of antibodies in the bird population [82,83].
It is, therefore, likely that wild birds arriving in South Korea will have had repeated exposure to AIVs, which leads to the persistence of anti-AIV antibodies over long periods in their bodies. The antibodies detected in wild birds could only be the result of seroconversion induced by a natural viral infection, as they are not immunized against AIV. Thus, they could play a critical role in spreading the virus to the surrounding environment, livestock, and humans. Of the six variables used in the subgroup analysis, two (bird species and sample type) showed a significant influence on the prevalence rate of AIVs in wild birds. In contrast, four variables (bird species, detection method, sample size, and sampling season) showed a significant relationship with seroprevalence rates. Small studies (less than 1000 samples) demonstrated higher prevalence and seroprevalence rates than large studies (more than 1000 samples). This could be related to the fact that studies with small sample sizes are associated with higher effect sizes than bigger studies. Another possible reason is that the larger studies included in the analysis are mostly the non-published government reports that collected samples from different provinces of the country as part of a normal AIV surveillance routine, thus reducing the chance of getting positive samples compared with small studies that mainly collected samples from specific locations during or after an HPAI outbreak, thus increasing the probability of getting more positive samples. Regarding the species of wild birds, the highest AIV prevalence and seroprevalence rates were detected in the Anseriformes species compared to others. These results are in line with previous reports that waterfowl are the predominant migratory birds and are primarily associated with AI occurrence in South Korea [2,16]. Similar findings were also reported in China, where the highest AIV prevalence (6.8%) and seroprevalence (41.8%) were observed in the Anseriformes species compared with that in non-Anseriformes species [81]. Based on the sample type, the highest prevalence rate was revealed in carcasses compared to other sample types (p < 0.01). One possible reason for these results is that carcass samples were collected during or shortly after the 2014 HPAI outbreak (H5N8) in South Korean duck and chicken farms and wild birds found in the Donglim reservoir, Jeonbuk province [2,51,68,69]; most of these carcasses were confirmed to have died from the HPAI virus (H5N8) clade 2.3.4.6 [69,84]. Considering the detection method, the highest seroprevalence was detected by ELISA rather than the HI test (p < 0.01). This difference in performance could be due to the low sensitivity of the HI test for detecting AIV antibodies, particularly the H5N1 and H3N2 serotypes [85,86]. Furthermore, the highest seroprevalence rate detected during the fall-to-winter season is consistent with the National Institute of Environmental Research report "Surveillance and monitoring of wildlife diseases in Korea, 2012," which states that the prevalence of AIVs increases from October to December (stage 1), when waterfowl migrate from the north, and in April (stage 4), when passing migratory birds are moving to the north [2]. Surprisingly, the results of the subgroup analysis confirmed no significant influence of the sampling season on the prevalence estimates. 
However, this should be interpreted with caution, as many studies included in the assessment of the prevalence did not clearly report the sampling season, and many studies fell into a subgroup of "not reported." Consequently, this could have limited the power of the statistical tests to detect significance where it existed. In addition to the above-mentioned moderators, prevalence rates in birds are likely to vary depending upon the surveillance period, the sampling region, and whether surveillance was performed in response to an outbreak or conducted as routine surveillance [11]. As most of the studies included in this meta-analysis did not provide clear information about these variables, we could not evaluate their contribution to the observed prevalence and seroprevalence rates. Although the prevalence and seroprevalence estimates between subgroups were significantly different, the within-subgroup heterogeneity was substantially high, indicating that none of the variables could entirely explain the reasons for between-study heterogeneity. This study had a few limitations. First, there was substantial variation in the prevalence rates among individual studies. Although we used several moderators to investigate the source of heterogeneity, only a few studies clearly reported on these variables, and a large number fell into the "not reported" subgroup. Furthermore, insufficient information was available to adequately categorize studies based on the reason for surveillance (in response to an outbreak or as routine surveillance), surveillance period, and sampling region, which are also relevant covariates that could possibly demonstrate significant relationships with the observed pooled prevalence estimates. Despite these limitations, the findings of this meta-analysis provide a more robust estimate of AIV prevalence and seroprevalence in wild birds in South Korea than that obtained from a single study. Conclusions In conclusion, this study provides solid evidence for the current prevalence of AIVs among the South Korean wild bird population. These findings demonstrated that a large number of wild birds in South Korea, particularly those of the order Anseriformes, are carriers of AIVs, and others have already been exposed to AI because of the high detection rate of anti-AIV antibodies. This poses a threat to the poultry industry and, potentially, to humans in South Korea, due to the critical role of wild birds in the spread of AIV. Furthermore, migratory wild birds have different flyways, which affect the distribution of AIVs in different countries. A multi-country surveillance system would provide detailed information on the prevalence and distribution of AIVs in this region. The evidence from this study highlights the need to strengthen existing preventive measures and increase surveillance activities to mitigate the risk of AIV transmission from wild birds to domestic poultry and human beings. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/v15020472/s1, Table S1. PRISMA 2020 checklist; Table S2. Characteristics of 39 studies included in the meta-analysis of prevalence of avian influenza viruses in wild birds in South Korea; Table S3. Characteristics of 10 studies included in the meta-analysis of seroprevalence of avian influenza viruses in wild birds in South Korea.
Attention over Heads: A Multi-Hop Attention for Neural Machine Translation

In this paper, we propose a multi-hop attention for the Transformer. It refines the attention for an output symbol by integrating that of each head, and consists of two hops. The first hop attention is the scaled dot-product attention, the same attention mechanism used in the original Transformer. The second hop attention is a combination of multi-layer perceptron (MLP) attention and a head gate, which efficiently increases the complexity of the model by adding dependencies between heads. We demonstrate that the translation accuracy of the proposed multi-hop attention outperforms the baseline Transformer significantly, by +0.85 BLEU points for the IWSLT-2017 German-to-English task and +2.58 BLEU points for the WMT-2017 German-to-English task. We also find that the number of parameters required for a multi-hop attention is smaller than that for stacking another self-attention layer, and that the proposed model converges significantly faster than the original Transformer.

Introduction

Multi-hop attention was first proposed in end-to-end memory networks (Sukhbaatar et al., 2015) for machine comprehension. In this paper, we define a hop as a computational step which may be performed many times for an output symbol. By "multi-hop attention", we mean that some kind of attention is calculated many times for generating an output symbol. Previous multi-hop attention can be classified into "recurrent attention" (Sukhbaatar et al., 2015) and "hierarchical attention" (Libovický and Helcl, 2017). The former repeats the calculation of attention to refine the attention itself, while the latter integrates attentions over multiple input information sources. The proposed multi-hop attention for the Transformer differs from previous recurrent attention because the mechanisms for the first hop attention and the second hop attention are different. It also differs from previous hierarchical attention because it is designed to integrate attentions from different heads over the same information source.

In neural machine translation, hierarchical attention (Bawden et al., 2018; Libovický and Helcl, 2017) can be thought of as a multi-hop attention because it repeats attention calculation to integrate the information from multiple source encoders. On the other hand, in the Transformer (Vaswani et al., 2017), the state-of-the-art model for neural machine translation, a feed-forward neural network (FFNN) integrates information from multiple heads. In this paper, we propose a multi-hop attention mechanism as a possible alternative for integrating information from multi-head attention in the Transformer.

We find that the proposed Transformer with multi-hop attention converges faster than the original Transformer. This is likely because all heads learn to influence each other, through a head gate mechanism, in the second hop attention (Figure 1). Recently, many Transformer-based pretrained language models such as BERT have been proposed, and they take about a month to train. The speed at which the proposed model converges may be even more important than the fact that its accuracy is slightly better.

2 Multi-Hop Multi-Head Attention for the Transformer

Multi-Head Attention

One of the Transformer's major successes is multi-head attention, which allows each head to capture different features and achieve better results compared to a single-head case. Given the query Q, the key K, and the value V, they are divided into each head, and scaled dot-product attention is computed per head:

$$a^{(h)} = \mathrm{softmax}\!\left(\frac{Q^{(h)} {K^{(h)}}^{\top}}{\sqrt{d}}\right) V^{(h)} \quad (1), \qquad m = \left[a^{(1)}; \ldots; a^{(H)}\right] W^{O}$$
Here, h (= 1, ..., H) denotes the index of the head, a^(h) is the output of scaled dot-product attention, W^O is a parameter for a linear transformation, and d is a scaling factor. Finally, the output of multi-head attention, m, is input to the next layer. The calculation of attention using scaled dot-product attention is defined as the first hop (Figure 1).

Multi-Hop Attention

In the original Transformer (Vaswani et al., 2017), information from each head is integrated by simple concatenation followed by a linear transformation. Attention is refined by stacking the combination of a self-attention sub-layer and a position-wise feed-forward neural network sub-layer. However, as layers are stacked, convergence becomes unstable; consequently, there is a limit to the iterative approach of adding layers. Therefore, we propose a mechanism that repeats the calculation of attention without stacking layers.

The original Transformer can be considered to consist of six single-hop attention layers: six self-attention layers in both the encoder and the decoder, and six source-to-target attention layers in the decoder. In the proposed method, by contrast, some layers have two-hop attention. The first hop attention of the multi-hop attention is equivalent to the calculation of scaled dot-product attention (Equation 1) in the original Transformer. The second hop attention consists of multi-layer perceptron (MLP) attention and a head gate, as shown in Figure 1. First, MLP attention between the output of the first hop, a^(h)_i, and the query, Q, is calculated. Attention is considered as the calculation of a relationship between the query and the key/value; in the second hop, attention is therefore calculated again using the output of the first hop, rather than the key/value. The head gate (Equations 4 and 5 in Figure 1) normalizes the attention score of each head to β^(h)_i using the softmax function, where h ranges over all heads. In hierarchical attention (Bawden et al., 2018), the softmax function is used to select a single source from multiple sources; here, the proposed head gate uses the softmax function to select a head from multiple heads. Although Vaswani et al. (2017) reported that dot-product attention is superior to MLP attention, we used MLP attention in the second hop of the proposed multi-hop attention because it can learn the dependence between heads by appropriately tuning the MLP parameters. We conclude that we can increase the expressive power of the network more efficiently by adding the second hop attention layer, rather than by stacking another single-hop multi-head attention layer.

Data

We used German-English parallel data obtained from the IWSLT2017 and WMT17 shared tasks. The IWSLT2017 training, validation, and test sets contain approximately 160K, 7.3K, and 6.7K sentence pairs, respectively. There are approximately 5.9M sentence pairs in the WMT17 training dataset. For the WMT17 corpus, we used newstest2013 as the validation set and newstest2014 and newstest2017 as the test sets. For tokenization, we used the subword-nmt tool (Sennrich et al., 2016) with a vocabulary size of 32,000 for both German and English.

Experimental Setup

In our experiments, the baseline was the Transformer (Vaswani et al., 2017) model. We used the fairseq toolkit (Gehring et al., 2017), and the source code will be available at our GitHub repository.
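To make the two-hop computation described above concrete, the following is a minimal NumPy sketch. It is illustrative only, not the fairseq implementation: the exact parameterisation of the second-hop MLP attention and head gate (the paper's Equations 2-5 are not reproduced in this extraction) is an assumption, as are the weight names W_mlp, v_mlp, and W_O.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_hop_attention(Q, K, V, H, W_mlp, v_mlp, W_O):
    """Two-hop attention sketch. Q, K, V: (n, d_model) arrays; H: heads.
    W_mlp and v_mlp are assumed parameters of the second-hop MLP attention;
    W_O is the output projection."""
    n, d_model = Q.shape
    d = d_model // H                              # per-head dimension
    heads = []
    for h in range(H):
        s = slice(h * d, (h + 1) * d)
        # First hop: scaled dot-product attention per head (Equation 1).
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d))
        heads.append(A @ V[:, s])                 # a^(h)
    a = np.stack(heads, axis=1)                   # (n, H, d)

    # Second hop: MLP (additive) attention between a^(h) and the query,
    # scored per head; the head gate then normalises the scores across
    # heads with a softmax so heads reweight one another.
    q = Q.reshape(n, H, d)
    e = np.tanh(np.concatenate([a, q], axis=-1) @ W_mlp) @ v_mlp  # (n, H)
    beta = softmax(e, axis=1)                     # head gate weights
    return (a * beta[..., None]).reshape(n, d_model) @ W_O

# Toy usage with random weights:
rng = np.random.default_rng(0)
n, d_model, H = 5, 16, 4
Q, K, V = (rng.normal(size=(n, d_model)) for _ in range(3))
W_mlp = rng.normal(size=(2 * (d_model // H), 8))
v_mlp = rng.normal(size=8)
W_O = rng.normal(size=(d_model, d_model))
print(multi_hop_attention(Q, K, V, H, W_mlp, v_mlp, W_O).shape)  # (5, 16)
```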
For training, we used the Adam optimizer with a learning rate of 0.0003. The embedding size was 512, the hidden size was 2048, and the number of heads was 8. The encoder and the decoder each had six layers. The number of tokens per batch was 2,000. The numbers of training epochs for IWSLT2017 and WMT17 were 50 and 10, respectively. In all experiments using IWSLT2017, models were trained on an Nvidia GeForce RTX 2080 Ti GPU, while in all experiments using WMT17, models were trained on an Nvidia Tesla P100 GPU.

Results

Results of the evaluation are presented in Tables 1 and 2. In Table 2, the proposed multi-hop attention is used only at the fourth layer in the encoder. In the evaluation of German-to-English translation for IWSLT2017, the proposed method achieved a BLEU score of 34.31, significantly outperforming the Transformer baseline, which returned a BLEU score of 33.46. For WMT17, the proposed method achieved a BLEU score of 23.91, also significantly outperforming the Transformer baseline, which returned a BLEU score of 21.33. For the IWSLT2017 German-to-English and English-to-German translation tasks, various conditions were investigated, as shown in Table 1. The best models are shown in Figure 2. The baseline training time was 1143.2 s per epoch for IWSLT2017 German-to-English translation, and the training time for the proposed method was 1145.6 s per epoch. We found that the increase in the number of parameters did not affect training time.

Difference between Multi-Hop and 7-layer Stacked Transformer

Table 3: Difference between 6-layer Transformer with multi-hop and 7-layer stacked vanilla Transformer

We compared the proposed method with the original Transformer. Table 3 shows the translation accuracies when the number of layers in the encoder and decoder was varied from 4 to 7. Here, Vanilla refers to the original Transformer, and Multi-hop refers to the proposed method, where the multi-hop attention layer is used at the fourth layer in the encoder. As shown in Table 3, the 7-layer model's BLEU score is lower than that of the 6-layer model. In the experiments, the numbers of parameters required by the 6- and 7-layer models were 55,459K and 62,816K, respectively, while the number of parameters for the multi-hop method was 55,492K. The proposed method increases the number of parameters by only about one percent compared to simply stacking one more multi-head layer. Thus, it is evident that simply increasing the number of parameters and repeating the attention calculation does not necessarily improve performance. On the other hand, the proposed method does not improve the BLEU score when the number of layers is four or five. This is probably because the parameters of each head in the baseline Transformer are likely to converge properly when there are relatively few parameters. Another interpretation is that the normalization among heads forced by the proposed method acts as noise. In conclusion, the proposed method demonstrates that appropriate connections can be obtained by recalculating attention in the layer where the heads have dependencies.

Table 1 shows the effect of introducing second hop attention at various positions in the encoder. The second column shows the positions where the second hop attention is used. The best result was obtained when the second hop attention was used only at the fourth layer in the encoder. Performance decreased as the second hop attention was introduced to more layers; the worst result was obtained when using the second hop in all layers (second hop in layers 1-6).
Further studies are needed to elucidate the relationship between performance and the position of the second hop attention.

Table 5 shows the validation loss of models for the IWSLT2017 German-to-English translation task with second hop layers whose dropout rate is 30%. All models have six layers, and the positions of the second hop layers range from all six layers down to only the sixth layer. It should be noted that, in the first epoch (row 1, Table 5), the model with the second hop in all layers has the lowest validation loss, while the baseline model has the highest. Figure 2(a) shows the learning curves based on the same data as Table 5. It is apparent that the models with the second hop converge faster than the baseline model. Figure 2(b) is an enlarged view of Figure 2(a), focused on the lowest validation loss for the different models. We find that the validation loss is lower when there are fewer second hop attentions. Figure 3 shows the learning curves for the models with multi-hop attention used only once, at any one of layers 1 to 6. We find that the model with second hop attention in layer 6 converges fastest. In terms of convergence, as opposed to accuracy, it therefore seems appropriate to use second hop attention only in the last (sixth) layer of the encoder.

Related Work

The mechanism of the proposed multi-hop attention for the Transformer was inspired by the hierarchical attention in the multi-source sequence-to-sequence model (Libovický and Helcl, 2017). The term "multi-hop" is borrowed from the end-to-end memory network (Sukhbaatar et al., 2015), and the title "attention over heads" is inspired by the Attention-over-Attention neural network (Cui et al., 2017). Ahmed et al. (2018) proposed the Weighted Transformer, which replaces multi-head attention with multiple self-attention branches that learn to combine during the training process. They reported that it slightly outperformed the baseline Transformer (0.5 BLEU points on the WMT 2014 English-to-German translation task) and converges 15-40% faster. They linearly combined the multiple sources of attention, while we combine multiple attentions non-linearly using the softmax function in the second hop.

It is well known that the Transformer is difficult to train (Popel and Bojar, 2018). As it has a large number of parameters, it takes time to converge, and sometimes it does not converge at all without appropriate hyperparameter tuning. Considering the experimental results of our multi-hop attention and of the Weighted Transformer, an appropriate design of the network that combines multi-head attention could result in faster and more stable convergence of the Transformer. As the Transformer is used as a building block for recently proposed pre-trained language models such as BERT (Devlin et al., 2019), which takes about a month to train, we think it is worthwhile to pursue this line of research, including the proposed multi-hop attention.

The Universal Transformer (Dehghani et al., 2019) can be thought of as variable-depth recurrent attention. It obtained Turing-complete expressive power in exchange for a vast increase in the number of parameters and training time. As shown in Table 4, we have proposed an efficient method to increase the depth of recurrence in terms of the number of parameters and training time. Recently, Voita et al. (2019) and Michel et al. (2019) independently reported that only a certain subset of the heads plays an important role in the Transformer.
They performed analyses by pruning heads from an already trained model, while we have proposed a method that assigns weights to heads automatically. We assume our method (multi-hop attention, or attention-over-heads) selects important heads in the early stage of training, which results in faster convergence than the original Transformer.

Conclusion

In this paper, we have proposed a multi-hop attention mechanism for the Transformer in which all heads depend on each other repeatedly. We found that the proposed method significantly outperforms the original Transformer in accuracy and converges faster, with little increase in the number of parameters. In future work, we would like to apply the multi-hop attention mechanism to the decoder side and investigate other language pairs.
Flight heights obtained from GPS versus altimeters influence estimates of collision risk with offshore wind turbines in Lesser Black-backed Gulls Larus fuscus

The risk posed by offshore wind farms to seabirds through collisions with turbine blades is greatly influenced by species-specific flight behaviour. Bird-borne telemetry devices may provide improved measurement of aspects of bird behaviour, notably individual and behaviour-specific flight heights. However, the use of flight height data from devices that rely on GPS or barometric altimeters is constrained by a current lack of understanding of the error and calibration of these methods. Uncertainty remains regarding the degree to which errors associated with these methods affect recorded flight heights, which may in turn have a significant influence on estimates of collision risk produced by Collision Risk Models (CRMs), which incorporate flight height distributions as an input. Using GPS/barometric-altimeter-tagged Lesser Black-backed Gulls Larus fuscus from two breeding colonies in the UK, we examine comparative flight heights produced by these devices, and their associated errors. We present a novel method of calibrating barometric altimeters using behaviour characterised from GPS data and open-source modelled atmospheric pressure. We examine the magnitude of the difference between offshore flight heights produced from GPS and altimeters, comparing these measurements across sampling schedules, colonies, and years. We found flight heights produced from altimeter data to be significantly, although not consistently, higher than those produced from GPS data. This relationship was sustained across differing sampling schedules of five minutes and of 10 s, and between study colonies. We found that the magnitude of difference between GPS- and altimeter-derived flight heights also varied between individuals, potentially related to the robustness of the calibration factors used. Collision estimates for theoretical wind farms were consequently significantly higher when using flight height distributions generated from barometric altimeters. Improving confidence in telemetry-obtained flight height distributions, which may then be applied to CRMs, requires the sources of error in these measurements to be identified. Our study improves knowledge of the calibration processes for flight height measurements based on telemetry data, with the aim of increasing confidence in their use in future assessments of collision risk and reducing the uncertainty in predicted mortality associated with wind farms.

Supplementary Information: The online version contains supplementary material available at 10.1186/s40462-023-00431-z.
Introduction

European governments have pledged to reduce their national carbon emissions in an effort to slow the effects of climate change [1]. To achieve targets of net zero emissions by 2050, countries including the UK are constructing offshore wind turbines for electrical energy generation [2]. Turbine blades pose a potential collision risk to seabirds, and estimating the number of collision mortalities that might result from the development of a wind farm is an important aspect of Environmental Impact Assessments (EIAs). The extent of the collision risk posed by wind farms to seabirds depends on several factors, including the flight heights exhibited by birds in relation to the rotor-swept area of a turbine [3], their flight speeds [4], and avoidance behaviour undertaken in relation to individual turbines [5,6] or entire wind farms [7-9].

A variety of methods have been developed to record the flight heights of seabird species, including boat-based visual survey [10], digital aerial survey [11], radar [6], Light Detection and Ranging (LiDAR) [12], laser range finders [13,14], and bird-borne telemetry devices [15,16]. The benefits and disadvantages of each survey technique have been thoroughly reviewed (see Desholm et al., 2006; Thaxter et al., 2015; Jongbloed, 2016; Largey et al., 2021). The advantage of bird-borne telemetry, in comparison to static or transect surveys of flight height collected using human or automatic observations at a site of interest, is the ability to continuously record the flight heights of an individual over an extended period, allowing the observation of spatial, temporal, or behavioural variation in movement [5,15,16].

Flight height is frequently determined by telemetry devices through the Global Positioning System (GPS), which calculates three-dimensional position from the signal response times of four or more satellites. The accuracy of individual flight height measurements may be increased by scheduling a high sampling frequency (e.g., < 16 s in the system used in this study), as a continuously operating GPS unit has access to the most timely and accurate information about satellite positions and clock accuracy [5,21]. GPS tag deployments may also incorporate, or be used in combination with, altimeters [21], which measure barometric pressure and allow the estimation of offshore flight height through a reference pressure taken at mean sea level [16]. The accuracy of altimeter-derived flight heights is therefore dependent on the spatial and temporal proximity of reference pressures (e.g., when birds were known to be resting on water) to the barometric pressure recorded at height. Due to meteorologically driven variation in atmospheric pressure, frequent re-calibration of the reference pressure is necessary, as calibration values may become obsolete in unsettled weather conditions [16]; reliable calibration may therefore be challenging in practice [22]. Calibration of barometric pressure sensors may also be carried out using GPS positional data [16]; altimeters are therefore not always free from GPS-related error.
High-resolution tracking data have demonstrated how flight heights vary with behaviour [15,16] and weather [23,24]. Tracking data have also shown how birds adjust their flight heights in response to turbine rotor-swept areas [5,25]. Therefore, bird-borne telemetry can be routinely applied to examine seabird area use in the offshore wind farm consenting process, responses to offshore wind farms following consent and construction [7,26], and direct interactions with turbines [5,9].

The impacts of collision on seabird populations arising from offshore wind farms are usually assessed in EIAs through Collision Risk Models (CRMs) such as the Band model, a mechanistic model which estimates the number of collisions with a wind farm based on the likelihood of a bird colliding with a turbine blade while flying through the rotor-swept zone (RSZ), and the number of birds potentially occupying the RSZ at any given time within the wind farm [27,28]. Flight height and speed are important parameters applied to CRMs [16,29] and may have a significant influence on estimates of collision risk [3,30-32].

Bird-borne telemetry devices can provide behavioural-level flight height data [15,16] which may be used to refine and improve parameter estimates applied to CRMs. However, the accuracy of observed flight height distributions may significantly influence the collision estimates produced by CRMs [3,33]. CRMs using flight height distributions based on GPS and altimeter altitudes have been shown to produce higher estimates of collision mortality than flight heights obtained through visual survey methods [16]. Therefore, knowledge of the accuracy of flight height measurements produced from GPS and altimeters, and of their comparability, is a high priority.

Flight height data applied to CRMs currently rely on visual and automated survey methods, and therefore may not take advantage of approaches that provide the best estimates of flight height produced through tracking studies [18]. However, the value of altitude data may be limited if it is recorded within spatially and temporally confined tracking studies and is therefore not transferable to other colonies and seasons [34]. An additional limitation to the routine use within CRMs of information on flight heights produced from telemetry data, such as barometric altimeters, is the current lack of knowledge of their accuracy. One specific challenge to improving the accuracy of flight heights based on barometric altimeters is the lack of a feasible method for calibrating them against the in situ barometric pressure of the surrounding atmosphere. Here we present a novel method for calibrating barometric pressure using open-source atmospheric data in combination with GPS-based behavioural modelling. We then compare flight heights obtained from GPS and altimeters, highlighting potential factors influencing the magnitude of vertical variation between the methods. Additionally, we investigate how the magnitude of variation between the methods may influence collision risk estimates.
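Because the role of the flight height distribution in a Band-style CRM is central to what follows, a simplified sketch of that logic may help. This is an illustration only, not the StochLAB implementation used in this study; the flux, single-transit collision probability, and avoidance rate below are placeholder values, and the 11 m shift mirrors the approximate altimeter-GPS offset reported later in the paper.

```python
import numpy as np

def proportion_at_rotor_height(heights, hub_height, rotor_radius):
    """Fraction of flight height samples within the rotor-swept zone."""
    lo, hi = hub_height - rotor_radius, hub_height + rotor_radius
    heights = np.asarray(heights)
    return np.mean((heights >= lo) & (heights <= hi))

def collisions_sketch(heights, hub_height, rotor_radius,
                      flux, p_single_transit, avoidance):
    """Band-style estimate: transits through the rotor-swept zone times
    the per-transit collision probability, reduced by avoidance.
    flux, p_single_transit, and avoidance are placeholder inputs."""
    q = proportion_at_rotor_height(heights, hub_height, rotor_radius)
    return flux * q * p_single_transit * (1.0 - avoidance)

# A modest upward shift in flight heights (as seen with altimeters) can
# substantially raise the proportion at rotor height, and hence the estimate.
rng = np.random.default_rng(1)
gps = rng.gamma(shape=2.0, scale=10.0, size=10_000)  # toy heights (m)
alt = gps + 11.0                                      # illustrative offset
for h in (gps, alt):
    print(collisions_sketch(h, hub_height=100, rotor_radius=60,
                            flux=1e5, p_single_transit=0.1, avoidance=0.98))
```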
Study area and tag deployment

We examined GPS- and altimeter-derived flight heights in Lesser Black-backed Gulls Larus fuscus tracked from the Isle of May (56°11′11″N 2°33′24″W) within the Firth of Forth Islands Special Protection Area (SPA) in Scotland, and Havergate Island (52°05′02.3″N 1°33′12.2″E) in the Alde-Ore Estuary SPA, England. Tracking data were available from the breeding season (May-August) for 2019 (individuals: n = 15, Havergate; n = 25, Isle of May) and 2020 (n = 10, Havergate; n = 17, Isle of May). Individuals tracked in 2020 were those which retained their tags following deployment in 2019. Individuals at each site were fitted with UvA-BiTS 5CDLe GPS tags (~14 g; 62 × 25 × 10 mm, length × width × height; see Bouten et al., 2013), which remotely download data to a field-based receiver and laptop via a two-way wireless VHF (Very High Frequency) transceiver. Attachment was carried out using wing-loop harnesses made from Teflon ribbon, to enable long-term deployment, but with a weak link to enable tags to detach after the period of study (up to 3-5 years; Clewley et al., 2021). The tag and attachment methods have previously been shown to have no measurable impacts on breeding success or over-winter survival for this species [36,37]. The total weight of the tag and harness deployments was below 3% of individual body mass. Ethical approval for tag deployment was issued by the British Trust for Ornithology's independent Special Methods Technical Panel under the UK Ringing Scheme (licence no. 4255).

Data cleaning

All data filtering was carried out in R (Version 4.1.1) [38] using custom R functions. Data were restricted to periods when birds were on foraging trips, defined as periods when birds were outside a rectangular area surrounding the breeding colony. While birds were undertaking foraging trips, a base sampling rate of five minutes was used; however, a faster sampling schedule of 10 s was enacted when tags had surplus battery charge (i.e., during periods of sunlight), or when within a specified 'geofence' around proposed or operational offshore wind farms. Higher sampling rates (< 16 s) have been suggested for this GPS system to provide improved altitude accuracy [5,21].

Periods when birds were either offshore or onshore were identified within the GPS tracks. Ross-Smith et al. [15] found the flight heights of Lesser Black-backed Gulls to vary between marine and terrestrial environments; our analysis therefore only considered offshore movement. Further cleaning and calibration steps are outlined in Fig. 1 and Additional file 1.

Expectation-maximisation binary clustering

Behavioural states were inferred within the tracking data using Expectation-Maximisation Binary Clustering (EMbC), implemented in the R package EMbC (Version 2.0.1) [39]. EMbC is a Gaussian mixture model based on trajectory speed and turning angle between successive GPS fixes, which classifies four states: stopped (low speed, high turning angle), floating (low speed, low turning angle), commuting (high speed, low turning angle), and foraging/searching (high speed, high turning angle). EMbC was applied separately to each filtered sampling rate of five minutes and 10 s. The accuracy of clustering may depend on factors including sampling frequency [39].
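EMbC clusters on exactly two inputs, speed and turning angle, derived from successive fixes. A minimal sketch of computing those inputs is shown below; the study itself used the EMbC R package, so this Python version is purely illustrative, and the equirectangular (flat-earth) projection is an assumption that holds only for short steps.

```python
import numpy as np

def embc_features(lat, lon, t):
    """Speed (m/s) and absolute turning angle (rad) between successive
    GPS fixes, using an equirectangular approximation for short steps."""
    R = 6_371_000.0                          # Earth radius (m)
    lat, lon = np.radians(lat), np.radians(lon)
    x = R * lon * np.cos(lat.mean())         # local east coordinate (m)
    y = R * lat                              # local north coordinate (m)
    dx, dy = np.diff(x), np.diff(y)
    dt = np.diff(t)                          # seconds between fixes
    speed = np.hypot(dx, dy) / dt
    heading = np.arctan2(dy, dx)
    # Wrap heading changes to [0, pi] so left/right turns score equally.
    turn = np.abs(np.angle(np.exp(1j * np.diff(heading))))
    return speed[1:], turn                   # align: a turn needs two steps

# Toy usage: short, slow, twisting movement near the Isle of May colony.
lat = np.array([56.1860, 56.1862, 56.1861, 56.1863])
lon = np.array([-2.5570, -2.5571, -2.5569, -2.5572])
t = np.array([0.0, 300.0, 600.0, 900.0])     # five-minute schedule
print(embc_features(lat, lon, t))
```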
Individuals which did not exhibit bimodal variation in flight speed and turning angle while offshore were filtered out of the EMbC modelling (n = 7; Isle of May, 2019). This was primarily attributed to birds which commuted directly between the Isle of May and mainland Scotland, where they targeted terrestrial food resources, and which therefore did not exhibit floating behaviour, an attribute necessary for the calibration of barometric pressure (see "Altimeter calibration"). Additionally, GPS locations within 10 m of offshore platforms, such as turbines or meteorological masts, were removed prior to EMbC modelling. This step removed periods when birds may have been sitting or roosting on offshore platforms [7], which could otherwise be misidentified as sitting on the sea surface.

Altimeter calibration

Barometric pressure sensors recorded a mean value of pressure in millibars (mbar) and temperature in kelvin (K) concurrent with each GPS fix. The mean pressure value was produced from a series of 10 pressure readings recorded at a rate of 10 Hertz (Hz). Altitude above sea level (h) based on barometric pressure was calculated following the equation of Cleasby et al. [16] and Lane et al. [40,41], which converts the pressure recorded by the tag into height above a reference surface using sea-level pressure and temperature.

Hourly values of Mean Sea Level (MSL) pressure (P_0) and temperature (T), at a 30 km resolution, were obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) 'ERA5' reanalysis model (Fig. 1). Calibration of P_0 was carried out using values of barometric pressure recorded by the tag deployment during periods when birds were presumed to be on the sea surface. These 'floating' periods were inferred using the EMbC behavioural definitions of stopped and floating. Values of MSL pressure obtained from ERA5 were then corrected to actual sea-level pressure using the nearest (in space and time) available observed value of sea-surface pressure (Fig. 2). This was a precautionary step to account for potential error in ERA5 pressure values, specifically for divergent drift over time between the recordings made by the pressure sensor and ERA5 P_0 values. Because the accuracy of the tag-recorded P_0 may decrease with time since the last floating bout, a threshold of one day from the last floating bout was set, beyond which values were excluded from analysis.

To examine the potential reduction in the accuracy of altimeter altitudes with increasing time since the previous floating bout (and therefore calibration), the differences between GPS and altimeter altitudes were examined in relation to the time since the last calibration of P_0 (see Additional file 2).

Conversion to mean sea level

Flight heights produced using barometric pressure are calculated in relation to the actual sea surface, which rises and falls with the tide. For altimeter flight heights to be applicable to CRMs and comparable to GPS altitude, they must be expressed relative to MSL. Therefore, data on tidal height were used to correct possible variation in altimeter altitude related to the phase of the tide, as follows (Fig. 1). Tidal height data for Harwich and Leith (the nearest available tidal gauges for Havergate, 24 km, and the Isle of May, 44 km, respectively) were provided by the British Oceanographic Data Centre (BODC). Tidal heights above Chart Datum were recorded at a 15-min temporal resolution. These heights were converted to elevation in relation to MSL by calculating daily means and then averaging to monthly MSLs (Danielle Edgar pers. comm.), and applied to recalculate raw GPS flight heights accordingly.
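The extraction above omits the altitude equation itself, but the shape of the calibration can still be sketched. The hypsometric relation used here is an assumption standing in for the exact formula of Cleasby et al. [16] and Lane et al. [40,41], and all pressure values in the example are invented for illustration.

```python
import numpy as np

# Constants for the hypsometric relation (dry air); assumed, not from the paper.
R = 8.314462   # universal gas constant, J mol^-1 K^-1
g = 9.80665    # gravitational acceleration, m s^-2
M = 0.0289644  # molar mass of air, kg mol^-1

def altitude_m(p_tag_mbar, p0_mbar, temp_k):
    """Height above the reference pressure surface, assuming the
    hypsometric equation (a stand-in for the paper's exact formula)."""
    return (R * temp_k) / (g * M) * np.log(p0_mbar / p_tag_mbar)

def calibrated_p0(p_float_mbar, p0_era5_at_float, p0_era5_now):
    """Correct the modelled ERA5 sea-level pressure by the offset observed
    while the bird was floating (i.e., the tag was at actual sea level)."""
    offset = p_float_mbar - p0_era5_at_float
    return p0_era5_now + offset

# Example: tag read 1013.2 mbar while floating when ERA5 gave 1012.0 mbar;
# an hour later ERA5 gives 1011.5 mbar and the tag reads 1008.0 mbar aloft.
p0 = calibrated_p0(1013.2, 1012.0, 1011.5)
print(altitude_m(1008.0, p0, temp_k=283.0))  # height above sea level, ~ tens of m
```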
Statistical analysis

Generalized Linear Mixed Models (GLMMs) with a gamma distribution, and a random effect of individual, were used to compare altitudes obtained from GPS and altimeters and to test for potential differences among sampling schedules, colonies, and years. GLMMs were fitted using "glmer" from the "lme4" R package [42] in R (Version 4.1.1) [38]. Pairwise comparisons were made using Tukey-adjusted 'emmeans' [43] to investigate statistically separable altitude measurements in relation to method (GPS and altimeter) and each grouping of year (2019 and 2020), colony (Isle of May and Havergate), and sampling rate (five minutes and 10 s).

Collision risk models

Using estimated flight height distributions (see Additional file 1), we compared the estimated number of collisions attributed to the foraging/searching and commuting behavioural states generated from GPS and altimeter data using Option 3 of the Band CRM [27,44], facilitated by the R package StochLAB (Version 0.3.1) [45]. The Band model calculates collision risk based on wind farm and turbine characteristics, and bird biological parameters and densities (see Additional file 1). Option 3 utilizes flight height distributions, rather than a uniform distribution across the rotor-swept area. Collisions were calculated separately for each of the four study months from May to August. Models were run for 12 differing theoretical wind farms, each assigned distinct turbine parameters (adjusting hub height and rotor radius) and numbers of turbines. Each wind farm configuration equated to an electrical output of 430 Megawatts. Tukey's HSD test was used to identify statistically separable collision estimates for each altitude measurement method and each grouping of year, colony, and sampling rate.

GPS and altimeter flight heights

Flight heights produced from altimeter data were found to be higher than those from GPS data (Table 1). This difference was slight between measurements recorded at a rate of five minutes (difference range = 2.55-8.69 m, mean difference = 4.40 m, Table 1), but more notable at the 10 s sampling frequency (difference range = 0.6-16.28 m, mean difference = 11.45 m, Table 1, Fig. 3). A pairwise comparison of methods, on data combined from all sites and years, found that the mean altitude was significantly different between altimeters and GPS at a sampling schedule of 10 s (Tukey, Z = -31.03, p ≤ 0.05, Table 2), while showing no difference at the five-minute schedule (Tukey, Z = -1.86, p = 0.25, Table 2). However, this relationship did not persist consistently when taking year and colony into account.

At a sampling resolution of five minutes, GPS flight heights were higher in 2020 at both the Isle of May and Havergate (Fig. 3). Pairwise comparison indicated that the altitudes produced by GPS and altimeters were not significantly different at a sampling rate of five minutes across both colonies and years (Table 2), and additionally at a sampling rate of 10 s for the Isle of May in 2020 (Tukey, Z = 0.17, p = 1.00). Some tag-specific variation was present in the magnitude of difference between GPS and altimeter altitudes, as observed visually in the linear regressions presented in Fig. 4. Tags also displayed some consistent variation across years in the intercept and slope of the relationship between altimeter and GPS altitude (Fig. 5).
The time since the last calibration of MSL pressure had no discernible influence on the magnitude of difference between GPS and altimeter altitudes (see Additional file 2). Additionally, the proportion of GPS fixes attributed to floating bouts per individual did not vary between colonies; however, more samples of floating per individual were recorded in 2019 (n = 2532, Havergate; n = 1375, Isle of May) than in 2020 (n = 965, Havergate; n = 785, Isle of May).

Collision risk models

Estimated numbers of collisions were higher using flight height distributions (see Additional file 2) generated from altimeters than using those generated from GPS (Fig. 6). In comparison to GPS, collision rates were greater when calculated using flight height distributions based on altimeters across all groupings (sampling rates, years, and colonies). Exceptions to this trend were identified for the Isle of May in 2020 at sampling schedules of five minutes and 10 s, where significant differences were seen, and for Havergate, also in 2020, at a sampling rate of five minutes, where no difference was found (Table 3, Fig. 7). Mean collision rates calculated using the two different input flight height distributions were significantly different (Table 3, Tukey's HSD test for multiple comparisons, p < 0.05, 95% C.I. = [12.47, 21.10]).

The differences in collision rates estimated using flight height distributions produced from GPS and from altimeters were also compared within categories of sampling rate, colony, and year (Table 3). Tukey's HSD test found that collision estimates generated from GPS and altimeter data based on a sampling rate of 10 s differed to a greater extent (Table 3, mean difference = 22.61, p < 0.05, 95% C.I. = [14.66, 30.56]) than those based on a sampling rate of five minutes (Table 3, mean difference = 10.95, p = 0.00, 95% C.I. = [3.01, 18.90]). Collision rates were significantly different between altimeter and GPS across all categories, with the exception of the Isle of May in 2020 at sampling rates of five minutes and ten seconds (Table 3, p = 0.18, 95% C.I. = [-2.00, 28.16]) (Fig. 7).

Table 3 Results of Tukey's HSD test for multiple comparisons of collision estimates produced using flight height distributions generated from GPS and from altimeters, excluding floating or stationary bouts. Comparisons of means were examined in relation to sampling rate (five minutes and 10 s), colony (Isle of May and Havergate), and year (2019 and 2020)

The magnitude of difference in collision rates estimated using flight height distributions based on GPS or altimeters narrowed with increasing turbine size (Figs. 6, 7). The number of overall collisions decreased with fewer but larger turbines.
Discussion

Flight heights produced by GPS and altimeters were largely comparable when examined collectively across sites, years, and sampling rates. However, we found flight heights produced from altimeter data to be, on average, higher than those from GPS data, with a significant difference in flight heights (approx. 11 m) at a sampling rate of 10 s. The magnitude of difference between the two methods also differed in relation to study year and colony. Altitudes derived from the two methods were more comparable in 2020 (a year after tag deployment) than in 2019; this year-based convergence occurred at both study colonies. The underlying cause of this temporal difference was unknown but may have arisen from the different weather conditions experienced in each year [46]. Unstable weather conditions, for example frequent periods of low pressure related to storms causing greater variability in sea-level pressure, may lead to reduced altimeter accuracy between calibration bouts. While no difference was observed in the overall atmospheric pressure experienced at the study sites between years, local-scale (< 10 km) and short-term (< 1 h) variations in pressure are harder to discern. Frequent bouts of floating behaviour by gulls would allow regular calibration of sea-level pressure, and therefore greater accuracy in altimeter flight heights. However, no trend was found between the time since the last floating bout and the magnitude of difference between GPS- and altimeter-derived flight heights. This suggests that error arising from pressure calibration is not an important source of the difference between altitudes produced from GPS and altimeters. Additionally, the proportion of fixes per individual attributed to floating did not differ between years. Previous altimeter deployments on gannets similarly found time since calibration to have a non-significant effect on flight height accuracy [16]; this was attributed to low variability in environmental pressure ascribed to stable weather over the tracking period. As a precaution, we assigned a one-day limit on the validity of the last calibration event, to eliminate potentially obsolete calibration factors.

Despite flight heights produced by either method being largely comparable, rates of estimated collisions commonly differed between the two methods, with higher collision estimates most frequently attributed to altimeters. This shows that small changes in the flight heights applied to CRMs may have a disproportionate effect on the resulting collision estimates. GPS and altimeters may therefore both be biologically representative of a bird's flight height, but caution must be taken when interpreting collision rates attributed to a single method.
Flight height and behaviour in Lesser Black-backed Gulls may vary with season [47], year [48], diel period, and environment [15,49]. Flight behaviour, such as flapping or soaring flight, may also alter in response to meteorological conditions such as wind speed and direction [49,50]. Combining flight height estimates across years may partially account for this spatial and temporal variation. This is exemplified in the colony/year groupings of the flight height distribution estimates we applied to the CRMs. Based on both five-minute and 10 s resolution data, collision estimates generated from altimeters were found to be significantly higher than those produced from GPS. However, when flight height estimates were separated by year and colony, collision rates produced using flight height distributions generated from GPS and altimeters were more comparable in the 2020 season. The magnitude of difference in collision estimates based on the two methods varied between study years, and collating flight heights may account for localised sources of variation (cf. Johnston et al., 2014). Furthermore, it is also important to account for individual variation in the differences in collision estimates based on the flight height distributions derived from the two methods.

Future considerations

Higher-resolution sampling schedules may allow behavioural states to be assigned more accurately using EMbC modelling, enhancing the accuracy of the MSL pressure used within calibrations. However, we found that flight heights based on data collected at five-minute and 10 s sampling intervals were largely comparable. This indicates that slower sampling rates, which may also be less battery intensive, may still produce representative flight heights using altimeter data. GPS-derived altitudes, however, have been shown to improve in accuracy at higher resolutions [21], and may therefore be advantageous when considering finer-scale behaviours, such as "last-second" turbine blade avoidance [5]. An increased sampling rate had no discernible influence on altimeter pressure measurements, which are recorded through a 10 Hz "burst" of readings concurrent with each GPS fix. While GPS may increase in accuracy with a greater number of satellites or increased resolution, much less is known about the inherent accuracy of altimeters. Within altimeters, pressure records are taken from an average of 10 Hz readings, accounting to some degree for individual measurement error. Therefore, error in altimeter altitudes primarily arises through the accuracy of the sea-level pressure required to convert recorded pressure into altitude. Here we used ERA5 reanalysis modelled pressure, with a calibration step based on field-based pressure measurements when the tag was assumed to be at sea level. Therefore, the accuracy of P_0 applied to the model depended both on the accuracy of the modelled MSL pressure and on the behavioural model identifying floating bouts through GPS data. The frequency of floating bouts, and thus opportunities for calibration, may also be limited by the differing behaviours exhibited by species. An alternative to this method may be the use of in-field calibration measurements [14], for example from offshore meteorological buoys containing barometers. However, calibration of data from altimeters may be limited by the distance to, and availability of, such buoys. If accurate measurements of altimeter flight heights are required within a specific area, within a wind farm for example, barometric pressure sensors may be placed within the area of interest in combination with a dedicated tracking study.
Variation in sampling error may additionally vary between devices; individual tag effects should therefore be taken into account when examining flight height distributions derived from multiple tags. A greater understanding of the influence of weather conditions on flight behaviour [23,49,51], and also on altimeter performance, may help to explain temporal and spatial variation in recorded flight heights.

CRMs currently rely on flight heights measured in relation to MSL. Tidal height data were used to account for the influence of tidal elevation around MSL when adjusting altimeter altitudes (calculated in relation to the actual sea level) so that these were applicable to CRMs. This additional calibration step potentially reduced the accuracy of the altimeter flight height records. Examining flight height in relation to the actual sea level would potentially increase the realism and accuracy of the obtained altitudes, but would require incorporating tidal elevation's influence on GPS data, which are measured in relation to MSL, and would be of less relevance to collision risk modelling, as turbine heights are constant in relation to MSL but not to the actual sea level. It is currently not common practice to amend offshore GPS altitudes using tidal height records or proximity to roosting platforms, or to assess inherent altitude bias attributed to study location. Inclusion of these filtering steps may be of particular importance when examining fine-scale flight heights in relation to turbine rotor-swept areas [5]. Data presented here only examine flight height during the breeding season; examination of flight heights throughout the year, and consequently the temporal variation in collision risk associated with seasonal behaviour, is important to addressing the cumulative risk wind farms may pose throughout a species' life cycle [47].

Conclusions

With the growing development of offshore wind farms, the accurate assessment of collision risk is vital to project-specific consenting, and also to understanding the potential cumulative effects of collision at population levels [52]. However, CRMs retain a degree of uncertainty, potentially arising from error in the measurement methods used to obtain model parameters such as flight height. Improving confidence in telemetry-obtained flight height distributions, and potentially using behavioural-level data to more accurately quantify parameters applied to CRMs, requires steps to address these measurement errors. This will enable improved collision risk assessment which captures spatial, temporal, and behavioural variation in use of the marine environment and better reflects bird behaviour in relation to offshore wind turbines.

Fig. 1 Work-flow of analysis steps. Red boxes indicate where environmental covariates are applied to the telemetry data

Fig. 2 Example of atmospheric pressure calibration from individual "5970". Mean sea level pressure obtained from the ERA5 atmospheric reanalysis model (black) and the barometric pressure sensor (red), and ERA5 measurements calibrated by pressures obtained from floating bouts (blue)

Fig. 3 Distribution of raw flight heights in relation to mean sea level (from -20 to 300 m), excluding floating or stationary bouts, obtained from GPS data (red) and altimeter data (blue) for sampling rate resolutions of five minutes and ten seconds
Fig. 4 Distribution of raw flight heights in relation to mean sea level (from -20 to 300 m), excluding floating or stationary bouts, obtained from GPS data (red) and altimeter data (blue) in relation to study colony and year for sampling rate resolutions of five minutes and ten seconds

Fig. 6 Monthly collision estimates produced from the Band Option 3 CRM for 12 hypothetical wind farms with differing turbine parameters, using modelled GPS (red) and altimeter (blue) flight heights for sampling rate resolutions of five minutes and 10 s. Hypothetical wind farms increase in hub height and rotor radius, and decrease in wind farm density, from 1-12 (specific wind farm parameters outlined in Additional file 1)

Fig. 7 Monthly collision estimates produced from the Band Option 3 CRM for 12 hypothetical wind farms with differing turbine parameters, using modelled GPS (red) and altimeter (blue) flight heights in relation to study colony and year for sampling rate resolutions of five minutes and 10 s. Hypothetical wind farms increase in hub height and rotor radius, and decrease in wind farm density, from 1-12 (specific wind farm parameters outlined in Additional file 1)

Table 1 Summary statistics for flight heights in relation to mean sea level produced from GPS and altimeters in relation to study colony and year for sampling rate resolutions of five minutes and ten seconds

Table 2 Results of pairwise comparisons of GPS and altimeter flight heights (m), excluding floating or stationary bouts, from a Generalised Linear Mixed Model with gamma distribution. Comparisons were examined in relation to sampling rate (five minutes and 10 s), colony (Isle of May and Havergate), and year (2019 and 2020)
A Force-Directed Approach for Offline GPS Trajectory Map Matching

We present a novel algorithm to match GPS trajectories onto maps offline (in batch mode) using techniques borrowed from the field of force-directed graph drawing. We consider a simulated physical system where each GPS trajectory is attracted or repelled by the underlying road network via electrical-like forces. We let the system evolve under the action of these physical forces such that individual trajectories are attracted towards candidate roads to obtain a map-matched path. Our approach has several advantages compared to traditional, routing-based algorithms for map matching, including the ability to account for noise and to avoid large detours due to outliers in the data, whilst taking into account the underlying topological restrictions (such as one-way roads). Our empirical evaluation using real GPS traces shows that our method produces better map matching results on average compared to alternative offline map matching algorithms, especially for routes in dense, urban areas.

INTRODUCTION

Map matching is the process of mapping a geospatial trajectory obtained from a GPS receiver onto a given road network. As the coordinates obtained from these devices are not always precise, in dense road networks the task of matching these onto a real map is not trivial. Several candidate roads may exist in close proximity, and a map matching algorithm must ensure that the resulting path on the road network is plausible and that physical constraints (e.g., one-way streets, obstacles) are respected. Map matching has been studied for over a decade [25], and a large collection of algorithms exists with varying degrees of complexity and accuracy. Existing algorithms can be divided into two broad categories: i) online or real-time algorithms, where the algorithm has to determine the likely position on a map given the history of previous points, for example on a vehicle equipped with a GPS navigation device; and ii) offline algorithms, where the entire trajectory is known in advance and the algorithm has to adjust the trajectory points a posteriori such that they represent on a map the likely route taken by the vehicle.

The present article considers offline map matching. This problem has received less attention than its real-time counterpart, as it is not useful for real-time navigation. However, in many applications, such as logistics and supply chain management, the analysis of vehicle trajectories is done a posteriori once the vehicles have returned to the depot, where a map matching algorithm is used to correct measurement errors by the GPS receivers and produce a trajectory that lies completely on a real road network. One of the main differences from the online case is the inclusion of the entire trajectory in the analysis, which provides additional information on the likely route taken.

We propose a novel approach which borrows methods from force-directed graph drawing to direct the map matching strategy, improving on the existing map matching literature. The algorithms used in force-directed graph drawing aim to produce an elegant visualization of a graph topology (vertices and edges) on a plane [5,32]. To achieve this, each vertex is assumed to repel every other vertex whilst each edge can expand or contract freely, and these forces are modeled using concepts from physics, such as electrical repulsion or spring forces.
The system is then simulated on a computer where the edges and vertices are allowed to move according to physical laws and, after a number of iterations, a visually elegant layout of the graph is obtained. These approaches are presented in detail in Section 3.

In this article we use similar techniques to achieve high-precision map matching results. The road network of the map is assumed to exert a force field, and every vertex of the trajectory is attracted to the field in such a way that, after a number of iterations, the trajectory is closely matched to, or 'snapped', onto the road network. Using this approach, the force exerted is linked to the distance between the trajectory and the road, so the trajectory will be preferentially attracted to nearby roads; in addition, the direction of the force is linked to the angle between the road and the trajectory, enabling one-way streets to be correctly represented (if a trajectory travels the opposite way near a one-way road, this results in a repelling force and the road is avoided). These two features produce an effective map matching algorithm. To our knowledge this is the first method which uses force-directed algorithms for map matching.

RELATED WORK ON MAP MATCHING METHODS

The aim of map matching is to convert, based on some known map data, a list of GPS points into a trajectory (a series of roads or links) denoting the most likely route traveled by the vehicle or moving object. Over the past years, many map matching algorithms have been proposed in the literature, both for real-time and for offline map matching, covering a number of different types of applications and input data. A comprehensive review of over 30 map matching algorithms can be found in [25]. The authors classify the analytical approaches used in the algorithms into 'geometrical', which use proximity-based methods; 'topological', which use notions of connectivity between the links (one-way roads, connectivity and reachability information); 'probabilistic', which further use information about the quality or accuracy of the GPS signal (typically obtained from the GPS sensor); and 'advanced', which use more specific methods such as Kalman filters, hidden Markov chains, timing information (e.g., to predict the exit from a tunnel), and other application-specific approximation techniques.

Typically the underlying map network is known; however, some researchers [4,15,19,34] have developed approximation techniques to generate an unknown underlying map or to perform map matching without reference to a known map topology by observing the clustering of trajectories. Recently, improvements on these methods have been proposed, such as an efficient buffer topological algorithm to detect bicycle paths in Bologna [26], or a score-based matching for car trajectories in Zurich [21]. The ACM SIGSPATIAL 2012 competition [1,13,18,28,30,31,35] asked participants to devise a fast map matching algorithm for use in real-time systems; the focus of the competition was on algorithm speed, since the competition used only ten vehicle trajectories and the provided instances were relatively easy to solve (good quality GPS points on a not very dense road network). The authors of [17] use a geometric distance measure to determine the nearby roads and then apply a Dijkstra-based algorithm to select those roads which satisfy the topological restrictions of the map.
In a different direction, [14] select their road segments using an optimization method which takes advantage of cases where many users drove along a similar route, similar to trajectory clustering methods. More complex approaches can also be found in the literature, the most noteworthy of which is the voting-based map matching algorithm [38], where the most likely path is determined by the relative mutual influence between pairs of points, taking into account at the same time the temporal information (timestamps) of the GPS points. Among the probabilistic approaches, a method proposed in [2] ranks all topologically possible trajectories based on a calculated probability, which is a generalization of earlier hidden Markov chain or Viterbi map matching methods. This approach is extended in [23], where the number of turns in the resulting trajectory is taken into consideration and optimized using inverse reinforcement learning. A similar comprehensive search method is used in [37], where a heuristic search algorithm is used to find and score each possible trajectory, and in [33] for real-time map matching. Another interesting approach is the one presented in [27], where the authors do not actually perform map matching but aim to 'correct' GPS trajectories by interpolation so that the resulting traces are closer to the real route taken, using a clustering algorithm which compares trajectories with each other. This concept is similar to our proposed force-directed algorithm, where we also 'correct' the raw GPS points, but we do so by considering the interaction of a particular trajectory with the underlying road network instead of comparing trajectories with each other.

The most commonly used approaches for map matching which combine both 'geometrical' and 'topological' methods are routing-based methods. This means that the map matching problem is converted into a routing problem where, in its simplest version, the trajectory is divided into smaller segments, the endpoints of which are then matched onto a road (for example, by moving each endpoint to the nearest point on the road), and the intermediate points of the segment are replaced by a routing calculation of the shortest possible route from one endpoint to the other, taking into account the road layout. This approach produces good results as it ensures that the produced route is close to the original points and that the route will be plausible, in the sense that it is guaranteed to lie on an existing road and all the topological restrictions will be satisfied. The most popular implementations of route-based algorithms for map matching are GraphHopper [6] and MapBox [20], both of which use shortest-distance routing directed by weights derived from the GPS trajectory to find a match.

However, routing-based methods are not fool-proof. They operate under the assumption that the driver who produced the GPS trace was driving in a shortest-distance fashion between periodically-sampled segments of the route, so short, circular loops within each segment (taken for example by taxis) will not be matched correctly. Our proposed method takes these routing-based methods one step further, adding an element of the 'probabilistic' map matching techniques: we use a force-directed algorithm to adjust, or correct, the obtained raw GPS points before applying a routing-based method, resulting in a more accurate match.

COMPLEXITY AND EVALUATION OF MAP MATCHING ALGORITHMS

The nature of the map matching problem presents some unique challenges.
First of all, the difficulty of the task can vary significantly: if the trajectories obtained correspond to a rural setting (e.g., an isolated highway), the task can be very easy or trivial, as there may be only one possible candidate road for the path taken. Conversely, in a city-center setting the challenge is much harder, as the road network is denser. Similarly, the sampling frequency of the GPS points is important, as recording one point every second makes map matching easier than, say, recording one every minute. Finally, the quality of the GPS signal, the GPS receiver used and the underlying map are also important, as a good-quality trajectory will result in points that are closer to the coordinates of the real road, making the task much easier. Although the density of the underlying map is a key factor in determining the difficulty of the map matching problem, and therefore the performance of a map matching algorithm, other influencing elements are the quality of the GPS data and the mode of transport, which determines the speed of the vehicle. Bicycle and pedestrian trajectories are easier to match for a given sampling frequency, as the object does not move much between successive GPS points. In terms of the underlying fixed road network map, which is necessary for most map matching algorithms, researchers tend to use data from the freely available OpenStreetMap service [22], which has good coverage of road networks for most cities around the world. In order to evaluate the performance of map matching algorithms, the obvious step is to compare the produced trajectory with the 'ground truth', i.e., the actual trajectory taken by the vehicle. However, this approach can only be used in limited circumstances, as the ground truth is typically not available in most large-scale data collections (it requires navigating along a predefined route, or significant manual input to record the precise route taken). Some researchers derive the ground truth by manually matching some trajectories by sight; for example, in [14] experienced human drivers were asked to trace, based on their experience, the 'ground truth' of a random subset of 100 trajectories among their dataset of Beijing taxi traces. Other researchers [36] create their own datasets by driving along a very small number of predefined routes (four). These approaches have many drawbacks, namely that the chosen routes are defined in advance by the researchers, that human discretion is required to ascertain the route taken, and that a large number of trajectories cannot be matched by hand. In order to mitigate these limitations and use trajectories for which no ground truth exists, some authors [17,21,24] propose distance-based metrics based on minimizing the distance between the GPS trajectory and the route produced by the algorithm. This assumes that map matches that are close to the original points are more accurate than those which are farther away. Some alternatives for the evaluation of map matching algorithms in the absence of ground truth include comparing the length of the original trajectory to the length of the matched route [26]. Finally, an interesting approach [24], although of limited practical use, is to collect two sets of GPS data: one of low quality used as the input trace, and a second trace of high-quality data used as a proxy for the ground truth.
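Before moving on, the routing-based baseline described in the previous section can be made concrete. The following is a minimal, illustrative Python sketch (the implementations discussed in this paper are written in Java; the function names and the toy network below are hypothetical, not part of any cited tool): segment endpoints are snapped to the nearest network vertex, and consecutive endpoints are joined by Dijkstra shortest paths.

import heapq, math

def dijkstra(adj, src, dst):
    """Shortest path by edge length on a directed graph given as
    {node: [(neighbor, length), ...]}; returns the node sequence."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1]

def snap(point, coords):
    """Snap an (x, y) point to the nearest network vertex."""
    return min(coords, key=lambda n: math.dist(point, coords[n]))

def route_based_match(trajectory, adj, coords):
    """Snap segment endpoints, then join them with shortest paths."""
    nodes = [snap(p, coords) for p in trajectory]
    matched = [nodes[0]]
    for a, b in zip(nodes, nodes[1:]):
        if a != b:
            matched.extend(dijkstra(adj, a, b)[1:])
    return matched

# Toy example: a 2x2 grid with one one-way link.
coords = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}
adj = {"A": [("B", 1.0), ("C", 1.0)], "B": [("D", 1.0)],
       "C": [("D", 1.0)], "D": [("B", 1.0)]}
print(route_based_match([(0.1, -0.1), (1.1, 1.05)], adj, coords))

As noted above, any loop a driver makes between two consecutive snapped points is invisible to this baseline, which is precisely the weakness the force-directed correction targets.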
FORCE-DIRECTED GRAPH DRAWING METHODS

Graph visualization is a well-researched field, as graph structures appear in many areas, and graph drawing on a 2-dimensional plane quickly becomes challenging as the size of the graph increases. Recently, a number of approaches in this area have focused on 'force-directed' methods which can automatically 'draw' large graphs on a plane [5,7,8,11,29,32]. A comprehensive overview of such algorithms is presented in [12], and some interesting, more recent variations appear in [9,10,16]. In these methods, a directed or undirected graph is modeled as a system of particles with forces acting between them, and a compelling visual result is achieved when the particles are placed in such a way as to achieve a force equilibrium. An example of the input and output of a force-directed graph drawing algorithm from [5] is shown in Figure 1. One can see how the forces between the edges in two dimensions cause the graph to spread out into a symmetrical equivalent representation.

Figure 1: Example input and output graphs using a force-directed graph drawing algorithm from [5]

In general, all force-directed graph drawing algorithms consider repelling forces between non-adjacent vertices that are inversely proportional to the distance d between the vertices (c/d), or to the square of the distance (c/d²), in order to reduce the strength of the force between distant vertices and yield faster convergence. The edges are modeled as spring forces that can both expand and contract around an 'ideal' length, although pure spring forces (proportional to the displacement of the spring) are considered too strong and are usually replaced by the logarithm of the displacement, c1 log(d/c2), where d = c2 is the desired 'ideal' length between vertices (often defined as the square root of the total drawing area divided by the number of vertices, to ensure an even spread over the drawing area). As the computational simulation of a system of particles under the laws of physics is intensive, an approximation is always used: the force applied to each vertex is calculated in turn, and the vertex is then displaced by a small amount in the direction of the combined net force before the process iterates. Other refinements proposed in the literature include modified force formulas, computation of various parameters based on further graph characteristics (e.g., the diameter of the graph) and the introduction of a cooling coefficient (based on the field of simulated annealing in optimization), whereby the displacement of the vertices gradually becomes smaller to ensure that, once a good configuration is found, no major modifications to the layout occur. In the next section we present an algorithm that uses the techniques of force-directed graph drawing outlined in this section to perform a map matching of GPS trajectories onto a map.

A FORCE-DIRECTED MAP MATCHING ALGORITHM

We consider GPS trajectories defined by N points P1, P2, ..., PN, where a point is defined by its position in space and time, Pi = (lat_i, lon_i, t_i). We do not use information about the quality of the signal or receiver (accuracy and precision of the GPS receiver), as this is typically not included in many GPS production systems. The fixed road network or map is represented by a directed graph G = (V, E) of intersection points and straight line segments (links) between them, which are obtained from the public-domain mapping provider OpenStreetMap [22].
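Before specifying the map matching forces, the classical layout loop summarized in the previous section can be illustrated with a minimal Python sketch: pairwise c/d² repulsion, logarithmic springs c1·log(d/c2) along edges, and a small per-iteration displacement with a cooling coefficient. The constants and the toy graph are illustrative choices, not taken from any cited algorithm.

import math, random

def force_layout(nodes, edges, iters=200, c_rep=0.05, c1=1.0, c2=1.0):
    """Toy force-directed layout: repulsion c/d^2 between all pairs,
    log-springs c1*log(d/c2) along edges, with linear cooling."""
    pos = {n: [random.random(), random.random()] for n in nodes}
    for t in range(iters):
        disp = {n: [0.0, 0.0] for n in nodes}
        for i, u in enumerate(nodes):              # pairwise repulsion
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = c_rep / d ** 2
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        for u, v in edges:                         # log-spring attraction
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = c1 * math.log(d / c2)              # attracts if d > c2, repels if d < c2
            disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
            disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        cool = 0.1 * (1 - t / iters)               # simulated-annealing-style cooling
        for n in nodes:
            pos[n][0] += cool * disp[n][0]
            pos[n][1] += cool * disp[n][1]
    return pos

print(force_layout(["a", "b", "c", "d"],
                   [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))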
The key idea of force-directed map matching is to consider an 'electrical current' that passes through each edge E of the road network and results in an attractive or repulsive force on each point of a given trajectory. We set the magnitude of this force Fe as follows:

• inversely proportional to the distance d between the point P and the road edge E,
• proportional to the cosine of the angle θ between the road edge E and the trajectory at P,
• proportional to the length l of E.

We explain the rationale behind each choice in turn. The first point specifies that trajectory points should be attracted more strongly by nearby roads, which is sensible. The second point relates the force to the angle between the trajectory and the road: the force should be at its maximum when the trajectory and the road are parallel, it should reduce to zero when the two are perpendicular, and it should become negative (repulsive) when the trajectory and the road point in opposite directions. The last requirement is necessary because the edges in the underlying road network are not of equal length; without this constraint, if a road on the map were split into two edges, the force on point P would double. Adjusting for the length of each edge E avoids the trajectory being pulled towards areas with high road density (many small roads). The direction of the force is taken to be either (i) perpendicularly towards the edge E or (ii) towards the midpoint of the edge E. Note that, unlike graph drawing algorithms, no forces operate between individual vertices except for neighboring vertices, as described below. We also assume that spring forces apply on each edge (Pi-1, Pi) of a given trajectory. These were set using the same standard log-distance formula as in graph drawing:

Fs = c1 log(d / c2),

where c1, c2 are constants and d is the current length of the spring. We set the natural length c2 equal to the length of the observed trajectory segment (Pi-1, Pi), as we assume that the distance between points on the true trajectory will be similar to the observed distance. The forces between the points can be attractive or repulsive and are applied in the direction of the edge (Pi-1, Pi). Once all the forces are calculated for each point i, each point is moved in turn by a distance proportional to the net force Fe + Fs. The key parameters of our algorithm are summarized in Table 1. We experimented with variations of the distance and force formulas suggested in the force-directed graph drawing literature until we found an ideal combination for the strengths of the electrical and spring forces; the values used for the final algorithm are denoted by a star '*' in the table. Regarding our choices, we found that an electrical force proportional to the road edge length l was too strong for quick convergence, so we replaced l with √l. Furthermore, the repulsive forces arising when the road and trajectory point in opposite directions had to be significantly reduced, to ensure that the trajectory is still attracted to nearby roads with the correct orientation. We also note that, because of the sharp decrease in the magnitude of the electrical forces with distance, as well as for computational efficiency, we only include edges within 100 meters of the current point in the calculation of the electrical forces. A pseudocode of our force-directed algorithm is given in Figure 2.
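Before turning to the full pseudocode, the two force terms just defined can be sketched concretely. The following minimal Python fragment computes the 'electrical' force of one road edge on a trajectory point, with magnitude ~ √l · cos θ / d, the 100 m cutoff, damped repulsion for opposite orientations, and the log-distance spring between consecutive trajectory points. Planar coordinates, unit-vector headings, the damping factor and all constants are illustrative assumptions; the paper's own implementation is in Java.

import math

R_MAX = 100.0             # only edges within 100 m contribute
REPULSION_DAMPING = 0.1   # illustrative damping for opposite orientations

def electrical_force(p, heading, edge, c_e=1.0):
    """Force of directed road edge (a, b) on point p; heading is the
    local unit direction of the trajectory at p. Magnitude is
    c_e * sqrt(l) * cos(theta) / d, directed perpendicularly towards
    the edge (direction option (i) in the text)."""
    (ax, ay), (bx, by) = edge
    ex, ey = bx - ax, by - ay
    l = math.hypot(ex, ey) or 1e-9
    # foot of the perpendicular from p onto the line through the edge
    t = ((p[0] - ax) * ex + (p[1] - ay) * ey) / (l * l)
    fx, fy = ax + t * ex - p[0], ay + t * ey - p[1]
    d = math.hypot(fx, fy) or 1e-9
    if d > R_MAX:
        return (0.0, 0.0)
    cos_theta = (heading[0] * ex + heading[1] * ey) / l
    mag = c_e * math.sqrt(l) * cos_theta / d
    if mag < 0:           # wrong way along a one-way road: damped repulsion
        mag *= REPULSION_DAMPING
    return (mag * fx / d, mag * fy / d)

def spring_force(p, q, natural, c1=1.0):
    """Log-distance spring Fs = c1 * log(d / natural) between
    consecutive trajectory points, applied along the segment (p, q)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy) or 1e-9
    mag = c1 * math.log(d / (natural or 1e-9))
    return (mag * dx / d, mag * dy / d)

# A point 5 m from a 30 m one-way edge, travelling parallel to it:
print(electrical_force((0.0, 5.0), (1.0, 0.0), ((0.0, 0.0), (30.0, 0.0))))

The call prints approximately (0.0, -1.10): the point is pulled perpendicularly towards the road. Reversing the heading makes cos θ negative and turns the pull into a weak push, which is how one-way streets driven the wrong way are avoided.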
Once the trajectory is read, we use the force-directed method to attract the trajectory points towards the roads on the map. After a number of iterations, the trajectory will be close to a plausible map match. As a final step, in order to convert the points into road segments, it is necessary to apply an algorithm to place the obtained points exactly on a map road. This algorithm can simply place each point on the nearest road segment that has the correct alignment (in the case of one-way roads) or, alternatively, it could be an implementation of a traditional route-based algorithm. In our implementation and numerical experimentation we chose the latter method, for a number of reasons that are explained in the following section.

Read GPS points of trajectory T
for t := 1 to iterations do
    for all points of the trajectory Pi ∈ T do
        Calculate the total forces Fe, Fs on the point Pi
        Update the position of Pi by ∆x
    end for
end for
Finalize positions by placing the modified points exactly on the map

Figure 2: Pseudocode of the force-directed map matching algorithm

EXPERIMENTAL EVALUATION

This section presents the results of an experimental evaluation of our force-directed map matching algorithm applied to a large number of GPS trajectories, where we compare its performance with other state-of-the-art map matching algorithms. We consider a large dataset of taxi trajectories created in 2014 in Rome [3]. The data consist of timestamped latitude/longitude records corresponding to nearly 500,000 km of driving carried out by taxi drivers equipped with a GPS tracking device on a tablet computer. We chose this dataset for specific reasons. The road network in Rome is very dense and not grid-like, with many short and irregularly shaped roads, many obstacles and one-way streets (the average road segment length in our Rome map data was 29 meters). This means that the map matching problem on this layout is more complex than a similar problem in a city with a grid layout and large blocks. Moreover, this dataset records a trajectory as one GPS point every 15 seconds (i.e., relatively infrequently), which increases the trajectory ambiguity and the need for proper trajectory correction. In line with most articles in the literature, we obtained the underlying road network for Rome from OpenStreetMap [22] and filtered it to include only roads that are open to car traffic. The resulting road network (corresponding to the assumed force field passing through each road segment) is shown in Figure 3, while Figure 4 shows the distribution of the lengths of the road segments in the same area. In the implementation of our force-directed algorithm, we used the same route-based algorithm as Graphhopper for the last step, to convert the final positions of the modified points into road segments. This choice was made for three reasons. First, we had to produce a path that can be correctly identified using its underlying OpenStreetMap name in order to compare it to the map matching produced by the routing-based algorithm; using the Graphhopper tool to do so is the obvious solution. Secondly, using Graphhopper as the last step in our method also demonstrates the superiority of our method compared to route-based methods: if we produce a better map matching result, this cannot be due to particularities or limitations in the Graphhopper implementation, as these would be present in our results too.
The third reason was a practical one: with this step in place we do not need to wait until the force-directed algorithm converges (sometimes slowly, depending on the choice of the algorithm parameters) towards the final matched path; instead, we can terminate our algorithm after a number of iterations, when a good enough approximate match is found, and then post-process this result to obtain a feasible path. In other words, we compare the performance of a map match produced by a routing algorithm (such as Graphhopper) directly on the input data against the same algorithm applied to data points that have first been displaced towards specific roads by our force-directed algorithm. The GPS data was cleaned and divided into distinct trajectories in the same way as by the authors in [3]: when an anomaly is detected (defined as a speed of over 50 km/h), we look at the total duration of the anomaly. For anomalies under 42 sec we simply delete the incorrect GPS points; for anomalies between 42 sec and 8 min we delete the points and replace them with intermediate points based on linear interpolation; for anomalies over 8 min, or consecutive points more than 8 min apart, we assume that this is due to a break, implying the end of one trajectory and the start of a new one. We further removed trajectories with fewer than 10 points or totaling less than 8 min, as they are too short for useful map matching. Finally, we also excluded a small number of trajectories which lie outside our chosen reference grid of latitude (41.8001, 41.9859) and longitude (12.382189, 12.608782). This approach resulted in a total of 18,111 trajectories. In line with [26], we also excluded from the comparative evaluation trajectories totaling less than 300 meters in length and those for which the length index (defined in [26] as the ratio of the total length of the calculated trajectory to the total length of the original trajectory) is outside the range [0.8, 1.2]. Our investigation showed that a large difference in trajectory length is due to undocumented particularities of the map network, for example around the touristic Piazza di Spagna area, which in reality can be driven through by taxis but which is recorded on the map we used as a pedestrian-only area, forcing any map matching algorithm to take a long detour. Removing these trajectories resulted in a total of 11,154 trajectories, containing 7.1 million points and a total distance of 199,398 km driven over 16,753 hours. Figure 5 shows a histogram of the distribution of the lengths of the trajectories used in this analysis.

Figure 5: Distribution of trajectory lengths

The implementation of our algorithm was done in Java on a machine with 16 GB of RAM and four CPU cores. We compared our algorithm to the popular routing-based map matching algorithm Graphhopper. An example of the algorithm output is shown in Figure 6. The original data are shown in blue and the routing-based map match is shown in purple. When we apply the force-directed map matching algorithm, the trajectory is modified to the green trajectory, and the resulting map match is shown in black. It is quite evident that, even after a single iteration, the force-directed algorithm produces a better matching route. A second example, depicting a trajectory with a loop, is shown in Figure 7. We note that the routing-based map matching algorithm fails to detect the apparent loop in the trajectory, which is successfully identified once the trajectory points are modified by our force-directed algorithm.
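The cleaning rules just described can be sketched in a few lines of Python. This is a simplified, illustrative version under stated assumptions: planar positions in meters, timestamps in seconds, per-point rather than per-anomaly durations, and the interpolation step merely flagged in a comment. The thresholds are the ones given in the text.

import math

V_MAX = 50 / 3.6      # 50 km/h in m/s
T_SPLIT = 8 * 60      # over 8 min: break, start a new trajectory
T_INTERP = 42         # under 42 s: delete; 42 s - 8 min: interpolate

def clean(points):
    """points: list of (t, x, y). Returns a list of kept trajectories."""
    trajs, cur = [], [points[0]]
    for t, x, y in points[1:]:
        t0, x0, y0 = cur[-1]
        dt = (t - t0) or 1e-9
        speed = math.hypot(x - x0, y - y0) / dt
        if dt > T_SPLIT:                 # long gap: start a new trajectory
            trajs.append(cur); cur = [(t, x, y)]
        elif speed <= V_MAX:             # normal point
            cur.append((t, x, y))
        elif dt <= T_INTERP:             # short anomaly: drop the point
            continue
        else:                            # medium anomaly: in the paper the point
            cur.append((t, x, y))        # is replaced by linear interpolation
    trajs.append(cur)
    # discard trajectories that are too short for useful map matching
    return [tr for tr in trajs
            if len(tr) >= 10 and tr[-1][0] - tr[0][0] >= 8 * 60]

demo = [(i * 15, i * 100.0, 0.0) for i in range(50)]  # 15 s sampling, ~24 km/h
print(len(clean(demo)), "trajectory(ies) kept")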
RESULTS AND DISCUSSION

In order to evaluate the results of the proposed map matching algorithm, we used two metrics found in the literature for the comparative evaluation of map matching algorithms in the absence of ground truth. It is worth noting that all evaluation metrics without ground truth have some limitations, since there is no fool-proof method of comparing a map match produced by one algorithm with one produced by another. Nonetheless, these metrics measure elements that can reasonably be deemed to feature in bad matches, such as the path being too far away from, or too different in length to, the original trajectory, and they can therefore be used to assess the quality of map matching. We first used the method proposed by [26] to evaluate map matches for bicycle paths in Bologna: we calculate the length index IL, defined by dividing the length of the matched route R by the line-interpolated length of the GPS trace T:

IL = L(R) / L(T),

and assume that the closer this index is to 1, the better the match. In other words, it is assumed that a good map matching algorithm will produce a path with length similar to the length obtained from the GPS points. Although this is not necessarily true, it provides a good approximation by penalizing algorithms which produce paths that are too short or too long for the trajectory, for example paths containing many detours or omitting loops of the trajectory. The results of using this metric are shown in Table 2. We note that the force-directed algorithm results in an index which is closer to 1 and therefore produces a better match than the route-based map matching method. Distributions of the index according to the length of the original trajectory and the number of points in the trajectory are shown in Figures 8 and 9, respectively. We can observe that the force-directed algorithm performs consistently better, except for very short trajectories. There is little difference in the distribution of the length index according to the number of iterations used in the algorithm. The second evaluation metric is linked to the average absolute error of the calculated path with respect to the original GPS points; this method has been used in [21]. For each GPS point of the original trajectory, we define its distance to the matched path as the minimum of its distances to the line segments of the matched path. The distance of a point P to a segment AB is the perpendicular distance |PP′| if the projection P′ of P on the line AB lies between the endpoints of the segment, and

d(P, AB) = min(|PA|, |PB|)

otherwise. The average error is then defined as the average distance of each GPS point to the matched path:

e = (1/N) Σi d(Pi, R).

In other words, the average error can be considered as the average transverse distance, along the trajectory, between the trajectory and the matched route; a smaller error denotes a better match. The results using this method are shown in Table 2, and the distributions of this metric by the length and the number of points of the trajectory are shown in Figures 10 and 11. We note that the force-directed algorithm also produces better results under this evaluation method than traditional map matching, and the performance improves as the number of iterations of the force-directed algorithm increases. This trend continues until approximately 20 iterations, when the average error stabilizes and remains, on average, around 23% better than the routing-based method.
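Both metrics are simple enough to sketch directly. The following illustrative Python fragment (planar coordinates are assumed for simplicity) computes the length index IL and the average error, with the point-to-segment distance handled exactly as defined above via the projection P′.

import math

def seg_dist(p, a, b):
    """Distance from point p to segment ab: perpendicular distance if
    the projection P' falls inside the segment, else min(|PA|, |PB|)."""
    ax, ay = a; bx, by = b
    ex, ey = bx - ax, by - ay
    L2 = ex * ex + ey * ey
    if L2 == 0:
        return math.dist(p, a)
    t = ((p[0] - ax) * ex + (p[1] - ay) * ey) / L2
    if 0 <= t <= 1:                       # P' lies between A and B
        return math.dist(p, (ax + t * ex, ay + t * ey))
    return min(math.dist(p, a), math.dist(p, b))

def length(path):
    return sum(math.dist(u, v) for u, v in zip(path, path[1:]))

def length_index(matched, trace):
    """IL = length of matched route / line-interpolated trace length."""
    return length(matched) / length(trace)

def avg_error(trace, matched):
    """Mean distance of each GPS point to the nearest matched segment."""
    return sum(min(seg_dist(p, u, v) for u, v in zip(matched, matched[1:]))
               for p in trace) / len(trace)

trace = [(0, 1), (5, 1.5), (10, 0.5)]
matched = [(0, 0), (10, 0)]
print(length_index(matched, trace), avg_error(trace, matched))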
This suggests that a longer running time of the force-directed algorithm does not produce better results beyond this point, and that only a small number of iterations is needed to perturb the trajectory points sufficiently for a good, potentially optimal, match to be found. It is also worth noting that in 12% of the trajectories both algorithms produced the same matching path, reflecting our observation that for several trajectories there can be only one, or very few, plausible routes, and the task of finding a map match is easier. In terms of computational time, Table 2 and Figure 12 show the average computational time taken by each of the two algorithms, measured in seconds of elapsed clock time. We note that while the routing-based algorithm is able to transform one GPS trajectory into a sequence of roads in less than one second, the force-directed one takes significantly more time: on average 13.52 seconds, and up to 17 seconds for trajectories over 30 km. This is because of the large number of interactions that have to be taken into account during the calculation of the forces between the trajectory and the road network.

Figure 11: Distribution of the average error in meters by the number of points in the trajectory

The computational time of the force-directed algorithm increases linearly with the number of points in the trajectory. The slight decrease in computational time for trajectories over 30 km long is due to the small number of trajectories in this range and to the fact that many of these trajectories were on fast motorways, resulting in fewer GPS points than is typical for their length. The increase in computational time, to under 20 seconds per trajectory, poses no practical limitation, since the algorithm is designed for offline processing of trajectories, and it is an acceptable price to pay if it results in more accurate road matches. Under the two evaluation metrics considered, the proposed algorithm performs better in terms of the quality of the produced path than the baseline routing-based map matching algorithm, at the expense of increased computational time, although, as mentioned earlier, all evaluation metrics have limitations in the absence of ground truth data.

CONCLUSIONS AND FUTURE WORK

This paper presented a novel algorithm that can be used to match trajectories obtained from GPS receivers onto a known map network. The algorithm borrows techniques used in force-directed graph drawing in order to incrementally perturb the trajectory and make it converge onto a good, likely path, whilst at the same time ensuring that topological limitations such as one-way streets are satisfied. The interactions between the trajectory points and the underlying road network are modeled by a physical system evolving under the influence of physical-like forces, which were described in detail in this work. Numerical experimentation using real trajectories in a dense, urban road network demonstrates that the proposed method produces better map matching paths than routing-based map matching alone, providing a framework for the use of force-directed algorithms in map matching. Future work in this direction includes the evaluation of the algorithm using new data, including datasets which contain the ground truth, and the development of more reliable evaluation metrics, for example using a version of the Fréchet distance which can better measure the similarity of spatio-temporal trajectories.
We are also working to further explore the optimal values of the parameters of the algorithm, such as the optimal number of iterations. Equally, a comparison of the performance of the proposed approach with other relevant implementations, in particular [2,23,35,38] and commercial software [20], is under investigation. The performance of the algorithm in difficult configurations and uncommon layouts, such as fly-overs, vertically separated roads and multilane matching (features that are very uncommon in the Rome dataset used for this paper), remains to be evaluated. Finally, the use of the GPS temporal information (timestamps) to determine some of the parameters of the algorithm has the potential to further improve accuracy in the case of sparse data.
Perigraft seroma formation after Norwood–Sano procedure

Perigraft seroma after Norwood procedure with RV-PA shunt (yellow: shunt; red: seroma).

The patient developed pleural effusions and was noted to have a large mediastinal shadow on chest radiography. A computed tomography (CT) scan showed a large hypodense collection in the superior mediastinum (Figure 1), a patent RV-PA shunt and distal anastomosis, no filling defects within the pulmonary arteries (PAs), and mild narrowing of the proximal bilateral branch PAs. The patient was asymptomatic, so we opted to monitor the suspected seroma with serial imaging. He was discharged home on postoperative day 52. During outpatient follow-up, he was found to have an elevated gradient across the neoaortic arch along with newly depressed ventricular function on echocardiography. CT imaging revealed a size discrepancy between the native and reconstructed aorta (Video 1). We then proceeded with arch revision and seroma evacuation. Intraoperatively, we found a large collection of proteinaceous material surrounding the RV-PA shunt. He again progressed well postoperatively, except for some persistent tachypnea. A repeat CT scan showed reaccumulation of the seroma, a widely patent RV-PA shunt and distal anastomosis, and unobstructed central branch PAs (Figure E1). His respiratory status stabilized, so we decided to address the seroma during his stage 2 palliative procedure (bidirectional Glenn) at 4 months of age. Intraoperatively, another well-organized proteinaceous mass was found around the RV-PA conduit. He tolerated the procedure well, recovered fully, and was discharged home after 1 month. A repeat CT scan a few months later showed a persistent anterior mediastinal fluid collection, albeit much smaller than before surgery, and it continued to decrease in size over time on serial imaging. A 2.32-kg neonate born at 36 weeks with HLHS also underwent the Norwood procedure, with the same 5-mm ringed RV-PA conduit as above, on day of life 5. Postoperatively, she was weaned off inotropes with stable hemodynamics. She required chest tube placement for a right pleural effusion associated with respiratory insufficiency. There was concern for a mediastinal seroma on chest radiography, which was confirmed with CT imaging (Figure 2). The CT also showed a widely patent RV-PA shunt and distal anastomosis, no filling defects within the PAs, and small main and branch PAs (no formal z scores reported) (Video 2, Figure E2).
On postoperative day 22, she underwent mediastinal exploration. Intraoperatively, a very large amount of serous fluid and proteinaceous material was noted in the mediastinum and right pleural space. Her postoperative course was complicated by high chest tube output and a left pleural effusion requiring chest tube placement. She ultimately recovered and was discharged home at 2 months of age. On follow-up CT scans, there was no residual mediastinal seroma or pleural effusion.

DISCUSSION

Perigraft seromas are suspected to be the result of plasma ultrafiltration through the polytetrafluoroethylene graft, but it is not clear why this happens. Differences in oncotic or hydrostatic pressure may be to blame, and elevated pulmonary vascular resistance could contribute. Our patients had either "mild narrowing" or "small" bilateral branch PAs, which is not uncommon in patients with HLHS. The low birth weight and early gestational age of the second patient increased her risk for morbidity and mortality following stage I palliation, but not specifically for elevated pulmonary vascular resistance. We did not identify any other pre- or postoperative variables in either case that might have contributed to seroma formation. We routinely used 5-mm grafts for these shunts regardless of weight until recently, when, for other reasons, we selectively started using 6-mm grafts in certain patients weighing more than 2.5 kg. The cohort remains too small, however, to draw any conclusions about the effect of graft size on seroma formation. These cases illustrate the rarity of these seromas and may give some insight into how management can be tailored to the patient, as both immediate and delayed surgical interventions were employed, with favorable initial outcomes in both cases.
Characterization analysis of raw and pyrolyzed plane tree seed (Platanus orientalis L.) samples for application in carbon capture and storage (CCS) technology

Raw and pyrolyzed samples of plane tree seeds (PTS) were examined by a range of advanced analytical techniques, including the simultaneous TG-DSC technique, FTIR analysis, X-ray diffraction (XRD) analysis, Raman spectroscopy, GC-MS (gas chromatography-mass spectrometry) analysis and scanning electron microscopy, in order to characterize the material and the pre-treatments required for its possible application in CCS. The nondestructive XRD analysis showed that the raw material is a typical carbon-rich material, in which an increase in interlayer spacing within the graphite structure was identified. The XRD results for the sample pyrolyzed at 850 °C showed a sudden loss in interlayer spacing. Spectroscopic analyses of the pyrolyzed sample demonstrated the presence of the aromatic structures typical of amorphous carbon. The results indicate a high degree of growth of the basal planes of the graphite structure in the pyrolyzed sample. It was established that the integrated reaction model parameters for the pyrolysis of the untreated PTS sample realistically describe the active temperature period required for charcoal formation under non-isothermal conditions. It was found that mechanical treatment of the material results in an increase in the number of detected chemical compounds. Micrographs showed the presence of a variety of shapes and structures; after pyrolysis, some dissipated pores were detected, partially blocked in places depending on the size of the surface area. The results showed that the resulting char has very good properties for a further activation process, and that PTS is a good candidate for application in CCS.

Introduction

Carbon dioxide (CO2) emissions from power plants and factories are among the biggest contributors to global warming. Technologies for capturing CO2 from gas streams have been used for many years to produce a pure stream of CO2 from natural gas or industrial processing, for use in the food processing and chemical industries. Both post-combustion CO2 capture [1,2] and oxy-combustion [3] technologies provide retrofit options for existing coal-fired power plants. Methods currently used or under development for CO2 separation include, but are not limited to: physical and chemical solvents, particularly monoethanolamine (MEA) [4,5]; various types of membranes [6,7]; adsorption onto solids [4,8,9]; cryogenic separation [10]; and other novel technologies, including ionic liquids [11], nanoparticle organic hybrid materials [12] and chemical looping sorbents [13,14]. These methods can be applied to a range of industrial processes; however, their use for removing CO2 from high-volume, low-CO2-concentration flue gases, such as those produced by coal-fired power plants, is more problematic. The high capital cost of installing post-combustion separation systems to process the large volume of flue gas is a major impediment to post-combustion capture of CO2. The significance of CO2 emission reduction has also been discussed at the global level, notably at the 2015 United Nations Climate Change Conference, COP 21, held in Paris [15]. The goal articulated at the meeting was that carbon dioxide emissions should reach zero by 2070 in order to prevent climate disasters.
In December, after two decades of tense climate negotiations, representatives of nearly 200 countries reached a milestone on the road to that goal. Under the Paris Agreement, which clearly signals the transition from fossil fuels to renewable energy sources, governments are responsible for urgent greenhouse gas emission targets. The meeting was commended for the long-term goal of achieving net-zero emissions in the second half of the century. A number of integrated carbon capture technologies have been designed and built around existing or new energy conversion systems. Past experience shows that a proposed power plant with carbon capture would have substantially lower greenhouse gas emissions, and somewhat higher emissions of acid precursors, than a gas-fired power plant without carbon dioxide capture and storage (CCS). Carbonaceous materials such as activated carbons are attractive as CO2 adsorbents [16,17] due to their wide availability, high thermal stability, low cost, good chemical resistance, ease of preparation and control of pore structure, low regeneration energy and low sensitivity to water owing to their hydrophobicity. Activated carbons in the form of granules, extrudates, powders, fibers or beads can be produced from suitable thermosetting precursors by either thermal or chemical routes. Common commercial feedstocks include biomass materials such as wood, coconut shell and fruit pits, and fossilized plant matter such as peats, lignites and all ranks of coal. Various precursor materials have been used to obtain porous carbons for CO2 capture [18-21]. The main objective of this work is the application of plane tree seeds (Platanus orientalis L.) as a biomass source for producing activated carbons (AC), a promising route towards their use in CCS. Forests and other green areas, as photosynthesizing living systems, absorb free CO2 from the atmosphere, fix it in more stable complex compounds and contribute to its long-term storage. For this reason, one of the most important strategies against global warming and climate change in developed countries is to store CO2 in the forest ecosystem (plants, dead cover and soil). This strategy is generally described as carbon sequestration. Urban trees and urban forests have an important role in CO2 sequestration in city centers [22]. Two tree species, Platanus orientalis L. and black poplar (Populus nigra), store a very high mass of carbon relative to their frequency in the population, each storing 4% of the total carbon stored while representing only 0.1 and 0.3% of the tree population, respectively [23]. Urban trees reduce atmospheric carbon dioxide (CO2) in two ways: directly (trees absorb and sequester CO2) and indirectly (trees lower the demand for heating and air conditioning, thereby avoiding the emissions associated with electric power generation and natural gas consumption). The aim of this paper is to select the experimental techniques for the characterization analysis of the raw and pyrolyzed samples of plane tree seeds (PTS, Platanus orientalis L.) and to assess the results, with the goal of obtaining a potential solid sorbent for the sorption of harmful gases such as CO2.
The paper provides insight into the properties of the product pyrolyzed at high temperature, which would be used as a gas adsorber; the applied experimental techniques serve to characterize the material, obtained primarily by pyrolysis of the 'fruit' parts of the Platanus orientalis L. tree. This material was selected because we estimated that the pyrolyzed product would have good predispositions for use in CCS, and because literature data on this biomass precursor as a source of carbon black are very scarce. The following techniques were used in this paper to characterize the crude precursor and the pyrolyzed product:

(a) A static pyrolysis process (conducted at a high, fixed operating temperature with a static heating regime of the precursor material) in a horizontal tube reactor, performed in order to obtain larger quantities of carbon for commercial purposes and for the further analyses required by the other experimental techniques.

(b) Pyrolysis carried out in the TG-DSC device, for thermal testing of the sample at the laboratory scale, in order to identify the transformations that the material undergoes during the process in an inert atmosphere and to detect the various heat phenomena. (The purpose of this measurement was not to examine the influence of various experimental factors on the studied process, such as the particle size of the sample, the heating rate, the flow rate of the carrier gas, the initial mass of the sample or the partial pressure of water vapor.) The TG technique has also frequently been used for the evaluation of solid sorbents for CO2 capture [24,25].

(c) Fourier transform infrared (FTIR) spectroscopy, used to obtain information about the active chemical functional groups and to gain insight into how CO2 could interact with them.

(d) Raman spectroscopy of the pyrolyzed sample, used primarily to characterize the char structure, which is fundamentally important for the clean utilization of the applied biomass precursor.

(e) X-ray diffraction (XRD) analysis, used to obtain information about the spatial structures of the raw material and of the char obtained at elevated temperatures.

(f) GC-MS [gas chromatography (GC)-mass spectrometry (MS)] analysis, used to characterize the organic compounds contained in the raw material and the volatile pyrolysis products, since most thermal degradation results from free-radical reactions initiated by bond breaking and depends on the relative strengths of the bonds that hold the molecules together. This analysis provides information on whether the organic matter contains chemical families capable of trapping CO2 molecules.

(g) Scanning electron microscopy (SEM), used to evaluate the structural variations in the char particles after thermal treatment.

It should be noted that the present work also describes the possibility of utilizing the plane tree seed (PTS) by-product as a biofuel by producing char via pyrolysis. The presented results should therefore be viewed from two perspectives: the obtained char can be used as a solid fuel or as a precursor in activated carbon production.

Material

The material used in our study was harvested from single plane trees (Platanus orientalis L.)
growing in Belgrade parks (Serbia) in September. The plane tree seed head diameter was about 2-3 cm. Achenes with their thin bristle fibers were used for the experimental work; the achenes were up to 1 cm long and about 1 mm thick, while the bristles were shorter and several times thinner. Regarding the number of replicates, technical replicate measurements were taken. In all cases, comparing the data from the three replicate measurements of each type to assess the statistical significance of differences in the signal responses of the tested sample, we found p > 0.05 (assuming a Gaussian distribution), indicating that the replicates are not significantly different and confirming the reproducibility of the measurements.

Pyrolysis in a horizontal tube reactor

The raw plane tree seed (PTS) material was first peeled and washed with water to remove dirt, and then dried at T = 80 °C for 24 h in an oven (Carbolite Gero GmbH & Co. KG, Hesselbachstraße 15, 75242 Neuhausen, Germany) to reduce the moisture content. The dried PTS was then ground (Planetary Micro Mill Pulverisette 7, Fritsch, Industriestrasse 8, 55743 Idar-Oberstein, Germany) and sieved (Vibratory Sieve Shaker AS 200 basic (0.5-1.5 mesh), Retsch, Retsch-Allee 1-5, 42781 Haan, Germany). Particle size fractions in the range of 0.5-1.5 mm were used for the pyrolysis experiment. The pyrolysis was carried out in a stainless steel horizontal tube reactor with open plate pellets (Protherm Furnaces, model PTF 16/38/250, Turkey). About 20 g of material was placed in the reactor. During the pyrolysis process, purified nitrogen (N2) at a flow rate of 500 cm³ min⁻¹ was used as the purge gas. The reactor temperature was increased from room temperature up to the desired operating (static) temperature of 850 °C; once the working temperature reached the desired value, it was held for 1 h. At the end of pyrolysis, the N2 flow in the reactor was maintained during cooling to room temperature. The heating rate was constant at β = 4 °C min⁻¹.

Proximate, ultimate and chemical composition analyses

The ultimate analysis was carried out on a LECO elemental analyzer, model CHN 628 (LECO Corporation, St. Joseph, Michigan, USA). The instrument range is from 0.02 mg up to 175 mg for carbon, from 0.1 mg up to 12 mg for hydrogen and from 0.04 mg up to 50 mg for nitrogen. The precision ranges of the instrument are as follows: carbon (0.01 mg), hydrogen (0.05 mg) and nitrogen (0.02 mg). Helium (99.995%) was used as the carrier gas, while samples were combusted in a stream of pure oxygen (99.995%). Prior to each measurement, the calibration was verified with a certified reference material, ethylenediaminetetraacetic acid (EDTA). Both calorific and elemental analyses were performed in a certified laboratory, in compliance with the requirements of ASTM D5373 [26]. Determination of the higher heating value (HHV) was carried out on an IKA model C200 device (IKA®-Werke GmbH & Co. KG, Janke & Kunkel-Str. 10, 79219 Staufen, Germany). Samples were prepared by grinding to a particle size below 200 μm. The moisture content of the sample is equal to the equilibrium moisture, i.e., the sample was brought into equilibrium with the moisture content of the laboratory atmosphere. The moisture content was determined before each measurement in the calorimeter. Measurements were taken in isoperibol mode.
For the starting procedure, ignition floss of known calorific value was used, and the unit automatically subtracts its contribution from the measured values. One end of the ignition wire is connected to the carrier through which the ignition current passes, and the other end is immersed in the sample. In order to achieve complete combustion, the bomb calorimeter is filled with oxygen (99.5%) to a pressure of 30 bar. Before each measurement, the device was checked with the help of a certified tablet sample of benzoic acid of known calorific value. Measurements were taken in duplicate, meeting the repeatability criteria prescribed in ISO 1928 [27]. The lower heating value (LHV) was calculated from the measured HHV and the elemental composition of the fuel, according to ISO 1928 [27]. The sample mass was about 0.5 g. After sieving, in accordance with the TAPPI standard method (T 257 cm-12) [28], raw material with a particle dimension of 0.5-1 mm was taken for the compositional analysis. The moisture content of the samples was determined gravimetrically according to TAPPI standard method T 264 cm-97 [29]. The cellulose content of the samples was determined by the Kürschner-Hoffer method [30]. The lignin content after extraction (toluene-ethanol) was determined by the Klason method [31], with spectrophotometric determination (Specord® Plus UV/Vis Spectrophotometer, Analytik Jena AG, Analytical Instrumentation, Konrad-Zuse-Str. 1, 07745 Jena, Germany) of acid-soluble lignin (based on the absorption of ultraviolet radiation, most often at a wavelength of 205 nm) according to TAPPI method T UM 250 [32]. For the determination of extractives soluble in organic solvents, the TAPPI standard method T 264 cm-97 was used; in this case, a mixture of toluene and ethanol (C6H5CH3/C2H5OH = 2/1, v/v) was used. The content of extractives soluble in hot water was determined according to TAPPI standard method T 207 cm-99 [33], and the mineral content, through the ash, according to the standard method ASTM D1102-84 [34]. The hemicellulose content was estimated approximately, as the complement of the other determined components to 100%. The results are expressed relative to the absolute dry weight of the studied material and presented as the arithmetic mean of four repeated measurements.

Pyrolysis of the raw material in a commercial device for thermal analysis

The pyrolysis of the raw sample was carried out on a simultaneous TG-DSC device for thermal analysis (SETSYS Evolution, Setaram, France). A single measurement at a heating rate of 10 °C min⁻¹ was taken in alumina (100 μL) crucibles under an inert atmosphere [nitrogen (N2), with a carrier gas flow of 20 mL min⁻¹]. The heating rate was chosen to be neither too low (2 or 5 °C min⁻¹) nor too high (beyond 20 °C min⁻¹), in order to avoid thermal lag effects, which may arise from factors such as the characteristics of the experiments and changes in the Arrhenius parameters associated with the experimental data [35]. It was found that a heating rate of β = 10 °C min⁻¹ is optimal for performing a thermo-analytical test on the studied system.
On the other hand, no recordings were made at multiple heating rates, either for the purpose of examining the various experimental factors associated with the sample itself (such as particle size and variation in the initial sample mass) or for a kinetic analysis of the pyrolysis process. Before introducing the inert atmosphere, the working chamber was evacuated to 10⁻² mbar (1 Pa). The TG resolution was 0.1 μg. Both TG and DSC measurements were taken in a single recording with an operative sample mass of 5.0 ± 0.1 mg, over the experimental temperature range ΔT = 40-810 °C. All TG and DSC results in this study were baseline-corrected using runs with an empty alumina crucible under the same conditions, to eliminate systematic experimental errors.

FTIR analysis

The surface functional groups and structure of the raw material were studied by Fourier transform infrared spectroscopy (FTIR). The FTIR spectrum of the raw material was collected using a PerkinElmer Spectrum Two FTIR spectrometer (PerkinElmer, Inc., Waltham, Massachusetts, USA) in transmission mode. The sample was prepared using the pressed KBr pellet (1:100) technique. The spectrum was recorded in the range from 4000 to 400 cm⁻¹ at a resolution of 4 cm⁻¹.

X-ray diffraction (XRD) analysis

The raw and pyrolyzed samples were characterized by X-ray powder diffraction (XRPD) analysis using an Ultima IV Rigaku diffractometer (Rigaku Corporation, 3-9-12 Matsubara-cho, Akishima-shi, Tokyo 196-8666, Japan), equipped with a Cu Kα1,2 radiation source (generator voltage 40.0 kV, generator current 40.0 mA). All samples were recorded in the range of 5-80° 2θ, with a scanning step size of 0.02° and a scan rate of 2° min⁻¹.

Raman spectroscopy analysis of the sample after the pyrolysis process

The Raman spectrum of the pyrolyzed sample was collected on a DXR Raman microscope (Thermo Scientific, USA) equipped with an Olympus optical microscope and a CCD detector, with a diode-pumped solid-state high-brightness laser (532 nm) and a 10× objective. After cooling, the pyrolyzed sample was placed on an X-Y motorized sample stage. The analysis of the scattered light was carried out by the spectrograph with a 900 lines mm⁻¹ grating. The laser power was 1 mW. The Raman spectrum in the range between 800 and 1800 cm⁻¹ was deconvoluted using the commercial "Peak-FIT" program. A linear baseline and Gaussian band shapes (Gaussian Amp function) were used to describe the individual Raman bands.

GC-MS (gas chromatography-mass spectrometry) analysis

The GC-MS analysis was performed on the raw and pyrolyzed plane tree seed (PTS) samples. The PTS samples were delivered in a plastic bag. About 1 g of sample was weighed and transferred into a glass vial for extraction. Extraction was carried out in an ultrasonic bath for 30 min at room temperature with the addition of 5 mL of methanol as the extraction solvent. Analytes dissolved in methanol were separated from the sludge by passing the mixture through a membrane filter. Then 1 mL of the filtrate was taken for analysis by the GC-MS method.
The qualitative analysis of the compounds in the tested sample was performed with a TRACE™ 1300 gas chromatograph (Thermo Fisher Scientific Co., Waltham, Massachusetts, USA), which separates the different analytes in a mixture on a DB5/HP5-type column, 30 m × 0.25 mm i.d., 0.25 μm film (Thermo Scientific™ TraceGOLD TG-5MS, Thermo Fisher Scientific Co., Waltham, Massachusetts, USA). The column temperature program started at an initial temperature of 40 °C, held for 4 min; the temperature was then increased to 280 °C at 10 °C min⁻¹ and held at 280 °C until 42 min. The injected volume was 3 μL, using an AI 1310 autosampler (Thermo Scientific™ AI/AS 1310 Series Autosampler, Thermo Fisher Scientific Co., Waltham, Massachusetts, USA). Detection was carried out with a Thermo Scientific ISQ LT single quadrupole mass detector (ISQ™ Series, Thermo Fisher Scientific Co., Waltham, Massachusetts, USA) over a 1.2-1100 u mass range. Data processing was performed with Xcalibur 2.2 SP1 software, which includes a library search. Spectroscopic methods, as well as chemical analyses such as GC-MS, generate huge amounts of data, and the number of observations is far smaller than the number of variables, where the observations correspond to the samples and the variables correspond to (for example) all of the data points in their spectra. Multivariate data analysis (MDA) makes it possible to extract meaningful information from such data. Principal component analysis (PCA) can be used to obtain an overview of the data, determine which observations deviate from the others, and analyze the relationships among observations. The groupings identified in an initial PCA can be used to classify new samples and identify samples that do not fit into the established groupings, which may merit further investigation.

SEM analysis

The morphology of the raw and pyrolyzed samples was observed using a JEOL JSM-5800 scanning electron microscope (SEM) (JEOL, Ltd., Akishima, Tokyo, Japan). The tested sample was placed on a double-sided adhesive carbon strip (the pulverized sample was deposited with tweezers on one sticky side), which was then mounted, with the sample-free side down, on the sample holder of the device, after which the corresponding measurement was carried out.

The integral reaction heats correlated with experimental temperatures during pyrolysis

The biomass pyrolysis mass loss can be classified into three stages: moisture removal in the first stage; removal of the organic content, composed of different kinds of hemicelluloses and cellulose, in the second stage; and decomposition of the fixed carbon content in the last stage. The heat released or required, ΔQ (J), from the basic temperature T* up to any experimental temperature T can be expressed as [36]:

ΔQ = ∫ q dt (integrated from t* to t),

where q is the original heat flow at time t measured by DSC. The corresponding integral reaction heat H (J g⁻¹) up to any experimental temperature is

H = (Q - Q*) / (m* - m),

where Q and Q* are the heat evolved at the actual temperature T and time t, and the heat evolved at the start time (t*) of decomposition, respectively; m and m* are the residual and the basic sample masses at the temperatures T and T*, respectively. The DSC signal is influenced by the baseline excursion of the apparatus, so it should be corrected in advance. The heat Hr (J g⁻¹) released during biomass pyrolysis can be presented as Hr = (Q - Q*)/m.
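A minimal numerical sketch of these relations in Python: the cumulative heat ΔQ is obtained by trapezoidal integration of the baseline-corrected DSC heat flow q(t), and H follows from the heat evolved and the mass lost since the onset of decomposition (the normalization by mass lost is our reading of the expression above). The arrays below are synthetic placeholders, not measured data.

import numpy as np

# synthetic, baseline-corrected DSC/TG record (placeholders, not data)
t = np.linspace(0, 4600, 200)                    # time, s
q = 5e-3 * np.exp(-((t - 1800) / 400) ** 2)      # heat flow, W (exothermic peak)
m = 5.0 - 3.5 * (t / t[-1])                      # residual mass, mg

def reaction_heat(t, q, m, i_star=0):
    """dQ(t) = integral of q dt from t*;  H = (Q - Q*) / (m* - m),
    i.e., heat evolved per unit mass lost since decomposition onset."""
    Q = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (q[1:] + q[:-1]))))
    dQ = Q - Q[i_star]
    dm = m[i_star] - m                           # mass lost, mg
    with np.errstate(divide="ignore", invalid="ignore"):
        H = np.where(dm > 0, dQ / (dm * 1e-3), np.nan)  # J per g of mass lost
    return dQ, H

dQ, H = reaction_heat(t, q, m, i_star=10)
print(f"total heat: {dQ[-1]:.2f} J, final H: {H[-1]:.0f} J/g")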
Further, in order to correlate the integral reaction heats with temperature, it was assumed that the monitored reaction is a function of the reaction temperature and of appropriate numerically derived parameters, which connect, from a physical standpoint, the reaction heat with the temperature of the given reaction. This method requires setting an appropriate interpolation equation for each thermal transformation.

Results and discussion

The ultimate/proximate results

Table 1 presents the results of the ultimate and proximate analyses of the plane tree seed (Platanus orientalis L.) samples, together with the HHV and LHV values. The findings show that the wood precursor has acceptable heating values compared with wood and its products [37], and considerably higher heating values than agricultural wastes [38]. The observed raw material also contains high contents of carbon and hydrogen (Table 1). In addition, the ash content is not large compared with the palm of Phoenix dactylifera L. [39], palm leaflets [38] or sugarcane [40]. However, the resulting ash content is not negligible, bearing in mind that high-ash biomass can function as a precursor for high-porosity carbons, which can be obtained from wood materials. It can also be observed that the tested sample is carbon-rich (C = 47.760%, Table 1). In addition, the combination of a rather high oxygen content (O = 40.329%, Table 1), which is larger than that of Platanus orientalis leaves (O = 33.700% [41]), and a high organic volatile matter content in the biomass sample may indicate the potential for creating large amounts of inorganic vapors in combustion processes. The oxygen content of the starting material is also essential for the subsequent activation (by the CO2 sorption process) of the carbons obtained by pyrolysis, because during activation the oxygen fraction may decrease as the C-CO2 reaction intensifies. Fairly high values of HHV and LHV were identified; the obtained values are very close to those of Macadamia nut shells (HHV = 20,714 kJ kg⁻¹) and cypress wood chips (LHV = 18,727 kJ kg⁻¹), which are also rich in carbon, but lower than those identified for bituminous coal (HHV = 27,061 kJ kg⁻¹ and LHV = 24,856 kJ kg⁻¹) [42]. The cellulose content (cellulose being the basic building block of plant cell walls) amounts to 33.790% (Table 1), a low value, probably because the material is the 'fruit' of the plant. For the oriental plane tree (Platanus orientalis L.), Rowell et al. [43] report a cellulose content of 44.0%, without specifying whether this value refers to the 'fruit' of the plant. In contrast, the total lignin content of 25.890% (Table 1) is slightly higher than the Klason lignin content of 21.000% reported for the oriental plane tree. The contents of extractives soluble in hot water and in the toluene/ethanol mixture (9.680 and 7.120%, respectively; Table 1) are high, probably because the tested material comes from the physiologically active parts of the plant. Figure 1 shows the FTIR spectrum of the Platanus orientalis L. sample, with all characteristic vibrational bands indicated. The broad absorption peak at ~3420 cm⁻¹ observed in the spectrum is due to the hydrogen-bonded O-H stretching vibration of hydroxyl groups. Theoretically, the hydroxyl band should weaken as the dehydration of the monosaccharides present in the wood material proceeds during thermal conversion.
The sharp absorption bands at ~2926 cm⁻¹ and ~2852 cm⁻¹ are attributed to the asymmetric and symmetric stretching vibrations of methylene (C-H) groups. However, this band is much weaker than the same band in annual fiber crops. The band at ~1640 cm⁻¹ in the Platanus orientalis L. sample (Fig. 1) is a good indicator for estimating the changes produced in hardwood and softwood, in either the fresh or the dried state. The occurrence of this band indicates that the δ(H-O-H) bending vibration arises from water molecules. This band also supports the existence of C=C vibrations, which accompany the aromatization of the sugar structure. In addition, the observed peak at ~1243 cm⁻¹ (Fig. 1) may be attributed to vibrations of guaiacyl rings and to stretching vibrations of C-O bonds (observed in softwoods, such as Platanus orientalis L.) [44]. The clearly observable band at ~1115 cm⁻¹ can be attributed to the aromatic C-H in-plane deformation (typical of syringyl units) and also to secondary alcohols or the C=O stretch typical of wood species. The band caused by O-H out-of-plane bending vibrations is located at ~674 cm⁻¹ (Fig. 1) [45]. Figure 2 shows the XRD patterns of the raw sample (a) (the powder-like sample pulverized to a particle size of r_m = 1200 μm) and the pyrolyzed sample (b), where pyrolysis was monitored at the operational temperature of 850°C. In both cases (Fig. 2a, b), the broad peak at around 2θ = 24° is an indication of an amorphous phase. On the other hand, for the raw sample (Fig. 2a), several unindexed small peaks are superimposed on the broad peak and continue to appear up to the end of the diffractogram. These peaks presumably correspond to mineral oxides (such as those of K, Mg, Pb and Ca) [46]. Comparing the raw-sample diffractogram (Fig. 2a) with the FTIR spectrum (Fig. 1), the "graphite spectrum" contains the broad peak centered at ~1640 cm⁻¹ attached to water molecules, and this feature apparently does not exist in the sample pyrolyzed at high temperature, since the diffraction patterns are clearly not identical. Moreover, for the sample pyrolyzed at 850°C, the diffraction peak at around 2θ = 10° has disappeared, because after pyrolysis there is a sudden loss of interlayer spacing. This probably indicates the collapse of the structures into graphite [47]. The pyrolysis temperature strongly affects the changes in the carbon interlayer spaces and also influences the organization/disorganization of the entire structure. In addition, for the pyrolyzed sample (Fig. 2b), the broad peak at around 2θ = 44.5° can be assigned to the turbostratic band of disordered carbon material (100 direction) [48]. However, the peaks are not as sharp as those of pure carbon [49], and these results are direct evidence for the presence of carbon in all considered samples. TG-DSC results From the TG curve (Fig. 3), it can be seen that the mass loss of the tested sample occurs mainly in three stages: 40-193°C (mass loss Δm_I = 2.06%), 193-357.13°C (mass loss Δm_II = 47.45%) and 357.13-800°C (mass loss Δm_III = 27.41%). On the other hand, from the DSC curve we identified one endothermic effect at T_endo = 122.16°C and two exothermic effects at T_exo1 = 325.03°C and T_exo2 = 655.29°C, respectively.
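As a minimal, purely illustrative sketch of how such stage-wise mass losses can be read off a TG curve, the code below assumes the signal is available as NumPy arrays of temperature and residual mass percentage; the boundary temperatures are those reported above, and the array names are placeholders rather than the authors' data files.

```python
import numpy as np

def stage_mass_losses(T, mass_pct, boundaries=(40.0, 193.0, 357.13, 800.0)):
    """Per-stage mass loss (% of initial mass) between the given stage
    boundary temperatures.

    T        : temperature points in deg C, monotonically increasing
    mass_pct : residual mass in % of the initial sample mass
    """
    losses = []
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        m_lo = np.interp(lo, T, mass_pct)   # residual mass at stage start
        m_hi = np.interp(hi, T, mass_pct)   # residual mass at stage end
        losses.append(m_lo - m_hi)
    return losses   # for the curve discussed here: ~[2.06, 47.45, 27.41]
```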
The first stage, associated with the T_endo value and characterized by a mass loss of 2.06%, is due to the loss of water physically bound to the plane tree seed and to the desorption of strongly bound water inside the wood precursor fiber. (This applies particularly to temperatures above 100°C, where evaporation of water occurs, accompanied by heat absorption from the system.) The second stage, associated with the T_exo1 value and characterized by a mass loss of 47.45%, can be attributed to the devolatilization of the organic contents present (usually including all biomass pseudo-components, through chain-scission reactions and the breaking of some carboxyl and carbonyl bonds, etc., in ring structures, evolving H₂O, CO and CO₂). The exothermic peak at 325.03°C is typical of hemicellulose decomposition in wood species [50,51]. Lignin decomposition occurs over a much broader temperature interval of 250-550°C, although minor decomposition can already be observed at lower temperatures [52]. Consistent with this, lignin does not present a specific maximum decomposition rate and therefore cannot be identified within a particular temperature range. Part of the acids present evaporates as steam in the second stage. In addition, the exothermic phenomena begin above 270°C, so that external heating is no longer needed, because the exothermic reaction generates heat. It can be pointed out that the decomposition of cellulose occurs in the temperature interval of 305-375°C and is also exothermic [51]. The share of hydrocarbons in the volatiles increases, especially the proportion of methane, while the share of some gases such as carbon monoxide (CO) decreases. Hydrogen (H₂) also starts to evolve, while the amount of tar produced depends substantially on the lignin and cellulose decomposition in this stage of the process. The most interesting stage for us is the third, which takes place in the temperature range of 357.13-800°C with a mass loss of 27.41%. This high-temperature region corresponds to polycondensation, molecular rearrangement and the formation of the carbon structure. Our results agree well with results reported in the literature showing that a multi-stage pyrolysis process can save 30% of energy and processing time by using a first target temperature of about 325°C (see the results above) and heating rates in the range of 5-10°C min⁻¹ [53]. This corresponds closely to the mass loss of 27.41% in the third stage [53]. (Fig. 3: simultaneous TG-DSC curves for the non-isothermal pyrolysis of the untreated plane tree seed sample (Platanus orientalis L.) at a heating rate of 10°C min⁻¹.) The black porous solid that remains at the end of the pyrolytic process (at the very end of the third stage; Fig. 3) comprises mainly elemental carbon. It should be noted that at the identified temperature of 655.29°C, where a strongly exothermic effect occurs (Fig. 3), the transformation of the wood precursor to charcoal reaches its maximum and is then essentially established. The charcoal at this temperature may still contain appreciable amounts of tar, perhaps 30% by weight, trapped in the structure. This soft-burned charcoal needs further heating to drive off more of the tar and thus raise its fixed carbon content to a percentage acceptable for good commercial quality. To drive off this tar, the charcoal is subjected to further heat input to raise its temperature above T_sample = 600°C, thus completing the carbonization stage.
Therefore, this phase and T_exo2 can serve as reference points for our further study of the pyrolytic behavior in the process carried out in a horizontal tube reactor. The integrated reaction model parameters for the pyrolysis of the untreated plane tree seed sample (Platanus orientalis L.) are shown in Table 2. It should be noticed that the integrated reaction heats expressed through the numerically derived dependencies had very high adjusted R-square (R²) values, especially for the Exo2 stage (0.99914) (Table 2). Based on the established R² values, we can conclude that the integrated heat model describes the real situation very well; additional information can be derived from the parameter w_o, which is part of a complex parameter, c_o (Table 2). Namely, the parameter w_o, which equals 133.15°C, describes the active temperature period required for charcoal formation under the programmed heating mode. The charcoal produced at 655°C and above is stable, and in accordance with this established fact, we can proceed with further analysis. However, increasing the charcoal yield requires minimizing the carbon losses in the form of gases and liquids and promoting the desired pathways, such as primary solid-phase dehydration, decarboxylation and decarbonylation reactions, as well as the secondary conversion of the pyrolysis vapors to solids. Raman spectroscopy results of the pyrolyzed sample at 850°C Figure 4 shows the Raman spectrum of the sample pyrolyzed at 850°C, where, in accordance with the literature, the baseline-corrected Raman curve from 800 to 1800 cm⁻¹ can be peak-fitted with a total of 7 Gaussian bands. Band intensity or area ratios are usually applied to describe the structural evolution. In this study, the area ratios of some major Raman bands were used to investigate the following properties: (i) A_G/A_All, the band-area ratio between the G band and all 7 bands, was used to describe the extent of aromatization, and (ii) A_(VR+VL+GR)/A_D, the band-area ratio of the sum of the V_R, V_L and G_R bands to the D band, was used to describe the relative content of smaller aromatic rings compared with larger ones [54]. Figure 4 shows that the Raman bands R, S_R and G_L are not present in the spectrum; these missing bands correspond to the C-H bond belonging to aromatic rings (primarily the R and S_R bands) and to sp² hybridization at the C=O active functional group (the G_L band) [37]. The G band at 1605.45 cm⁻¹ corresponds to aromatic ring quadrant breathing [37]. The D band at 1299.48 cm⁻¹ refers primarily to medium-to-large aromatic structures (systems with six or more rings) [55]. The area between the G and D bands was curve-fitted with 3 bands: G_R (1546.36 cm⁻¹), V_L (1455.80 cm⁻¹) and V_R (1366.20 cm⁻¹). These three bands correspond to aromatic structures found in amorphous carbon (relatively smaller aromatics with 3-5 fused rings) [56]. The maximum peak intensities are shown by the G and V_R bands, prolonged by the D band, which is characterized by a greater width and slightly lower intensity. The width of this band is smaller only if a nanographite structure is present at higher temperature; however, the latter is not present in the obtained product. It should be pointed out that the G band at 1605.45 cm⁻¹ appears clearly in graphite. The G band is a doubly degenerate (iTO and LO) phonon mode (E₂g symmetry) at the BZ (Brillouin zone) center that is Raman active for sp² carbon networks.
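A minimal sketch of such a seven-Gaussian deconvolution and of the two area ratios defined above is given below. The starting centres for the D, V_R, V_L, G_R and G bands are those quoted in the text; the remaining two centres, the starting widths and the data arrays are placeholders (assumptions), not the authors' actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-((x - cen) ** 2) / (2.0 * wid ** 2))

def fit_seven_bands(shift, intensity, centres0):
    """Fit a baseline-corrected Raman curve (800-1800 cm^-1) with seven
    Gaussians and return each band area keyed by its starting centre."""
    def model(x, *p):                    # p = (amp, cen, wid) for 7 bands
        return sum(gaussian(x, *p[i:i + 3]) for i in range(0, 21, 3))
    p0 = []
    for c in centres0:
        p0 += [float(intensity.max()), c, 40.0]   # crude starting guesses
    popt, _ = curve_fit(model, shift, intensity, p0=p0, maxfev=20000)
    # Area under a Gaussian = amp * |wid| * sqrt(2*pi)
    return {c: popt[i] * abs(popt[i + 2]) * np.sqrt(2.0 * np.pi)
            for i, c in zip(range(0, 21, 3), centres0)}

# Centres for D, V_R, V_L, G_R and G follow the text; 1185/1700 are guesses:
centres0 = [1185.0, 1299.5, 1366.2, 1455.8, 1546.4, 1605.5, 1700.0]
# areas = fit_seven_bands(shift, intensity, centres0)
# A_G/A_All        : areas[1605.5] / sum(areas.values())
# A_(VR+VL+GR)/A_D : (areas[1366.2] + areas[1455.8]
#                     + areas[1546.4]) / areas[1299.5]
```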
Since there is a clear G band in the Raman spectrum, we can therefore say that the product after pyrolysis contains sp² carbon networks. The intensity of the G band is higher than that of a single graphene layer [57], while the FWHM (full width at half maximum) of this band for our sample is larger than the FWHM for a single graphene layer. This feature is characteristic of the existence of turbostratic graphite and is in complete agreement with the obtained XRD results (Fig. 2b). The intensity of the D band can be used to probe the density of defects in the structure, which may lead to conclusions about structural defects localized in 2D (two-dimensional) structures. However, most reports concur that the position of the D band is related to structural disorder. Strictly, the position of the D mode may vary depending on the type of laser and on the wavelength used for excitation. Variations in the wavelength (λ₀) will change the Raman sampling depth, possibly leading to variation in the spectrum if the carbon structure varies with depth. In addition, the intensity ratio (I_D/I_G) is used as the most useful parameter indicating the sp² cluster size. Thus, the D band indicates disorder in the graphitic structure, but does not necessarily indicate sp³ hybridization. According to Tuinstra and Koenig [58], the ratio of the D-peak intensity to that of the G peak varies inversely with L_a (where L_a is the cluster size, i.e., the cluster diameter) as I_D/I_G = C(λ)/L_a, where C(λ) = 532 nm. The following results were obtained: A_G/A_All = 0.16531, A_(VR+VL+GR)/A_D = 4.28848, I_D/I_G = 0.48 and hence L_a = C(λ)/(I_D/I_G) = 532/0.48 ≈ 1108.33 nm. The value of A_(VR+VL+GR)/A_D is quite high, indicating a much larger proportion of smaller aromatic rings. The value I_D/I_G ≈ 0.48 is much lower than the I_D/I_G value (= 1.41) for carbons obtained from Limonea acidissima ("wood apple"), which is typical of disordered materials such as glassy carbon [47]. The decrease in I_D/I_G indicates the growth of aromatic rings, i.e., the structure of the sample is closer to that of graphite. This result is further supported by the relative intensities of the D and G bands (Fig. 4): these two bands are attached to disordered behavior (D band) and graphitic behavior (G band), and the intensity of the G band is much greater than that of the D band (Fig. 4). The very high value of L_a may suggest a high level of growth in the basal planes of the graphite structure in the pyrolyzed sample, in full agreement with the low I_D/I_G ratio. The S band can be considered a small measure of the cross-linking density and substitutional groups (Fig. 4). The B and S_L bands (Fig. 4) can be assigned to contributions from ether- and benzene-related structures. Also, the absence of the G_L band in the Raman spectrum indicates the loss of carbonyl C=O structures, as indicated previously. Namely, in the pyrolyzed sample, the residual lignin may undergo a reduction in carbonyl (C=O) groups and also a reduction in aliphatic hydroxyl (OH) structures [59]. GC-MS results Generally, the detected molecules belong to the group of biogenic compounds, which means that they are synthesized in living systems (such as bacteria, plants and animals). In the majority, these are long-chain organic acid esters (saturated and unsaturated), free organic acids and intermediates of metabolism in living systems.
This is particularly evident in the case of the "raw1" sample, in which fatty acids dominate (especially palmitic acid (Peak 27), the most common fatty acid found in plants and animals), together with pronounced fatty acid derivatives (among them, the most common are 9-hexadecen-1-ol (Z)- and 9-octadecenoic acid (Z)-; Peaks 35 and 36) (Fig. 5a) [60,61]. The compounds identified in all tested samples are presented in Table 3. In the pulverized sample ("raw2"), more extracted chemical compounds were identified than in the untreated sample (Table 3). The mechanical treatment of the sample results in an increase in the number of chemical compounds, with splitting into several more chemical species. For the carbonized (pyrolyzed) sample, we find the largest abundance of aromatic compounds with higher numbers of aromatic rings (such as benz[j]aceanthrylen-1-ol, 1,2-dihydro-3-methyl-), followed by isomeric amines of butane, lactones, etc. (Table 3); these results are in excellent agreement with the Raman spectrum results presented previously. The aromatics represent a very important group of chemicals, and in this regard, at higher temperatures, dehydrogenation/aromatization reactions can eventually lead to larger polynuclear aromatic hydrocarbons and, eventually, to their increase during the pyrolysis process. The general trend is therefore that the higher pyrolysis temperature (850°C) and longer times enhance the formation of the compounds detected above. It should be noted that this assumption is consistent with the lignin pyrolysis mechanism, i.e., the monomolecular dissociation of guaiacols into the corresponding radicals (such as catechols and cresols) at higher pyrolysis temperatures. However, the organics subgroup (in liquid form) consists of organics that are mainly produced during the devolatilization, depolymerization and carbonization reactions in the biomass. Knowledge of the composition of the volatiles produced in pyrolysis is a topic of interest, as it helps in raw material adaptation, process control, process behavior and operation, energy optimization, and the production of green chemicals. On the other hand, chemical compounds that contain an oxygen group or an oxygen-containing chemical species also dominate. Furthermore, the oxygen-containing groups became abundant, and they promote an increase in the Raman intensity; this fact can explain the increases in the D and G bands in the Raman spectrum of the sample pyrolyzed at 850°C (Fig. 4). Meanwhile, special attention must be given to CO₂ interactions with the C=O group in esters. In this respect, a probable shift in the C=O stretching frequency may indicate interactions between the gas molecules and the polar C=O group. The strength of this interaction depends on the binding geometry of the C=O···CO₂ complex, and these structures may differ mainly in the orientation of the gas molecule with respect to the different R groups of the C=O-containing molecule. In acetate groups, it is expected that the addition of ester oxygen groups makes the C=O oxygen more electron-rich and hence favorable for gas binding. This is very important for capturing CO₂ molecules on the carbonized material and for its further activation, in the essential promotion of the selected precursor for CCS. SEM results The scanning electron microscopy (SEM) technique has been used extensively to qualitatively explore the surface morphology of chars.
The surface morphology of the solid product obtained after the pyrolysis process in a horizontal tube reactor was also investigated in this work, using the SEM technique. Figure 6 shows the microstructures of the raw plane tree seed (PTS) sample (a) and of the solid products obtained by the carbonization process (b-d) at the fixed operational temperature in a horizontal tube reactor; all SEM images are presented at high resolution (for the raw sample: 500 μm scale, ×1000 and ×1500 magnifications with a resolution of about 10 μm). As can be seen from the presented micrographs, the product obtained from plane tree seed pyrolysis shows a variety of structures. The obtained carbonized solid product contains oval-cracked (Fig. 6b), torsionally twisted (Fig. 6c) and hollow-twisted (Fig. 6d) tube-shaped structures. These features can be classified as an expanded char structure, as opposed to a flake one [62]. The detected cracks arise from the natural behavior of the wood, but also from mechanical and thermal stresses. The presence of small clusters can be observed (in the central part of Fig. 6c and in the upper left corner of Fig. 6d), which may be produced by the application of elevated temperatures and the formation of high-pressure reaction conditions, causing rapid diffusion of carbon-containing reactant species. This phenomenon can allow carbon atoms to be deposited on catalyst particles (which may originate from the minerals present in the raw sample). This behavior is quite possible if we take into account the large number of identified aromatic ring structures, which may strongly promote it [62]. However, some of these pores are partially blocked (generally not open) in places, which greatly depends on the size of the surface area. These observations are consistent with the report by Alvarez et al. [63]. It is expected that the degree of structural breakage will increase with higher pyrolysis temperature. In general, the carbonized samples show a morphology similar to that of the raw material (Fig. 6a): the PTS is made up of fibers of different thicknesses and shapes, and the same structure is retained after pyrolysis. It can be concluded that a promising carbonized material can be obtained from the pyrolysis of raw plane tree seed (PTS) samples. Carbonization in itself is a relatively inexpensive step. Even though retorts may have a high capital cost, they do not require much labor per unit of production. Typically, the carbonization step may represent about 10% of the total costs, from growing and harvesting the tree to the arrival of the finished charcoal in the bulk store. Three major factors may affect the conversion yield: (1) the moisture content of the wood at the time of carbonization, (2) the type of carbonizing equipment used and (3) the care with which the process is carried out. The activation process may result in a widening of the pores. However, differences in the pyrolysis behavior of different biomass materials can result in final chars with different pore structures and porosities. Consequently, promising precursors such as wood or wood-based materials are used to produce activated carbon with a high surface area, so that wood and wood-based biomasses remain prominent precursors for activated carbon. In addition, the low-temperature char (below 400°C) was not selected for the present study, because it contains volatile matter.
The chars produced at higher temperatures (above 400°C) contain no (or very little) volatile matter. Therefore, the chosen experimental conditions were adjusted to obtain a char suitable for conversion into activated carbon. Based on our previously published results [64], it was found that the activated product (obtained through CO₂ activation) is characterized by a fine pore structure on the external surface, which may be classified as sub-micropores that would play a crucial role in gas (CO₂) sorption. For the samples activated for 2 h at 750°C and 2 h at 850°C, the following values of the mesopore surface area (S_meso), the micropore surface area (S_mic) and the total surface area (S_tot) were obtained [64]: S_meso (2 h, 750°C) = 47.7 m² g⁻¹, S_mic (2 h, 750°C) = 526 m² g⁻¹, S_tot (2 h, 750°C) = 573.7 m² g⁻¹, and S_meso (2 h, 850°C) = 94.3 m² g⁻¹, S_mic (2 h, 850°C) = 610 m² g⁻¹, S_tot (2 h, 850°C) = 704.3 m² g⁻¹. The reported results [64] indicate a large surface area available for the adsorption/desorption processes (the higher surface areas are probably due to the opening of the restricted pores mentioned above), which clearly represents a big contributor to the high surface area at elevated activation temperatures. Thus, taking into account the results presented in this work and the previously reported results [64], it follows that the pyrolysis of PTS under elevated thermal treatments develops porosity in the resulting chars, with meso-microporous particle structures. It was found that the BET surface area of the PTS char activated at 850°C using a low heating rate (704.3 m² g⁻¹) is much higher than those obtained for rice straw, hickory wood, rapeseed bagasse, hornbeam shell, bamboo, apricot stone, hazelnut shell and grape seed at higher heating rates [65-68]. This is because, if the heating rate is too high, a higher temperature is reached inside the raw material and partial graphitization with the formation of graphene structures occurs. This graphitization does not favor the development of a large surface area. Thus, higher pyrolysis temperatures and lower heating rates favor the formation of porosity by increasing the surface area of the chars. It is striking that our result of a 704.3 m² g⁻¹ surface area for the activated char is a very good one, which promotes PTS as a promising route for consideration in CCS. Conclusions In this study, raw and pyrolyzed samples of plane tree seeds (Platanus orientalis L.) (PTS) were tested by various analytical techniques. The qualitative and quantitative characteristics of the investigated wood precursor and the pyrolyzed products were established. From the chemical composition analysis, it was found that the raw material is carbon-rich, with a cellulose content greater than 30%. The FTIR results showed that the Platanus orientalis L. raw sample is characterized by aromatization of the sugar structure. The XRD results for the raw material showed that the tested sample is typical of a carbon-rich material, with the existence of a graphite structure. The XRD results for the sample pyrolyzed at 850°C showed a sudden loss of interlayer spacing. It was established that the pyrolysis temperature strongly affects the changes in the carbon interlayer spaces. It was found that the integrated reaction model parameters for the pyrolysis of the untreated PTS sample realistically describe the active temperature period required for charcoal formation under non-isothermal conditions.
Raman spectroscopy analysis of the pyrolyzed sample showed the presence of typical aromatic structures found in amorphous carbon. The presented results indicate a high level of growth in the basal planes of the graphite structure in the pyrolyzed sample. It was found that the mechanical treatment of the material results in an increase in the number of chemical compounds, with splitting into several more chemical species. The GC-MS results for the carbonized sample confirmed the largest abundance of aromatic compounds with higher numbers of aromatic rings. Based on the GC-MS results, aromatic hydrocarbons, aliphatic hydrocarbons, esters, ketones, phenolics, aldehydes and acids can be identified as the most prominent organic products. Aromatic and aliphatic hydrocarbons can be considered important raw materials for a variety of applications in the petrochemical industry, while the phenolics can be considered high-added-value chemicals. Micrographs of the product obtained from pyrolysis demonstrated a variety of structures, and the existence of some dissipated pores was detected. The present study gives us the opportunity to consider the obtained char for conversion into activated carbon through thermal conversion processes. Comparing the results of this study with previously reported data, it was concluded that the activated material has a large surface area available for adsorption/desorption processes. Also, comparing the results for the activated carbon with similar results reported in the literature, it was concluded that PTS and its activated carbons can be regarded as promising candidates for CCS.
2019-04-09T13:08:36.338Z
2018-03-23T00:00:00.000
{ "year": 2018, "sha1": "f74a62bdbeb04f2095889e495344fb63a2ae749a", "oa_license": "CCBYNC", "oa_url": "https://vinar.vin.bg.ac.rs/bitstream/123456789/7779/1/bitstream_10334.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "f74a62bdbeb04f2095889e495344fb63a2ae749a", "s2fieldsofstudy": [ "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
260069406
pes2o/s2orc
v3-fos-license
Effectiveness, Immunogenicity and Safety of COVID-19 Vaccination in Pregnant Women: A Rapid Review Study Background Pregnant women infected with the coronavirus disease 2019 (COVID-19) are at risk for adverse pregnancy outcomes, and the only real preventive strategy against COVID-19 is mass vaccination. This study aimed to examine the effectiveness, immunogenicity, and safety of COVID-19 vaccination in pregnant women. Methods A search combining relevant terms was performed by 2 researchers independently in the Web of Science, PubMed, and Scopus databases, the World Health Organization website, and the US Centers for Disease Control (CDC) website up to February 2022. After the selection of eligible studies, the review process, description, and summarization of the selected studies were performed by the research team. Results Finally, 22 articles were included in this study. Evidence supports the safety of COVID-19 vaccination during pregnancy. There is no risk of transmitting COVID-19 to infants during lactation. In addition, antibodies made by vaccination can protect infants through breast milk. Conclusion The scientific community believes that being vaccinated as soon as possible is the best course of action because there is no evidence to suggest that the COVID-19 vaccine poses a risk to expectant or nursing women. Introduction On December 31, 2019, a cluster of cases of pneumonia was reported to the World Health Organization (WHO) from Wuhan, China. A novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was identified, and the outbreak was declared a public health emergency of international concern in January 2020. In previous outbreaks of other coronavirus infections, such as severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), serious complications were reported in pregnant women (1). Changes in the immune and respiratory systems during pregnancy increase the risk of pneumonia and other complications, and pregnant women are also more susceptible to serious infections such as influenza (2). Pregnant women are more at risk than others for developing severe coronavirus disease 2019 (COVID-19). The disease appears to be exacerbated during pregnancy and has been associated with preeclampsia, depression, nausea during pregnancy, preterm birth, low birth weight, and a low Apgar score in the infant (3,4). Vaccination is an important strategy for the prevention and control of pandemics and endemics, and vaccines have also been developed for COVID-19. Sputnik was the first vaccine to be registered, in August 2020; AstraZeneca was the second, licensed in the United Kingdom in December 2020; and Pfizer was licensed for emergency use by the US Food and Drug Administration in the same month (5). Vaccination is carried out during pregnancy to prevent the death of the mother and the infant from infectious diseases, especially diseases such as influenza, and there is a large body of information about the safety and efficacy of the influenza vaccine (6). Although the clinical phases of vaccine studies did not include pregnant women, the US Food and Drug Administration and the Committee on Advisory Studies on Immunogenicity have ruled that COVID-19 vaccination is safe for pregnant and lactating women (7,8).
The British Committee for Vaccination and Safety identified pregnant women as a high-risk group on December 16, 2021, and emphasized the importance of vaccination, as well as booster dosing, in this group to prevent COVID-19 complications and admission to intensive care units for both the mother and the fetus (9). Since the beginning of vaccination for pregnant women, various studies have been performed to evaluate the side effects and the immunogenicity of the vaccines. The purpose of this review study was to evaluate the efficacy and immunogenicity reported in various studies after vaccination in pregnant women. Methods A rapid review of the published literature was performed to provide a brief report of the available research evidence on the effectiveness and safety of COVID-19 vaccination in pregnant women. Search Strategy We performed a literature search using the online databases of Web of Science, PubMed, and Scopus, the WHO website, and the US CDC website for relevant publications up to February 2022. The search strategy was as follows: (("2019-nCoV") OR (COVID-19) OR (SARS-Cov2)) AND ((Pregnancy) OR ("Pregnant women") OR (gestation)) AND (Vaccine*) AND ((safety) OR (immunogenicity) OR (effectiveness) OR ("adverse event")). Inclusion and Exclusion Criteria The inclusion criteria were as follows: (1) studies published in English; and (2) studies on the effectiveness, immunogenicity, and safety of COVID-19 vaccination in pregnant women. The exclusion criteria were as follows: (1) duplicate articles; and (2) unofficial country reports, such as non-peer-reviewed dissertations, conference proceedings/papers, statements by professional organizations, et cetera. Study Selection and Data Collection Once duplicates were removed, the initial search results were screened by 2 independent researchers based on abstracts and titles. Then, the full texts of related articles were evaluated based on the inclusion and exclusion criteria, and eligible studies were selected. Studies on which the researchers did not reach a decision were reviewed by a third researcher. Two authors independently extracted data from the eligible studies using a data extraction form. The following information was extracted from the full text of the selected studies: first author's name, study type, study design, sample size, gestational age, vaccine type, number of injected doses, pregnancy outcome, and the effectiveness, immunogenicity, and safety of COVID-19 vaccination. Risk of Bias and Quality Assessment The risk of bias was assessed by 2 independent reviewers using the Newcastle-Ottawa scale (NOS), as recommended by Cochrane (10), for the cohort, case-control, and cross-sectional studies. The NOS score ranges from 0 to 9 based on 3 sections: selection, comparability, and assessment of outcome. Based on this scale, a maximum of 9 points can be awarded to each study. In the present study, articles with a NOS score ≥5 were considered to have high-quality methodologies. Results We identified 238 studies from the 5 databases after removing duplicates and then screened the articles by title and full text according to the study objectives. Finally, 22 studies were included in the present study (Figure 1). Types of COVID-19 Vaccines and Pregnancy Although several different COVID-19 vaccines are available, only 3 were officially licensed for use in pregnant women before February 2022.
COVID-19 vaccines that have been recommended for pregnant women include mRNA vaccines, such as Pfizer-BioNTech BNT162b2 and Moderna mRNA-1273, and inactivated vaccines, such as the Sinopharm BIBP COVID-19 vaccine (11,12). There are only a few reports on the safety and effectiveness of the AstraZeneca/Oxford and Janssen vaccines, and in these studies pregnancy was an exclusion criterion (13,14); in fact, the reported cases are based on accidental pregnancies during the trials (15). On February 15, 2022, the WHO updated its statements to indicate that the available COVID-19 vaccines, such as Pfizer, Moderna, AstraZeneca, Janssen, Sinovac, and Novavax, are safe for pregnant and lactating women. Despite the fact that pregnant women were excluded from some of the COVID-19 vaccine trials, there is evidence to support the safety of COVID-19 vaccines during pregnancy, including monitoring of pregnant women who had received the vaccine and animal studies that did not find any negative effects (16). None of the COVID-19 vaccines mentioned above that are approved for use during pregnancy contain live virus; therefore, these vaccines cannot transmit the infection to unborn children or pregnant women (11,12). Immunogenicity and Vaccine Effectiveness The features of the studies that assessed immunogenicity and vaccine effectiveness are shown in Supplementary Table 1. Most of them were observational cohort studies. These studies used a variety of methods, including case reports, case series, and cohort studies, to describe pregnant women who had received vaccinations. Other studies compared the safety and efficacy of vaccines in vaccinated, infected, and noninfected pregnant and nonpregnant women (Supplementary Table 1). After reviewing the 14 related studies, it was observed that all the results are based on a positive immune response in the mother's blood serum and positive antibodies in cord blood samples and breast milk (17-19, 21-29). In other words, all of them indicate that IgG and anti-spike antibody titers increased after vaccination, especially after a second dose. Bookstein et al reported that although serum antibody (IgG) was positive among both pregnant and nonpregnant women, pregnant women had significantly lower serum SARS-CoV-2 IgG levels than nonpregnant women (24). Another study reported that higher levels of cord blood antibodies were detected in vaccinated women than in COVID-19-recovered women (19). Similarly, Gray et al reported that antibody titers after vaccination in pregnant and lactating women were similar to those in nonpregnant women (21). As a result, even though pregnant women are considered a high-risk population, immunization can still be successful. According to the results of the study by Mithal et al, of 22 deliveries, only 3 neonates (including 1 set of twins) did not have positive IgG tests (27). The reason was the short interval between vaccination and delivery: two of the mothers concerned were vaccinated <3 weeks before delivery. Therefore, the time between the injection and delivery should be taken into account for the newborns' optimum immunogenicity. Three studies reported vaccine effectiveness during pregnancy (20,29,30). Vaccine effectiveness was reported as an association between a vaccine and the risk of SARS-CoV-2 infection among pregnant women, and in these studies the risk of SARS-CoV-2 infection was reduced after vaccination (20,29).
In addition, Dagan et al reported that vaccine effectiveness against COVID-19-related hospitalization was 89% at 7 to 56 days after the second dose among vaccinated pregnant women (30). Vaccine Safety The characteristics of the studies that assessed the safety of the vaccines are described in Supplementary Table 2. Cohort studies (n = 10), surveillance studies (n = 1), cross-sectional studies (n = 1), and case reports (n = 1) were the 4 most common study types. The main aim of these studies was the assessment of safety and of complications associated with the vaccines among vaccinated and nonvaccinated pregnant women, or among pregnant women compared with nonpregnant women (Supplementary Table 2). After reviewing the related studies, no difference was observed in the reported vaccine-related reactions in vaccinated pregnant women compared with other groups, such as unvaccinated pregnant women or nonpregnant women (24, 34-38). The most commonly reported vaccine-related complications were local reactions at the injection site (pain, swelling) and systemic reactions (fever >38°C, headache, malaise, myalgia, fatigue) (24, 29, 31, 36-38). In other words, a pattern of reactogenicity similar to that of other groups was reported among pregnant women. We can say that complications after vaccination are a common event; in fact, side effects are the result of the body's reaction in developing antibodies to protect against COVID-19. Thus, the chance of occurrence of any of these complications after vaccination depends on individual characteristics. In these studies, vaccine-related pregnancy outcomes were assessed. Pregnancy outcomes such as stillbirth, preterm delivery, spontaneous abortion, and fetal growth restriction/small for gestational age were reported, and the most commonly reported neonatal outcomes were congenital anomalies and low birth weight (20, 21, 29, 32-38). Although these outcomes have been reported in vaccinated pregnant women, vaccine-related pregnancy outcomes among vaccinated pregnant women were not more frequent than in other groups. Thus, more research and postdelivery surveillance systems are needed to determine whether vaccination is associated with these pregnancy outcomes. Anomalies after vaccination among pregnant women were assessed in another study by Shimabukuro et al. Among the participants with completed pregnancies who reported congenital anomalies, none had received COVID-19 vaccines in the first trimester or before conception; all pregnancies with major congenital anomalies had received COVID-19 vaccination only in the third trimester of pregnancy (after the period of organogenesis) (38). Thus, it seems that vaccination does not play a role in congenital anomalies. In addition, in a study by Blakeway et al, 3 types of fetal malformations (spina bifida, ventriculomegaly, and hydronephrosis) were reported in women who received the COVID-19 vaccine. The spina bifida was diagnosed before the first dose of the vaccine was received and was not related to vaccination. The ventriculomegaly was diagnosed as isolated at 37 weeks of gestation, with no related brain abnormalities. The hydronephrosis was mild, with no related abnormalities at birth. As a result, according to the researchers' reports, the observed outcomes were not associated with the vaccination (36). Three studies reported a reduction in the risk of adverse outcomes and reactions among pregnant women after vaccination (31,33,34). These studies indicated that vaccination of women in the third trimester of pregnancy was not associated with adverse maternal outcomes.
In fact, vaccination was not associated with adverse pregnancy outcomes or neonatal complications (31,34). In addition, it was observed that 2-dose vaccination among pregnant women was associated with a longer gestational period and, consequently, increased birth weight compared with a single dose (33). Despite reports of certain vaccine-related problems, there is inadequate evidence of deleterious effects on either the mother or the fetus from vaccination of pregnant women with the COVID-19 vaccines. Therefore, there is evidence that COVID-19 immunization is safe during pregnancy, and the benefits of receiving the vaccination outweigh any potential risks of contracting SARS-CoV-2 infection during pregnancy. For instance, a recent publication concerning a national recommendation for the COVID-19 vaccine stated that none of the health organizations suggested delaying COVID-19 vaccination when pregnant, nursing, or trying to conceive. Therefore, women who are attempting to become pregnant or who are already pregnant should not have any reservations about receiving the COVID-19 vaccine, because according to the currently available evidence there are no safety issues related to COVID-19 immunization (39). WHO and CDC Recommendations Based on the WHO statements, during pregnancy the risk of serious illness caused by COVID-19 is high, and pregnant women are also at higher risk of delivering their neonate prematurely if exposed to COVID-19. Although there is less information on immunizing pregnant women, evidence on the safety of the COVID-19 vaccine during pregnancy has been accumulating, and there are currently no documented safety concerns. Particularly in countries with high transmission rates, or for people who work in a high-risk industry that raises their risk of exposure to COVID-19, the benefits of receiving the vaccination outweigh the possible risks. The vaccination cannot cause COVID-19. There is no risk of COVID-19 transmission to newborns during breastfeeding, because the current vaccines do not contain live virus. Additionally, the antibodies produced by immunization can shield infants who are given breast milk (40,41). There is no biological evidence at this time that COVID-19 vaccination antibodies or vaccine components could affect reproductive organs or reduce fertility, which is relevant for women who intend to become pregnant in the future (40). Discussion During pregnancy, women experience physiological, immunological, and coagulation-system changes. Evidence shows that women during pregnancy have a robust immune response to non-fetal-specific antigens (42). Therefore, compared with the general population, they are likely to be more susceptible to SARS-CoV-2 infection and hypoxia due to these changes during pregnancy, especially alterations in the respiratory system, such as a decrease in lung volume and an increase in oxygen use (43). Many clinical trials do not assess the effects of pharmacological agents in these groups as a result of the "Revitalization Act," which forbids women of reproductive age from participating in phase I and early phase II clinical trials (8). Of all the vaccines produced, only mRNA vaccines have been investigated among pregnant women in the various stages of the clinical trials for COVID-19 vaccines (11,12). For the other COVID-19 vaccines, the reported cases are based on accidental pregnancies during the trials.
In light of the information gaps on the efficacy and safety of COVID-19 vaccination among expectant mothers, post-vaccination safety monitoring and evaluation are crucial and necessary. Studies have generally shown that vaccination is effective in pregnant women: despite the unique circumstances of this high-risk group, strong immunity against COVID-19 develops after vaccination, the same as in non-pregnant people, so pregnancy and breastfeeding have no effect on the vaccine's efficacy. Vaccination is recommended in this period to save the lives of the fetus and the mother. Regarding the safety of the vaccine, although studies have reported some pregnancy-related outcomes after vaccination (32,36), it is still not certain whether these outcomes are really related to the vaccine. Therefore, it is necessary to conduct observational studies, together with active surveillance, to find the cause of this type of outcome in support of pregnant mothers. It can be said that experiencing systemic and local reactions is one of the common consequences of vaccination and does not pose a risk to human life (24, 29, 31, 34, 36-38). The benefits of vaccination in high-risk groups therefore outweigh the drawbacks. Because of this, becoming immunized is advised during the first, second, or third trimester or in the first few weeks after giving birth, depending on the health guidelines of different countries; there is no reason to wait at any of those times, because the vaccine is safe (44,45). The results of a systematic review suggested that maternal vaccination protects the fetus and reduces SARS-CoV-2 infection. Additionally, pregnant, nursing, and nonpregnant women had considerably greater antibody titers from the vaccine than women who had previously contracted SARS-CoV-2 during pregnancy. The same conclusions about unfavorable events as ours were reached in that study (46). A literature review also showed that IgG after vaccination in pregnant, lactating, and nonpregnant women increased significantly and was stronger than in pregnant women who had previously been infected with SARS-CoV-2 (47). Another systematic review study revealed an association between longer intervals between receiving the first dose of the vaccine and delivery and rising placental transfer ratios in cord blood. According to the safety data, rates of vaccine-related reactions in lactating and pregnant women were comparable to those in the general population, and no increase in the probability of unfavorable obstetrical or neonatal outcomes was observed. One study demonstrated that pregnant women were less likely to experience COVID-19 when vaccinated (48). The lack of long-term follow-up is a weakness in the designs of these studies. We still need adequate data to determine the ideal time for vaccination to trigger placental immune transfer, because some studies only select women who were vaccinated in the third trimester and some studies only compare before- and after-delivery status over a few time periods, such as 6 weeks after birth. Thus, the degree of infants' protection against COVID-19, and the duration of such potential protection, need to be studied further. Additionally, more studies and postdelivery surveillance programs are required to demonstrate the association between the vaccine and successful pregnancies and to better convince expectant mothers to get vaccinated.
Current studies only include data from the 3 FDA- and WHO-approved COVID-19 vaccines that were studied in pregnant women; other vaccines have not yet been researched in this group. It is advisable to conduct more research into the efficacy and safety of other COVID-19 vaccine types, such as inactivated vaccines (e.g., the Sinopharm BIBP COVID-19 vaccine), particularly in countries where access to mRNA vaccines such as Moderna mRNA-1273 and Pfizer-BioNTech BNT162b2 is restricted. Conclusion In general, it can be said that vaccination of pregnant women is a good protective factor against COVID-19, and the pregnancy-related consequences observed after vaccination are not related to the COVID-19 vaccine. The scientific community believes that being vaccinated as soon as feasible is the best course of action, because there is no evidence to suggest that the COVID-19 vaccine poses a risk to expectant or nursing women. The researchers' findings show that vaccination in this group is better than no vaccination, even though assessing the vaccine's long-term effects requires further observational studies and keeping track of pregnant women who have received the vaccine. Naturally, pregnant women who work in clinical settings or in occupations that involve a lot of interpersonal interaction should pay particular attention to this matter. Myalgia, arthralgia, headache, local pain or swelling, and axillary lymphadenopathy were significantly less common among pregnant women after each dose, while paresthesia was significantly more common among the pregnant population after the second dose. There was no significant difference in the rate of side effects according to whether the vaccine was given in the first, second, or third trimester of pregnancy, except for local pain/swelling, which was significantly less common after the first dose. The rate of SARS-CoV-2-related hospitalizations was 0.2% in the vaccinated group vs 0.3% in the unvaccinated group.
2023-07-23T15:18:46.378Z
2023-03-20T00:00:00.000
{ "year": 2023, "sha1": "825bd39542413d7784f5e9570abb478e3d69cdb6", "oa_license": "CCBYNCSA", "oa_url": "http://mjiri.iums.ac.ir/files/site1/user_files_e9487e/hashemisa-A-10-5829-2-09c6eb4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "121557a3436ca7571a13c8cb523e67564c438e34", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237638865
pes2o/s2orc
v3-fos-license
Diversity and abundance of terrestrial gastropods in Skikda region (North-East Algeria): correlation with soil physicochemical factors Background: The inventory process is the first step in protecting and safeguarding animal biodiversity. This study carries out a quantitative and qualitative inventory of terrestrial gastropods at three sites in Skikda province (north-eastern Algeria). The relationship between terrestrial gastropod diversity and soil physicochemical factors was investigated using statistical analyses. Results: The inventory data reveal the presence of four families and eight species, with varying predominance rates of the species Cornu aspersum at each site in Skikda province (Azzaba 53.88%; Ben-Azzouz 56.12%; El-Hadaiek 37.92%). The maximal specific richness was registered at the El-Hadaiek site (seven species), and the highest mean richness was noted at the Ben-Azzouz site (392 individuals). Of the eight gastropod species identified, three (Cornu aspersum, Cantareus apertus and Rumina decollata) were classified as constant species. The Shannon-Weaver diversity and equitability indices vary by site. Conclusion: The presence of certain species at one site and their absence at the other sites, as well as the variation in the ecological indices, could be attributed to the effect of soil physicochemical factors. Background Human industrial and agricultural activities, increasing population growth rates, and economic and technological factors have negatively impacted biodiversity (Aronson et al., 2014; Douglas et al., 2013; Gaston et al., 2013; Mackenzie & Michael, 2018; Yanes, 2012) by inducing marked changes in the structure of biological communities and dysfunction of the surrounding ecosystems (Chen & Blume, 1997; Sha et al., 2015). Interestingly, land snails play a crucial role in the functioning and stability of ecosystems through their contribution to the provision of food for other animals, the decomposition of plant material and the maintenance of soil calcium content (Lange, 2003). Additionally, their short lifetime and limited dispersal ability make them excellent bioindicators (Watters et al., 2005). Furthermore, snails and slugs can be important links in the transfer of chemicals from vegetation or plant litter to carnivores (Coughtrey et al., 1979; Nica et al., 2012, 2013); such transfer along food chains is therefore an important eco-toxicological aspect (Laskowski & Hopkin, 1996). Several land snail inventories have been carried out in various biotopes of Algeria, notably in the north-western (Damerdji, 2008, 2013) and north-eastern regions (Larbaa & Soltani, 2013; Douafer & Soltani, 2014). Recently, a survey of gastropods was conducted in five areas of north-eastern Algeria (Belhiouani et al., 2019). However, we are not aware of a study on the biodiversity and abundance of terrestrial gastropods in other areas of north-eastern Algeria. The Skikda region (north-eastern Algeria) hosts highly developed petrochemical industries, which pose serious risks to human and environmental safety by progressively degrading natural resources and water and air quality (Fadel et al., 2016; Kahoul et al., 2014; Zeghdoudi et al., 2019). Research on the biotic and abiotic factors that influence land snail diversity and abundance is essential for preventing damage to, and protecting, terrestrial ecosystems.
Many researchers have demonstrated that climatic factors, soil physicochemical factors and plant communities affect the distribution and abundance of terrestrial gastropods (Gärderforns et al., 1995; Lewis Najev et al., 2020; Nekola, 2003). In Algeria, several studies have been conducted on the effect of environmental factors on the diversity and abundance of land snails (Belhiouani et al., 2019; Douafer & Soltani, 2014). The present study aims to (1) identify and investigate the abundance and diversity of terrestrial gastropod species (snails and slugs) at three sites located in Skikda province, and (2) investigate the relationship between terrestrial gastropod diversity and soil physicochemical factors using statistical analysis (correlation analysis). Study areas The city of Skikda (36°52′34″ N, 6°54′33″ E) is located in the north-east of Algeria, 510 km from Algiers (the capital of Algeria). It has a Mediterranean climate characterized by two seasons: a mild, rainy winter and a hot, dry summer. The average annual precipitation varies between 600 and 800 mm/year, the annual temperature varies from 9°C in winter to 27°C in summer, and the daytime humidity is about 70% (Souilah et al., 2019). The study was conducted in three cities of Skikda province (Fig. 1): El-Hadaiek, covering an area of 271.75 km²; Azzaba, extending over an area of 805.34 km²; and Ben-Azzouz, the largest city of the province, covering an area of 228.28 km². Table 1 lists the characteristics of the three sampling sites. Sampling methods and taxa identification Snail sampling was carried out using the quadrat method. Quadrats likely to be suitable breeding habitats for snails, and hence to be sampled, were randomly selected by projecting a grid onto a map of the study area. For each site (Fig. 2), a 400 m² quadrat was established using a tape measure. Snail sampling was carried out by two persons searching each quadrat for 2 h, following the method of Benjamin et al. (2014) with modifications. Live snails and slugs, as well as dead snail shells, were collected by hand from different natural habitats (on the ground, under tree trunks, on plants and in crops). All samples obtained were preserved in 70% ethanol at the Laboratory for the Optimization of Agricultural Production in Subhumid Areas (University of Skikda). The snails collected from the three sites were thoroughly identified based on morphological features (shape, size, colouration and ornamentation of the shell), using the key features reported by Barker (2001) and Bouchet et al. (2005). When a snail was collected, the associated plant species was noted. Samples of the plant species collected in the field were transferred to the laboratory for identification, based on the methods previously reported by Quezel and Santa (1962-1963) and Andreas (1998). Data analysis The identified snail individuals were counted and used to determine the constancy index, relative abundance, specific and mean richness, and some diversity indices (the Shannon-Weaver and equitability indices). The constancy index (C) is calculated according to Dajoz (1985) as C = (Pa/P) × 100, where C is the centesimal frequency, Pa is the total number of samples containing the species considered and P is the total number of samples taken. According to Dajoz (1985), three categories are distinguished: constant species (C ≥ 50%), accessory species (25% < C < 50%) and accidental species (C ≤ 25%).
The relative abundance (A) index allows the study of the distribution of a species in a given region and the identification of species as common, rare or very rare (Dajoz, 1985). It is calculated by the following formula: A = (nᵢ/N) × 100, where nᵢ is the total number of individuals of the species considered and N is the total number of individuals found. According to Dajoz (1985), species can be classified into three groups: common species (A > 50%), rare species (25% ≤ A ≤ 50%) and very rare species (A < 25%). The specific richness (S) is the number of species found in the study area (Blondel, 1975; Ramade, 1984). The mean richness (S′) is defined according to Blondel (1975) and is calculated by the following formula: S′ = Σ nᵢ/P, where nᵢ is the total number of individuals of the species considered and P is the total number of samples taken. The Shannon-Weaver index (H′) is calculated by the following equation (Shannon & Weaver, 1963): H′ = −Σᵢ₌₁ᴿ Pᵢ log₂ Pᵢ, where Pᵢ is the relative frequency (nᵢ/N) and R is the total number of species. The equitability index (E) constitutes a second fundamental dimension of diversity (Ramade, 1984) and is expressed as E = H′/H′max, where H′max = log₂ R. (A computational sketch of these indices, and of the correlation test described under Statistical analysis below, is given after the species inventory.) Soil sampling and determination of physicochemical parameters The analysis of soil physicochemical properties was carried out on samples collected manually with a trowel (Koranteng-Addo et al., 2011) to a depth of about 10 cm. Three representative soil samples were collected from each site, using the same quadrats as for the mollusc diversity study. In brief, the soils were air-dried for 3-6 days, crushed, sieved through a 2 mm diameter sieve and then stored in non-metallic containers. Soil pH was measured in a soil-water suspension (soil/water ratio = 1/2.5) according to Gaucher (1968). The electrical conductivity (EC) was determined on a soil extract (soil/water ratio = 1/5) using a conductivity meter (Delaunois, 1976). Organic matter (OM) was quantified by the method proposed by Anne (1945), based on the determination of the percentage of organic carbon in the soil; this method relies on the oxidation of organic carbon with potassium dichromate and titration of the solution with Mohr salt (0.2 N). The total limestone content was determined according to the method of Duchaufour (1970), based on the reaction of calcium carbonate with hydrochloric acid (HCl). The total porosity (P) was calculated from the apparent and true densities. The soil field capacity was determined using the software of Saxton et al. (1986), while the soil humidity (H) was calculated from the difference between the weights of wet and dry soil measured on a precision balance. Statistical analysis Data are displayed as mean ± standard deviation (SD). Comparisons of the physicochemical factors were tested for statistical significance by one-way ANOVA with Tukey's post hoc test. The relationship between the specific richness of terrestrial gastropods and the physicochemical characteristics of the soils was also examined using Pearson's correlation test. Statistical tests were performed with MINITAB software (version 16, Penn State College, PA, USA), with p < 0.05 considered significant. Results Gastropod species inventory The inventory of terrestrial gastropods carried out at the three selected sites reveals the presence of eight species belonging to four malacological families (Milacidae, Helicidae, Geomitridae and Achatinidae). Table 2 summarises the species inventoried in accordance with previously reported classification criteria (Bonnet et al., 1990; Chevallier, 1992; Germain, 1969).
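As flagged above, the following minimal sketch illustrates how the indices defined in the Data analysis section and the Pearson test from the Statistical analysis section could be computed; the actual analyses were run in MINITAB, and the counts and soil values below are purely illustrative, not the paper's raw data.

```python
import numpy as np
from scipy.stats import pearsonr

def constancy(Pa, P):
    """C (%) = 100 * Pa / P (Dajoz, 1985)."""
    return 100.0 * Pa / P

def relative_abundance(counts):
    """A_i (%) = 100 * n_i / N for each species."""
    counts = np.asarray(counts, dtype=float)
    return 100.0 * counts / counts.sum()

def shannon_and_equitability(counts):
    """Shannon-Weaver H' (bits) and equitability E = H' / log2(R)."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    H = -np.sum(p * np.log2(p))
    return H, H / np.log2(len(p))

# Illustrative counts for one hypothetical site:
counts = [210, 95, 40, 25, 12, 6, 4]
H, E = shannon_and_equitability(counts)

# Pearson correlation between specific richness and one soil factor;
# richness values follow Table 5, the OM values are hypothetical:
richness = [7, 6, 5]                  # El-Hadaiek, Ben-Azzouz, Azzaba
organic_matter = [4.1, 2.6, 2.2]      # hypothetical OM (%)
r, p_value = pearsonr(richness, organic_matter)
```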
The results show that each of the gastropod families Milacidae and Helicidae includes two species; Geomitridae includes three species, and Achatinidae includes one species. Flora inventory For the flora inventory of the study sites, lists of plant species were compiled (Table 3). Across all study areas, 12 species belonging to 12 families were identified. Furthermore, we observed no difference in plant species between the three sampling sites and no relationship between snail diversity and plant richness. Terrestrial gastropod structure and distribution in the study sites As shown in Table 4, the species Cornu aspersum presents its minimal and maximal abundance rates in El-Hadaiek (37.92%) and Ben-Azzouz (56.12%), respectively, with further maximal and minimal values of 25% in Ben-Azzouz and 2.92% in El-Hadaiek reported in Table 4. Slugs present a very low abundance (2.04%) in El-Hadaiek and zero abundance in Azzaba and Ben-Azzouz. In accordance with the categories of Dajoz (1985), the constancy values obtained (C%) show that the species Cornu aspersum, Cantareus apertus and Rumina decollata occur in 100% of the samples at the three selected sites. They are therefore considered constant species (C ≥ 50%). Similarly, Cernuella virgata was found to be a constant species in El-Hadaiek and Ben-Azzouz and an accessory species in Azzaba (25% < C < 50%), while the species Cochlicella barbara is constant in El-Hadaiek and accessory in Ben-Azzouz and Azzaba. Furthermore, the species Trochoidea elegans was found to be an accidental species in El-Hadaiek and Azzaba (C ≤ 25%) and an accessory species in Ben-Azzouz. Slugs are constant in El-Hadaiek and accidental in Azzaba and Ben-Azzouz. Biodiversity indices The specific richness is represented by seven, six and five gastropod species in El-Hadaiek, Ben-Azzouz and Azzaba, respectively (Table 5). The maximal values of the mean richness are 392 and 366.5 in Ben-Azzouz and Azzaba, respectively (Table 5). As indicated in Table 5, the Shannon-Weaver (H′) diversity index varies between 0.51 in Ben-Azzouz and 0.68 in El-Hadaiek. The equitability index (E) is a fundamental dimension of diversity enabling the comparison of population structure. The values of the equitability index vary between 0.28 and 0.35. Relationship between specific richness and soil physicochemical characteristics The correlation between specific richness and the physicochemical soil characteristics across all study sites was analysed (Table 7; a computational sketch of this analysis is given below). Specific richness is highly significantly and positively correlated with organic matter (R = 0.904, p < 0.001) and field capacity (R = 0.956, p < 0.01). In contrast, specific richness shows a highly significant negative correlation with permeability (R = −0.888, p < 0.001) and a significant negative correlation with porosity (R = −0.783, p < 0.05). Discussion This study investigated the abundance and diversity of terrestrial gastropod species at three sites located in Skikda province and examined the impact of soil physicochemical factors on snail diversity. The results revealed an important diversity of the malacological fauna in Skikda province, particularly in the city of El-Hadaiek, and differences in the ecological indices (constancy index, relative abundance, specific and mean richness, Shannon-Weaver and equitability indices) between the selected study sites.
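The Pearson correlation analysis reported above can be reproduced with standard tools; a minimal Python example (the per-sample values are hypothetical placeholders, assuming SciPy is available):

```python
from scipy.stats import pearsonr

# Hypothetical per-sample values across the three sites (not the study's data):
richness       = [7, 7, 7, 6, 6, 6, 5, 5, 5]
organic_matter = [13.7, 13.9, 13.5, 8.2, 8.0, 8.4, 6.1, 5.9, 6.3]        # OM, %
permeability   = [0.25, 0.27, 0.24, 0.80, 0.78, 0.82, 1.10, 1.05, 1.08]  # cm/h

for name, values in [("organic matter", organic_matter),
                     ("permeability", permeability)]:
    r, p = pearsonr(richness, values)
    print(f"richness vs {name}: R = {r:+.3f}, p = {p:.4f}")
```

With data shaped like the study's (richness rising with organic matter and falling with permeability), this reproduces the signs of the correlations reported above.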
Furthermore, the soil at the El-Hadaiek site was found to be characterised by high organic matter and field capacity and low porosity and permeability. The results suggest that these parameters are important in determining the richness of terrestrial gastropods in Skikda province, although we do not rule out that other environmental factors may also be important. The biodiversity and distribution of land snails depend on several factors, such as soil characteristics (André, 1982; Douafer & Soltani, 2014; Ondina et al., 1998), climatic factors (Ameur et al., 2019; Hermida et al., 1994), anthropogenic disturbances (Belhiouani et al., 2019) and vegetation (Damerdji, 2013; Damerdji & Amara, 2013; Ondina & Mato, 2001). Several studies have evidenced the need to protect mollusc biodiversity on a global scale (N'dri et al., 2016; Hallgass & Vannozzi, 2016; Nicolai & Ansart, 2017; Heiba et al., 2018; Desoky, 2018; Dedov et al., 2018; Borreda & Martinez-Orti, 2017). In Algeria, some inventories of terrestrial gastropods have recently been carried out in different biotopes (Bouaziz-Yahiatene & Medjdoub-Bensaad, 2016; Hamdi-Ourfella & Soltani, 2016; Ramdini et al., 2021). In the present study, specific richness was lower at the Ben-Azzouz and Azzaba sites than at El-Hadaiek, with seven, six and five species recorded at El-Hadaiek, Ben-Azzouz and Azzaba, respectively. Previous studies conducted in the north-eastern region of Algeria revealed 13 species of terrestrial pulmonate gastropods in El-Kala, El Hadjar and Sidi Kassi (Larba & Soltani, 2013). In addition, Helicidae was identified as the most abundant family at all three sites, with high percentages in Ben-Azzouz and Azzaba, in line with previous results on biodiversity in eastern Algeria (Douafer & Soltani, 2014). This dominance is explained by Chevallier (1992), who attributes it to the action of a cool and humid environment selecting for dark varieties. The abundance results show that Cornu aspersum is a common species at all the study sites (El-Hadaiek, Azzaba and Ben-Azzouz). Moreover, the constancy results indicate that the species Cornu aspersum, Cantareus apertus and Rumina decollata are constant at the three study sites (100%), with a significant biomass and a capacity to adapt to different climates and soils, whereas the other species show variable constancy across sites. In this regard, Damerdji (2008) reported three constant species and four accidental species in Tlemcen (north-western Algeria). The same author also reported (Damerdji & Amara, 2013) a specific richness of four species (two constant, one accessory and one accidental) in the region of Naâma (south-western Algeria). The Shannon-Weaver diversity index is lower in El-Hadaiek (0.68 bits) than that reported for the El-Kala site (3.05 bits) (Douafer & Soltani, 2014). Also, the equitability index varies between 0.35 and 0.28 (< 1), suggesting that the abundances of the different gastropod species are not in equilibrium with each other (Ramade, 1984). Similar results were obtained in the city of Tlemcen by Damerdji (2008), while the regions of El Hadjar, Sidi Kaci and El-Kala, located in north-eastern Algeria, present an equitability index above 0.50 (Larba & Soltani, 2013). This is probably related to differences in environmental variables between the sites.
The distribution and activity of land snails depend on several factors, such as soil characteristics (André, 1982; Gärderforns et al., 1995; Ondina et al., 1998), climatic factors (Hermida et al., 1994) and vegetation (Lewis Najev et al., 2020; Nekola, 2003; Ondina & Mato, 2001). With regard to mollusc nutrition, the flora inventory shows the proliferation of gastropods on all botanical species of the study sites. Several studies have shown a correlation between vegetation and mollusc distribution (Barker & Mayhill, 1999; Millar & Waite, 2002; Martin & Sommer, 2004). However, in this study, no relationship was found between the distribution of land snails and the dominant plant species at the three study sites. According to the work of Nunes and Santos (2012), conducted in the forests of Ilha (Brazil), this result could be explained by the homogeneity of the study area. Among the soil physicochemical factors measured at the three study sites, organic matter, field capacity, porosity and permeability influence snail diversity. Correlation analysis reveals that organic matter and field capacity are positively correlated with snail specific richness, whereas specific richness is negatively correlated with two soil physicochemical factors, permeability and porosity. Organic matter (OM) in the soil provides essential nutrients for plant growth, influences the soil's ability to retain moisture (Chapin et al., 2002) and can also positively affect the abundance of terrestrial gastropods. In this study, the OM content in the soils of the three sites is > 5%, so the soils are classified as very rich in organic matter (Abiven et al., 2009), particularly at the El-Hadaiek study area (13.75 ± 0.47%). On the other hand, the field capacity (the maximum volume of water that a soil can retain) is very high at El-Hadaiek (36.07 ± 4.01) compared with the other sites (Azzaba and Ben-Azzouz). This parameter is related to soil texture (amount of clay), which is the most important factor affecting the distribution of gastropods (Outeiro et al., 1993). The soils of the El-Hadaiek and Ben-Azzouz sites are characterised by a clayey-silt texture, while a silt-clay texture characterises the Azzaba site. The clayey-silt soils retain more water than silt-clay soils with a particulate structure. The porosity follows the granulometric nature of the soil; in clay soils (e.g., El-Hadaiek), the porosity value is on average ≤ 50%. At the other two sites (Azzaba and Ben-Azzouz), where there is less clay, the porosity is around 50%. The presence of clay clogs porous spaces and slows down the circulation of water in the soil. Water circulation (permeability) is slowed down in soils with low porosity, as at the El-Hadaiek site (0.25 ± 0.02 cm/h). When soil permeability decreases, the soil pores remain filled with water, resulting in higher humidity. Moisture is necessary for the respiration and reproduction of land snails (Coney et al., 1982) and for the production of mucus, which is essential for locomotion (Cameron, 2009). These results are similar to those obtained by Millar and Waite (2002), Martin and Sommer (2004), Tattersfield et al. (2006) and Horsák et al. (2007). Conclusion This paper presented the first inventory of gastropod molluscs in Skikda province, with the broader aim of supporting the protection of terrestrial ecosystems and the preservation of biodiversity. The study showed that this region has an important malacofauna, like other regions of north-eastern Algeria.
A total of eight species of terrestrial gastropods were reported from three different sites located in this region. The results show that the diversity and abundance of gastropods vary from site to site, due to different physicochemical soil characteristics, including field capacity, permeability, organic matter and porosity.
Dark Matter and Baryogenesis from Non-Abelian Gauged Lepton Number A simple model is constructed based on the gauge symmetry $SU(3)_c \times SU(2)_L \times U(1)_Y \times SU(2)_\ell$, with only the leptons transforming nontrivially under $SU(2)_\ell$. The extended symmetry is broken down to the Standard Model gauge group at TeV-scale energies. We show that this model provides a mechanism for baryogenesis via leptogenesis in which the lepton number asymmetry is generated by $SU(2)_\ell$ instantons. The theory also contains a dark matter candidate - the $SU(2)_\ell$ partner of the right-handed neutrino. Introduction The Standard Model of elementary particle physics provides an extremely accurate description of Nature at the most fundamental level. Despite its remarkable successes, it explains only 5% of the Universe, while the remaining 95% is attributed to the mysterious dark matter and dark energy. In addition, the Standard Model has its own shortcomings: the inability to generate the observed matter-antimatter asymmetry of the Universe, the hierarchy problem, massless neutrinos, the unknown origin of flavor, and many more. Although a plethora of models dealing with those issues have been constructed, it is still an open question which of those theories, if any, provides the correct, or at least partially correct, description of Nature at higher energies. We simply need more experimental data to find this out. In the meantime, further systematizing and rethinking of our model building efforts is definitely required. The Standard Model is based on the gauge group $SU(3)_c \times SU(2)_L \times U(1)_Y$ [2,3,4,5,6]. Apart from this local symmetry, it also has two accidental global symmetries: baryon number and lepton number. One might wonder whether those are just residual symmetries left over from the breaking of a more fundamental extended gauge symmetry. Efforts to gauge baryon and lepton number were made in the past [7,8,9,10,11,12], but only the models constructed recently [13,14,15,16] are experimentally viable. In theories of this type gauge coupling unification does not occur naturally, and so far only partial unification has been achieved [17,18]. Nevertheless, simple extensions of the Standard Model gauge group provide a good playground for testing various approaches to the dark matter and baryogenesis puzzles. In this talk, I will discuss one such extension, containing a dark matter candidate and offering a mechanism for producing a lepton asymmetry, which ultimately can explain the matter-antimatter asymmetry of the Universe. The model The theory we propose is based on the gauge group $SU(3)_c \times SU(2)_L \times U(1)_Y \times SU(2)_\ell$. Fermionic sector The Standard Model quarks are singlets under $SU(2)_\ell$, whereas the left-handed lepton doublet $l_L$ and the right-handed electron $e_R$ are the upper components of $SU(2)_\ell$ doublets, whose lower components are the new partner fields $\tilde{l}_L$ and $\tilde{e}_R$, respectively. To cancel the gauge anomalies involving $SU(2)_\ell$, one requires an extra $SU(2)_\ell$ doublet of Standard Model singlet fields. The remaining anomalies involving just the Standard Model gauge groups are canceled by introducing new $SU(2)_\ell$ singlet fields. The particle content of the model, along with the quantum numbers of the fields, is shown in Table 1. The Standard Model quarks are not included, since they transform trivially under $SU(2)_\ell$.
Higgs and gauge sector Although the breaking of the extended gauge group down to the Standard Model can be achieved with just one new $SU(2)_\ell$ doublet Higgs, for reasons discussed later we introduce two new Higgs fields $\tilde\Phi_{1,2}$ and assume that one of the vacuum expectation values (vevs) is much larger than the other, $v_{\ell 1} \gg v_{\ell 2}$. This can be easily engineered by choosing appropriate values for the parameters in the scalar potential, which takes the generic two-Higgs-doublet form
$$\tilde{V} = \tilde{m}_{11}^2\, \tilde\Phi_1^\dagger \tilde\Phi_1 + \tilde{m}_{22}^2\, \tilde\Phi_2^\dagger \tilde\Phi_2 - \big(\tilde{m}_{12}^2\, \tilde\Phi_1^\dagger \tilde\Phi_2 + {\rm h.c.}\big) + \tfrac{1}{2}\tilde\lambda_1 \big(\tilde\Phi_1^\dagger \tilde\Phi_1\big)^2 + \tfrac{1}{2}\tilde\lambda_2 \big(\tilde\Phi_2^\dagger \tilde\Phi_2\big)^2 + \tilde\lambda_3 \big(\tilde\Phi_1^\dagger \tilde\Phi_1\big)\big(\tilde\Phi_2^\dagger \tilde\Phi_2\big) + \tilde\lambda_4 \big(\tilde\Phi_1^\dagger \tilde\Phi_2\big)\big(\tilde\Phi_2^\dagger \tilde\Phi_1\big) + \Big[\tfrac{1}{2}\tilde\lambda_5 \big(\tilde\Phi_1^\dagger \tilde\Phi_2\big)^2 + \tilde\lambda_6 \big(\tilde\Phi_1^\dagger \tilde\Phi_1\big)\big(\tilde\Phi_1^\dagger \tilde\Phi_2\big) + \tilde\lambda_7 \big(\tilde\Phi_2^\dagger \tilde\Phi_2\big)\big(\tilde\Phi_1^\dagger \tilde\Phi_2\big) + {\rm h.c.}\Big],$$
where we neglected terms involving the Standard Model Higgs field. After $\tilde\Phi_{1,2}$ develop vevs, the $SU(2)_\ell$ gauge bosons acquire masses; they do not mix with the Standard Model electroweak gauge bosons. Particle masses The Yukawa part of the Lagrangian contains the matrices $Y_l$, $Y_e$, $Y_\nu$, $y_e$, $y_\nu$, $\tilde{y}_e$ and $\tilde{y}_\nu$, with flavor indices $a, b = 1, 2, 3$. After $SU(2)_\ell$ symmetry breaking, the Yukawa matrices $Y_l$, $Y_e$ and $Y_\nu$ lead to vector-like masses for all new fermions in the theory. The Yukawa matrices $y_e$ and $y_\nu$ produce the usual Standard Model lepton masses and, along with $\tilde{y}_e$ and $\tilde{y}_\nu$, also contribute to the lepton partner masses. The resulting fermionic mass matrix involves $v_\ell = \sqrt{v_{\ell 1}^2 + v_{\ell 2}^2}$ and the Standard Model Higgs vev $v$. The off-diagonal elements are due to the Yukawa terms involving the Standard Model Higgs and introduce mixing between the electroweak singlets and doublets. We assume $Y_{l,e,\nu}\, v_\ell \gg y_{e,\nu}\, v,\ \tilde{y}_{e,\nu}\, v$, which is a phenomenologically natural assumption and frees the model from electroweak precision data constraints. The mass eigenstates consist of six electrically neutral and six electrically charged states. As shown below, after $SU(2)_\ell$ breaking there remains a residual global $U(1)_\ell$ symmetry which prevents the new particles from decaying solely to Standard Model states. Therefore, if the lightest of the mass eigenstates is electrically neutral, it becomes a natural candidate for dark matter. This implies that the dark matter particle in the model is the $SU(2)_\ell$ partner of the right-handed neutrino, which after electroweak symmetry breaking receives a small admixture from the electroweak doublets, where $\tilde\nu_L$ and $\nu_R$ denote the upper components of the electroweak doublets $\tilde{l}_L$ and $l_R$, respectively. The Higgs spectrum of the theory is that of a generic two-Higgs-doublet model. There are five physical scalar/pseudoscalar fields remaining after $SU(2)_\ell$ breaking. They are mixtures of the original CP-even and CP-odd components of $\tilde\Phi_{1,2}$, and their masses depend on the choice of parameter values in the scalar potential above. Regarding the gauge sector, since there is no mixing between $SU(2)_\ell$ and the other gauge groups, after $SU(2)_\ell$ breaking the new vector gauge bosons develop equal masses, $M_{Z'_\ell} = \tfrac{1}{2}\, g_\ell\, v_\ell$, where $g_\ell$ is the $SU(2)_\ell$ gauge coupling. Global symmetries There exist two global symmetries of the Lagrangian. Only one of them, which we denote by $U(1)_\ell$, remains unbroken after $SU(2)_\ell$ breaking. Charges of the fields under this symmetry are provided in Table 2. Using the fact that the Yukawa couplings $y_\nu$ are tiny, to account for the smallness of the Standard Model neutrino masses, and assuming that $\tilde{y}_\nu$ are small as well, the $U(1)_\ell$ global symmetry is promoted to two global symmetries, $U(1)_L$ and $U(1)_\chi$, which separately survive the breaking of $SU(2)_\ell$. The charges under those global symmetries are also shown in Table 2. Note that the charge under $U(1)_L$ can be interpreted as Standard Model lepton number, whereas the $U(1)_\chi$ charge is the dark matter number. Baryogenesis We now discuss the details of the baryon number asymmetry generation in the model. It relies on the fact that a primordial lepton asymmetry is produced by $SU(2)_\ell$ instantons†.
The subsequent stages combine key features of several mechanisms: Dirac leptogenesis [20,21], asymmetric dark matter [22,23,24,25,26,27] and baryogenesis from an earlier phase transition [28]. † We note that a similar idea was presented in Ref. [19], which was brought to our attention after the completion of this work. $SU(2)_\ell$ instantons Because of the non-Abelian nature of $SU(2)_\ell$, the model exhibits nonperturbative dynamics in the form of $SU(2)_\ell$ instantons, which are active only above the $SU(2)_\ell$ breaking scale. The instantons preserve the global $U(1)_\ell$ symmetry, but they do not conserve the global $U(1)_L$ and $U(1)_\chi$ symmetries discussed in the previous section, since those symmetries are both anomalous under $SU(2)_\ell$ interactions. Following the calculation in Ref. [29], we find that the instantons induce dimension-six interaction terms built from the $SU(2)_\ell$ doublet leptons and their partners (Lorentz contractions are left implicit), where for simplicity we assumed just one generation of matter. The generalization to three families is straightforward. One of these operators, for example, generates two interaction terms, one of which gives rise to $\nu_L \tilde{e}_L \to \tilde\nu_R e_R$, shown in Fig. 1. For this process, as can be read off from Table 2, both the Standard Model lepton number and the dark matter number are violated by one unit: $\Delta L = -1$ and $\Delta\chi = 1$, respectively. Therefore, the first condition for successful leptogenesis, lepton number violation, is present in the model. CP violation and phase transition The remaining Sakharov conditions [30] require sufficient CP violation and out-of-equilibrium dynamics, which in our model can be realized via a first-order phase transition. The scalar potential contains four complex parameters: $\tilde{m}_{12}^2$, $\tilde\lambda_5$, $\tilde\lambda_6$ and $\tilde\lambda_7$. One phase can be rotated away by redefining the phase of $\tilde\Phi_1^\dagger \tilde\Phi_2$, leaving three physical phase combinations [31]. It is straightforward to show that for natural values of the parameters the amount of CP violation in the model meets the criteria for successful baryogenesis [1]. The last condition that needs to be checked is whether the model can actually accommodate a first-order phase transition in the early Universe. For this purpose we analyze the finite-temperature effective potential [32], under the simplifying assumption $v_{\ell 1} \gg v_{\ell 2}$,
$$V_{\rm eff}(\phi, T) \;=\; -\tfrac{1}{2}\tilde{m}^2 \phi^2 + \tfrac{1}{4}\tilde\lambda\, \phi^4 \;+\; \sum_i (\pm)\, \frac{n_i\, m_i^4(\phi)}{64\pi^2}\left[\ln\frac{m_i^2(\phi)}{\Lambda^2} - c_i\right] \;+\; \frac{T^4}{2\pi^2} \sum_i (\pm)\, n_i\, J_{B/F}\!\left(\frac{m_i^2(\phi)}{T^2}\right),$$
where the first two terms are the tree-level Higgs contribution, the next term is the one-loop zero-temperature Coleman-Weinberg correction, and the last term is the finite-temperature part, with the standard thermal functions $J_{B/F}(y^2) = \int_0^\infty dx\, x^2 \ln\big(1 \mp e^{-\sqrt{x^2 + y^2}}\big)$. The sum runs over all particles in the model, with the factors $n_i$ and the signs accounting for the number of degrees of freedom and the statistics. The plot of the effective potential is shown in Fig. 2 for a choice of parameters which sets the first-order phase transition at the critical temperature $T_c = 200$ GeV. We chose $v_\ell = 2$ TeV, just above the current limit $v_\ell \gtrsim 1.7$ TeV set by the LEP-II experiment [15], so that the condition $v_\ell(T_c)/T_c \gtrsim 1$ for a strongly first-order phase transition is fulfilled. Bubble nucleation and lepton asymmetry As the temperature of the Universe decreases and drops to $T_c$, bubbles of true vacuum start forming and expanding, eventually filling the entire Universe. A bubble expansion is schematically shown in Fig. 3. Outside the bubble the $SU(2)_\ell$ symmetry is not broken and the $SU(2)_\ell$ instantons remain active. Inside the bubble, on the other hand, $SU(2)_\ell$ is broken and the instanton effects are exponentially suppressed.
As the bubble expands in the presence of CP violation, part of the lepton asymmetry generated by the instantons just outside the bubble becomes trapped inside the bubble. The same is true of the dark matter asymmetry. Although $SU(2)_\ell$ instantons are not active inside the bubble, one might worry whether the Standard Model lepton and dark matter asymmetries will be washed out by the Yukawa interactions involving $y_\nu$ and $\tilde{y}_\nu$, since they explicitly violate $U(1)_L$ and $U(1)_\chi$ while conserving only their sum $U(1)_\ell$. This, however, is not an issue, since the small values of $y_\nu$ and $\tilde{y}_\nu$ imply that the right-handed neutrinos and their partners reach chemical equilibrium long after the $SU(2)_\ell$ phase transition. As a result, the Standard Model lepton and dark matter number asymmetries survive until the electroweak phase transition, with just the lepton asymmetry being partially converted into a baryon asymmetry by the electroweak sphalerons, as discussed in the subsequent subsection. The process of accumulation of the Standard Model lepton and dark matter asymmetries outside the expanding bubble is described by a system of diffusion equations [33,34], in which $n_i$ denotes the number density of a given particle species, $D_i$ is the diffusion constant, $\Gamma_{ij}$ is the diffusion rate, $k_j$ is the number of degrees of freedom times a factor arising from statistics, and $\gamma_i$ is the CP-violating source [35]. In our model there is a set of twelve diffusion equations and eight constraints coming from the Yukawa and instanton interactions [1]. The solution to this set of diffusion equations is shown in Fig. 4, which plots the Standard Model lepton and dark matter particle number densities, normalized to entropy, as a function of the distance from the bubble wall located at $z = 0$, where $z < 0$ corresponds to the outside of the bubble and $z > 0$ to the inside. The ratio of the generated Standard Model lepton and dark matter asymmetries is fixed in our model and is independent of the model parameters, and there exists a natural and experimentally allowed choice of parameters for which the asymmetries come out as required; Fig. 4 shows the Standard Model lepton and dark matter particle number densities vs. distance from the bubble wall for such a set of natural parameter values. Baryon asymmetry As mentioned earlier, the $SU(2)_\ell$ instantons become inactive after $SU(2)_\ell$ breaking, and the dark matter number freezes in inside the bubble. This is not exactly the case for the Standard Model lepton asymmetry, since the Standard Model electroweak sphalerons remain active until the electroweak phase transition and convert part of the lepton number into baryon number. The resulting baryon asymmetry generated by the sphalerons is [36] $\Delta B = \tfrac{28}{79}\, \Delta L$, and the final baryon asymmetry to entropy ratio agrees with the observed value. Dark matter The dark matter candidate in our model is composed mostly of the $SU(2)_\ell$ doublet partner of the right-handed neutrino. It is therefore predominantly a Standard Model singlet, with only a small admixture of an electroweak doublet picked up through its interactions with the Standard Model Higgs field. Because the dark matter and baryon number asymmetries in our model are closely related (approximately equal at present time), the dark matter mass is uniquely determined by the observable ratio of the dark matter and baryonic relic densities.
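How this determination works can be sketched with the standard asymmetric-dark-matter relation; the numerical coefficient below assumes $\Omega_{\rm DM}/\Omega_B \approx 5.4$ and the proton mass $m_p$, and is quoted for illustration rather than as the model's exact expression:

$$ m_\chi \;\simeq\; \frac{\Omega_{\rm DM}}{\Omega_B}\,\frac{\Delta B}{\Delta\chi}\, m_p \;\approx\; 5.4\,\frac{\Delta B}{\Delta\chi}\ {\rm GeV}. $$

For baryon and dark matter asymmetries of comparable size, this relation points to a dark matter mass of a few GeV, consistent with the estimate quoted below.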
Assuming the dark matter is relativistic at the decoupling temperature, its mass comes out to be a few GeV, in line with the estimate above. A dark matter mass of a few GeV is generic in asymmetric dark matter models, and it is generally challenging to make the symmetric component annihilate away efficiently. In the current model this issue is circumvented by arranging one of the Higgs components to be lighter than 5 GeV. In such a scenario a successful annihilation can proceed through the channels shown in Fig. 5. A light Higgs component can be realized provided that the quartic terms in the scalar potential are small, which in the case of $\tilde\lambda_1$ is also needed for a first-order phase transition. This mass range for a new scalar/pseudoscalar is not strongly constrained by low-energy experiments, but may be accessible in the future [37]. Since the new $Z'_\ell$ gauge bosons do not interact with quarks, there are no tree-level direct detection diagrams involving the Standard Model singlet component of the dark matter. As a result, the direct detection constraint in this case can only arise from loop processes, but the GeV-scale dark matter limits coming from the CDMSlite experiment [38] are much less restrictive than the LEP-II constraint of $v_\ell \gtrsim 1.7$ TeV, which we already took into account. Regarding the contribution of the direct detection diagrams involving the electroweak gauge bosons, it can be estimated using the results of Ref. [18] and is fully consistent with experiment in the phenomenologically natural limit $Y_\nu\, v_\ell \gg y_\nu\, v,\ \tilde{y}_\nu\, v$ we adopted. Conclusions In this talk, I discussed a new model extending the Standard Model gauge group with a non-Abelian gauged lepton number $SU(2)_\ell$. The model realizes a mechanism for baryogenesis based on leptogenesis in which the lepton number asymmetry is generated by $SU(2)_\ell$ instantons. It also contains a natural dark matter candidate - the partner of the right-handed neutrino. Despite its theoretical advantages, it is difficult to test this theory experimentally. Since the new physics resides in the lepton sector of the model, the best way to probe it would be at a new high-energy lepton-lepton collider. Let me end by saying that there is no reason not to expect the Standard Model gauge symmetry to be enhanced at higher energies. Analyzing other simple theories with extended gauge groups seems like a worthwhile effort, as it may shed more light on the outstanding issues of the Standard Model. However, one should always keep in mind that "Nature will do what Nature does and it's up to experiment to be the final judge"‡.
Effect of protein and carbohydrate solutions on running performance and cognitive function in female recreational runners This study compared the effects of a carbohydrate-electrolyte-protein solution (CEPS, 2% protein plus 4% carbohydrate), a carbohydrate-electrolyte solution (CES, 6% carbohydrate), and a noncaloric sweetened placebo (PLA) on both 21-km running performance and cognitive function. Eleven female recreational endurance runners performed a 21-km time-trial run on three occasions, separated by at least 28 days. In a randomized cross-over design, they ingested CEPS, CES, or PLA at a rate of 150 mL every 2.5 km, with no time feedback. A cognitive function test was performed before and after the run. Participants ingested approximately 24 g/h carbohydrate plus 12 g/h protein in the CEPS trial, and 36 g/h carbohydrate in the CES trial, during each 21-km trial. Time to complete the time-trial was slightly shorter (P < 0.05) during CES (129.6 ± 8.8 min) than PLA (134.6 ± 11.5 min), with no differences between CEPS and the other two trials. The CEPS trial showed a higher visual motor speed composite than the PLA trial (P < 0.05). In conclusion, CES feedings might improve 21-km time-trial performance in female recreational runners compared with a PLA. However, adding protein to the CES provided no additional time-trial performance benefit. CEPS feeding during prolonged exercise could benefit visual motor speed compared with PLA alone, but no differences were found in the performance of the other cognitive function tests. Introduction The preponderance of research on carbohydrate-electrolyte solution (CES) consumption during endurance exercise has shown that exercise performance is improved and fatigue is delayed compared with a noncaloric placebo (PLA) or water [1][2][3], likely via maintenance of euglycemia and a high rate of carbohydrate (CHO) oxidation [4,5]. Because there appears to be an upper limit to exogenous CHO oxidation, mediated by absorption mechanisms [4], it has been hypothesized that the addition of other macronutrients to a CHO drink can further improve performance. Recently, the addition of protein (PRO) to the CES (CHO-electrolyte-PRO solution, CEPS) has been suggested to further improve exercise capacity compared with the CES alone [3,6,7]. These studies utilized cycling time to exhaustion (TTE), which, although a frequent measure of performance, has shown poor reproducibility [8]. In contrast, exercise protocols that require a set amount of work to be completed as quickly as possible (i.e., a time-trial) or involve the accomplishment of the greatest amount of work in a set period of time are closer to the competitive task and are more reproducible. Several other studies have also assessed the impact of CEPS on time-trial performance, yet no study reported additional improvement in time-trial performance when PRO was added to the CES [8][9][10]. However, the CEPS trial contained about 20% to 37% more calories than the CES-only trial in these studies [8][9][10]. While a generic effect of adding calories during prolonged exercise has been widely acknowledged, additional calories can slow down gastric emptying and could thus put participants at greater risk of gastrointestinal distress, wherein effort may be diminished [11]. Furthermore, participants received a high rate of CHO feeding (60 g/h) during exercise in this research [8,10].
It has been suggested that when CHO is ingested at levels that approach the maximal rate of exogenous glucose oxidation of approximately 60-70 g/h [5], the addition of PRO to a CES does not further enhance performance [10]. Thus, it is necessary to investigate the effect of CEPS consumption on exercise performance when the total CHO and energy intake during exercise is not so high. This would be particularly beneficial for athletes and individuals who are concerned about caloric intake during training or potential gastrointestinal problems. Theoretically, the addition of PRO to the CES may facilitate faster fuel transport across the lining of the intestine [12] and result in a greater insulin response [3], which may benefit endurance performance. Additionally, matching energy content between trials makes it easier to evaluate whether the combination of exogenous CHO and PRO provides more benefit to endurance exercise than CHO ingestion alone. Sex-based differences in substrate metabolism and feeding tolerance during endurance exercise are well established. Specifically, women generally oxidize lower proportions of CHO than men, but experience more gastrointestinal symptoms during a bout of endurance exercise [13,14]. Sex also influences PRO oxidation during exercise, particularly of the branched-chain amino acid leucine. In comparison with men, premenopausal women oxidize less leucine during endurance exercise [15]. Furthermore, non-oxidative leucine disposal (reflective of whole-body protein synthesis) during endurance exercise is greater in women than in men [15]. Taken together, the findings from sex-comparative studies show that women rely to a lesser extent on CHO and PRO sources during endurance exercise. In addition, women in the luteal phase have a lesser reliance on CHO sources to fuel endurance exercise compared with women in the follicular phase [13], reinforcing the importance of standardizing menstrual status in such studies. Despite the well-recognized sexual dimorphisms in physiologic responses during exercise, there is a paucity of research examining the effects of CEPS aimed at enhancing performance in women. There is also mounting evidence that CHO feedings can ameliorate the cognitive dysfunction that occurs following exercise [16][17][18]. The term "cognitive function" describes the performance of objective tasks that require conscious mental effort [19]. Such tasks require (verbal, spatial, and working) memory, attention, and executive control [20]. In many sports, participants have to simultaneously perform physically demanding mechanical work and a decisional or perceptual task. Altered cognitive function may affect mood, motivation, the processing of incoming somatosensory information, perceived exertion, and the excitability of the motor cortex, and thereby impair voluntary performance during exercise [21][22][23]. The reduction in cognitive function following prolonged exercise is associated with certain nutritional and metabolic responses, i.e., low blood glucose and high free fatty acid concentrations [24,25]. CHO supplementation could potentially enhance cognitive performance by increasing cerebral glucose uptake and oxygen consumption [26], reducing ammonia production [24], and limiting the transport of tryptophan into the brain and, presumably, 5-hydroxytryptamine synthesis [25]. However, there are also studies showing that CHO ingestion did not improve cognitive function during endurance exercise [27,28].
These discrepancies may arise from the use of different exercise protocols and/or CHO consumption protocols. Regardless, further study is warranted to better clarify the role of CHO in cognitive function. A consistent beneficial effect of PRO supplementation on cognitive function has been observed across different populations under resting conditions [29,30]. However, to date, the data examining the effect of PRO or CEPS ingestion during prolonged exercise on cognitive function are still limited. This is unfortunate because endurance performance may be affected by decreased cognitive function when athletes are tired. Therefore, the purpose of this study was to investigate whether a CEPS would improve running performance and cognitive function, as compared with a CES and a PLA, in female recreational marathon runners. We hypothesized that CEPS feeding would improve endurance performance and attenuate the expected reductions in cognitive performance induced by prolonged exercise, compared with CES-only drinks and a PLA. Materials and methods Participants Eleven female recreational runners (age: 32.4 ± 6.7 years, body mass index: 21.0 ± 2.1 kg/m², and maximal oxygen uptake (VO2max): 49.0 ± 6.6 mL/kg/min; mean ± SD) volunteered to participate in this study. Inclusion criteria included a self-reported weekly running frequency > 3 days/week over the preceding 2 months, and all participants had at least one marathon race experience. The purpose and potential risks of the experiment were explained to them before participation. They all signed a written consent form and completed a menstrual cycle questionnaire to determine the length of their menstrual cycle. The present study was approved by the Ethics Committee of The Education University of Hong Kong. Study design All participants performed three main experimental trials in a randomized, double-blinded, counterbalanced manner, at intervals of at least 28 days. In each main trial, participants consumed one of three solutions, namely the CES (6% CHO), CEPS (4% CHO + 2% PRO), or PLA, at a rate of 150 mL every 2.5 km during a 21-km time-trial run. Running performance and cognitive function were recorded for each run. Preliminary tests Before the main experimental trials, all participants reported to the laboratory for the assessment of their VO2max and for familiarization with the exercise protocol. They completed a series of preliminary tests to determine: 1) the relationship between oxygen uptake (VO2) and submaximal running speed, through a 16-min incremental test on a level treadmill, and 2) the VO2max, through an uphill incremental treadmill running test to volitional exhaustion, as described by Williams et al [31]. On the basis of these two preliminary test results, a running speed equivalent to 70% of each participant's VO2max was determined (a computational sketch of this interpolation is given below). Thereafter, participants completed a familiarization trial to confirm the running speed equivalent to 70% of the individual VO2max. This speed was used in the first 5 km of the 21-km main trials. All participants were also instructed to complete three cognitive function tests to become familiar with the test battery and to minimize the learning effect before the main trials. Physical activity, nutrition, and menstrual status control All participants were instructed to maintain their regular aerobic exercise for the duration of the experiment in order to minimize variance in physical condition among trials.
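As an illustration of this interpolation, here is a minimal Python sketch assuming an approximately linear VO2-speed relationship over the submaximal stages (the stage data and VO2max value below are hypothetical, not individual participant data):

```python
import numpy as np

# Hypothetical submaximal stage data from a 16-min incremental test:
speeds = np.array([8.0, 9.5, 11.0, 12.5])    # treadmill speed, km/h
vo2    = np.array([28.0, 33.5, 39.0, 44.5])  # measured VO2, mL/kg/min

vo2max = 49.0                 # from the uphill test (group mean in this study)
target = 0.70 * vo2max        # VO2 corresponding to 70% VO2max

# Fit the linear VO2-speed relationship, then invert it for the target speed.
slope, intercept = np.polyfit(speeds, vo2, 1)
speed_70 = (target - intercept) / slope
print(f"Speed at 70% VO2max: {speed_70:.1f} km/h")
```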
For 48 h before each main experimental trial, the participants were asked not to perform strenuous or unaccustomed physical activities. Participants were also instructed to record their food and drink consumption for 48 h before the first main trial and to repeat the same diet before each subsequent trial, as well as to refrain from alcohol or caffeine consumption 24 h before each main trial. In addition, all participants completed the three main experimental trials at intervals of at least 28 days to standardize the menstrual cycle phase; the trials were usually finished within 10 days after menses ended. To offset any potential time-of-day effects, each experiment was conducted at the same time of day (e.g., 9:00 am, 12:00 pm, or 2:00 pm). The participants were instructed to consume at least 500 mL of water 2 h before arriving at the laboratory to ensure that they were normally hydrated before the main trials. A constant temperature (22 °C) and relative humidity (60%) were maintained throughout the experiment by a thermostat. Experimental trials Upon arriving at the laboratory, participants assumed a seated position and rested for 30 min. After baseline data were collected, a standardized 5-min warm-up was performed at 6 km/h. Then, the speed of the treadmill was immediately increased to the intensity of 70% of the individual VO2max until participants completed the first 5 km. Thereafter, the participants ran at whatever speed they wished for the remaining 16 km of the performance run [32]. They could freely alter the speed of the treadmill at any time throughout the trials by using two buttons on the treadmill. To ensure maximal effort during the run, all participants received constant verbal encouragement. The only feedback that participants received during the time-trial was the distance covered, which was displayed in the corner of the treadmill screen. Every 2.5 km throughout the run, 150 mL of one of the three solutions was randomly provided to the participants in an opaque cup covered by a lid, and participants commenced drinking from the onset of the time-trial. Both the participants and the investigator were blind to the contents of the solutions. The three solutions were formulated according to Aquarius (Coca-Cola, HK); they contained the same electrolyte profile and were similarly flavored. The only difference among the three beverages was that the CES contained 6% CHO in the form of sucrose, the CEPS contained 4% CHO plus 2% whey PRO (bcshop, HK), and the PLA was a noncaloric, artificially sweetened solution. The total energy was matched between the CES and CEPS (Table 1). In the CES trial, the CHO ingestion rate was ~36 g/h, whereas in the CEPS trial, the CHO and PRO ingestion rates were ~24 g/h and ~12 g/h, respectively (a back-of-envelope check of these rates is sketched below). Data collection and sample analysis Data collection procedures are illustrated in Fig 1. Body mass (in underwear only) was measured to the nearest 0.1 kg before and after exercise using a weighing machine (Body Weight Precisa, DPS-Promatic, Forli, Italy). The VO2, carbon dioxide production (VCO2), and respiratory exchange ratio (RER) were measured every 5 km throughout the exercise protocol using a metabolic cart system (Cortex Metalyzer II-R, CORTEX, Germany). The heart rate (HR) was continuously recorded during the run using a Polar HR monitor (Polar Team System, Polar Electro, Finland). Subjective measures, namely the rating of perceived exertion (RPE), perceived thirst (PT), and abdominal discomfort (AD), were recorded just before each gas collection.
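The back-of-envelope check of the quoted ingestion rates (a sketch; the run duration is taken as the ~130-min study average, and the w/v concentrations are those of the three solutions):

```python
# Feeding schedule: 150 mL every 2.5 km over a ~21-km, ~130-min run.
ml_per_feed, km_per_feed = 150, 2.5
run_km, run_min = 21.0, 130.0

feeds_per_hour = (run_km / km_per_feed) / (run_min / 60.0)
ml_per_hour = ml_per_feed * feeds_per_hour   # ~580 mL/h, i.e. roughly 600 mL/h

def grams_per_hour(concentration_pct: float) -> float:
    # A w/v concentration of c% supplies c grams per 100 mL of solution.
    return ml_per_hour * concentration_pct / 100.0

print(f"CES  CHO: {grams_per_hour(6):.0f} g/h")                      # ~35 g/h
print(f"CEPS CHO: {grams_per_hour(4):.0f} g/h, "
      f"PRO: {grams_per_hour(2):.0f} g/h")                           # ~23 + ~12 g/h
```

Rounding the fluid rate up to the 600 mL/h figure used in the Discussion reproduces the ~36 g/h (CES) and ~24 g/h plus ~12 g/h (CEPS) rates stated above.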
The RPE was measured using the Borg scale ranging from 6 to 20 [33], and the PT and AD varied from 0 to 10, where 0 denoted "not so much" and 10 denoted "very much" [32]. Capillary blood samples were collected to determine the blood lactate and glucose levels using a YSI 1500 (Yellow Spring Instrument Co. Ltd., USA) and a biochemical analyzer (Roche ACCU-CHEK Reflotron Plus, USA), respectively. Urine samples were collected before and after each main experimental trial to measure the urine specific gravity (USG; Atago UG-alpha, Atago Co. Ltd., Tokyo, Japan). Cognitive function tests A battery of cognitive function tests (imPACT Package, imPACT Application, Inc., Australia) [34] was used to provide information on various cognitive parameters before and after exercise. Completion of the entire battery required approximately 20 min. The imPACT battery comprises six tasks (word memory learning, design memory learning, Xs and Os, symbol match, color match, and three letters), which yield the following composite scores: verbal memory, visual memory, visual motor speed, reaction time, and impulse control. The cognitive efficiency index measured the interaction between accuracy (percentage correct) and speed (reaction time in seconds) in the symbol match test. A high score indicated that the participant had done well in both the speed and memory domains of the symbol match test; a low score indicated poor performance in both the speed and accuracy components. In addition, the symptom section included 22 symptoms, such as headache, sadness, and fatigue. The symptom score summarized the participant's self-reported symptom data, with a high score reflecting a higher symptom total. All of the tasks were fully supervised, and brief onscreen instructions were provided. Responses were recorded on the computer. Statistical analysis Statistical analysis was performed using SPSS software (SPSS 21.0, IBM, USA). Variables that consisted of a single measurement per trial were analyzed using a one-way (trial) repeated-measures analysis of variance (ANOVA). Variables that included multiple measures per trial were examined using a two-way (trial × time) ANOVA. When a significant main effect or interaction was identified, data were subsequently analyzed using a Bonferroni post hoc test. The significance level was set at P < 0.05. Data are presented as mean ± SD. The omnibus effect size (ES) across the three trials was partial eta squared (η²), calculated as partial η²_trial = SS_trial / (SS_trial + SS_error), where SS denotes the sum of squares. The ES between any two of the three trials was Cohen's d, calculated as d = [(mean of experimental group) − (mean of control group)] / pooled standard deviation [35]. Cohen defines partial η² values of 0.01, 0.06, and 0.14 as small, medium, and large effects, and d thresholds of 0.2, 0.5, and 0.8 for small, medium, and large effects, respectively [35]. (A computational sketch of these effect-size definitions is given below.) Exercise performance Average time to complete the 21-km time-trial was 132.4 ± 11.5 min, 129.6 ± 8.8 min, and 134.6 ± 11.5 min for the CEPS, CES, and PLA trials, respectively. Time to complete the trial was approximately 3.7% shorter in the CES trial than in the PLA trial (P < 0.05), but no differences were observed between the CEPS and the other two trials (P > 0.05).
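A minimal computational sketch of the effect-size definitions above (Python, with hypothetical data; the pooled standard deviation here is taken as the root mean of the two sample variances, one common convention for Cohen's d):

```python
import numpy as np

def partial_eta_squared(ss_trial: float, ss_error: float) -> float:
    """Partial eta squared: SS_trial / (SS_trial + SS_error)."""
    return ss_trial / (ss_trial + ss_error)

def cohens_d(experimental, control):
    """Cohen's d: difference in means divided by a pooled SD."""
    x, y = np.asarray(experimental, float), np.asarray(control, float)
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    return (x.mean() - y.mean()) / pooled_sd

# Hypothetical time-trial times (min), not the study's raw values:
ces = [128, 131, 127, 133, 129]
pla = [135, 133, 138, 132, 136]
print(f"d = {cohens_d(ces, pla):.2f}")  # negative d: CES times are shorter
```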
The omnibus ES of the three trials on endurance performance was large (partial η² = 0.34), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.27, 0.19, and 0.49, respectively. Compared with the CEPS and PLA trials, 7 or 8 of the 11 participants performed better in the CES trial. In addition, 9 of the 11 participants posted faster times in the CEPS trial than in the PLA trial (Fig 2). Summary data of the physiological measures assessed after ingesting either the CEPS, CES, or PLA during the 21-km run are presented in Table 2. Blood glucose was significantly higher in the CES trial than in the PLA trial (P < 0.05), but no differences were found between the CEPS and the other two trials (Table 2). Table 3 shows the data for the subjective estimates. There were no differences in RPE among the three trials (partial η² = 0.14), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.31, 0.06, and 0.37, respectively. No significant differences in the visual motor speed composite were found among the three trials in the pre-exercise test, but visual motor speed was higher in the CEPS trial than in the PLA trial (P < 0.05). The omnibus ES of the trials on visual motor speed was large (partial η² = 0.37), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.11, 0.63, and 0.50, respectively. The reaction time was not influenced by the ingested solutions (P > 0.05). The omnibus ES of the trials on reaction time was large (partial η² = 0.15), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.09, 0.56, and 0.52, respectively. For impulse control (errors), no differences were observed among the three conditions (P > 0.05). The omnibus ES of the trials on impulse control was medium (partial η² = 0.06), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.28, 0.44, and 0.11, respectively. Regarding the total symptom score, the solutions did not show any significant effect (P > 0.05). The omnibus ES of the trials on the total symptom score was small to medium (partial η² = 0.04), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.55, 0.32, and 0.17, respectively. Similarly, the cognitive efficiency index showed no significant variation during exercise among the three trials (P > 0.05). The omnibus ES of the trials on the cognitive efficiency index was large (partial η² = 0.29), and the ES for the CEPS vs. CES, CEPS vs. PLA, and CES vs. PLA comparisons was 0.46, 0.94, and 0.51, respectively. Discussion This study compared the effects of three different solutions (namely the CES, CEPS, and PLA) consumed during a 21-km run on exercise performance and cognitive function in female recreational runners. We observed that, compared with the PLA feedings, ingesting the CES improved 21-km time-trial performance. However, the addition of PRO to the CES did not further enhance endurance performance. CEPS feeding enhanced visual motor speed compared with the PLA, but no differences were found in the performance of the other cognitive function tests. It is widely accepted that CES ingestion improves endurance performance [1,2]. This was also the case in our experiment, with the CES improving time-trial performance by 3.7% over the noncaloric PLA. As summarized by Coyle and Jeukendrup [4,5], this effect was most likely due to the maintenance of blood glucose or a higher rate of CHO oxidation during exercise.
In the present study, the blood glucose concentration in the PLA treatment was lower than that in the CES treatment during exercise. Even though the CEPS treatment contained calories from both PRO and CHO, performance during this treatment improved by 1.7% but was not statistically different from the PLA. It should be noted that the participants were recreational runners with infrequent exposure to high-intensity exercise, so a greater variability in fitness level provides a partial explanation for the lack of a significant benefit [36]. In addition, 9 of the 11 participants ran faster in the CEPS trial than in the PLA trial (Fig 2), suggesting that CEPS ingestion might assist endurance performance. Three previous studies have suggested that ingestion of a CEPS during prolonged exercise extends TTE compared with a CES only [3,6,7]. However, the practical application of these works is constrained by the fact that endurance athletes do not typically compete in events that require sustaining a fixed power output for as long as possible. The time-trial protocol, in which athletes are required to complete a certain amount of work in the shortest time possible, has been shown to be more reliable and reproducible for research [8]. Additionally, in these studies the CEPS contained approximately 20%-25% more calories than the CES supplement [3,6,7]. Therefore, the beneficial effect of the CEPS versus the CES-only supplements could be attributed to the higher caloric content of the CEPS rather than to a PRO-specific physiological mechanism per se. Regarding time-trial performance, our results were consistent with those of studies that reported no improvement in performance when PRO was added to the CES [8][9][10]. The experimental solutions in these three studies were matched for total CHO but not for total caloric content; approximately 20% to 37% more calories were contained in the CEPS treatment than in their CES-only treatment. Despite this, the cyclists and runners did not perform better with the addition of PRO. Therefore, these studies suggest that the co-ingestion of CHO and PRO during endurance exercise does not seem to further improve time-trial performance. Jeukendrup suggested that, to be ergogenic for performance, the optimal intake of CHO is approximately 60-70 g/h [5], and a convenient way for athletes to satisfy their hydration needs is to drink 600-1400 mL/h of solution [10]. However, in our present study, participants received only 600 mL/h of fluid during the exercise bout, and CHO was provided at a rate of around 24 g/h in the CEPS trial and around 36 g/h in the CES trial. The lower rates of fluid and CHO intake could explain in part the failure of the CEPS to improve time-trial performance. It should be noted that most studies that have examined the effects of CEPS on performance included male participants only [3,6,10]. Very few data exist on how women respond to such beverages. Compared with men, women rely to a lesser extent on CHO and PRO sources to fuel a bout of endurance exercise [13,15], which suggests that females may respond differently to a given nutritional regimen. In the present study, one limitation was that we did not calculate CHO and fat oxidation, because the indirect calorimetry method is not appropriate for the calculation of whole-body substrate utilization when PRO is ingested. It has been suggested that trained participants may be more suitable for physical performance tests than untrained participants [36].
Therefore, another limitation of the present study was that only recreational female runners were recruited. In summary, the results of the present study indicate that CEPS ingestion during endurance exercise may not further improve time-trial performance in females compared with iso-caloric CES consumption. In the future, research should be conducted using trained females as participants, with more sophisticated measures of substrate utilization applied to determine the mechanism. This study was also designed to determine the impact of CES or CEPS feedings during endurance running on the performance of cognitive function. This has generally not been studied under these conditions but could clearly affect endurance performance. The mechanisms by which CES feeding affects cognitive function have yet to be elucidated, but they are thought to involve alterations in the cerebral availability of glucose [26], the balance of neurotransmitters such as serotonin and dopamine [25], and/or ammonia production [24]. Previous research examining the role of CES feedings in maintaining cognition during exercise has led to mixed results. Two studies [16,18] and our laboratory [17] have evaluated the effects of CES during prolonged running or team-sport exercise and found improved cognitive performance in the CES trial compared with a PLA. Other studies, however, have indicated that there is no benefit of CES to cognitive performance during running and walking exercise compared with the PLA [27,28], and our results were consistent with this research. Although the PLA trial showed a lower blood glucose concentration, the glucose level in the PLA was still relatively high, with a small difference between the PLA and CES treatments (PLA vs. CES: 4.45 ± 1.01 vs. 5.53 ± 1.05 mmol/L). This could explain the failure to observe differences in cognitive function between the CES and PLA trials. Several studies have also demonstrated a positive effect of PRO feedings on cognitive function across different populations in controlled settings, but without exercise [29,30]. To our knowledge, the present study may be the first to explore the cognitive effects of co-ingestion of PRO and CES during endurance exercise. This is important because common occurrences of cognitive dysfunction in athletes during exercise may determine the outcome of a competition. Our data suggest that adding a small amount of PRO to the CES during endurance exercise enhances visual motor speed relative to the PLA. We speculate that branched-chain amino acids (BCAA) from the PRO might attenuate central fatigue in this trial, because Blomstrand et al. [37,38] reported improved mental performance with BCAA feeding. In summary, our results suggest that CES feeding in the form of a 6% CHO solution during running exercise can improve 21-km time-trial performance in female recreational runners compared with a noncaloric PLA solution. Nevertheless, adding 2% PRO to a 4% CHO drink showed no additional benefit for endurance performance. The CEPS treatment provided a further benefit for visual motor speed compared with PLA ingestion alone. Supporting information S1 Data. The participants' data for exercise performance and physical measures. (XLSX)